Temimi and Liu Awarded $870,614 United States Geological Survey Grant to Develop System for Faster, Safer Streamflow Monitoring of American Riverways
The civil engineering researchers will apply expertise in remote sensing, computer vision, machine learning and AI to leverage an existing network of river cameras across the U.S.
With more than 250,000 rivers, the United States contains a total of 3.5 million miles of riverways. Yet current methods for measuring the streamflow of these riverways do not take advantage of an existing network of river cameras that could help strengthen operational streamflow monitoring.
Streamflow, also known as discharge, is the speed and volume of water flowing through a particular location over a given period of time. Important for flood forecasting, water quality assessment, irrigation, dam management and recreational safety, streamflow is affected by a variety of changes and phenomena, including precipitation and drought, snowmelt, obstruction, sediment and erosion.
The standard method for observing streamflow in the U.S. begins with either an automated electronic water gauge or a simple ruler (called a staff gauge) mounted along the edge of a river, which must be regularly read and recorded to determine water level. This measurement is then converted to streamflow using site-specific plots called rating curves, which describe the relationship between a river’s water level and its streamflow at a given site.
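Rating curves are often fit as a power law of the form Q = C(h − h₀)^b, where h is the measured water level (stage) and h₀ is the stage at which flow ceases. A minimal sketch of the conversion, with purely illustrative coefficients (real curves are fit to repeated site-specific field measurements):

```python
def rating_curve_discharge(stage_m, C=4.5, h0=0.3, b=1.8):
    """Estimate discharge (m^3/s) from water level (stage, in meters)
    using a power-law rating curve: Q = C * (h - h0)**b.

    C, h0 and b are site-specific and must be fit to field measurements;
    the default values here are hypothetical, for illustration only."""
    if stage_m <= h0:
        return 0.0  # at or below the stage of zero flow
    return C * (stage_m - h0) ** b

# e.g. rating_curve_discharge(1.3) -> 4.5 m^3/s with these coefficients
```

Because the curve is fit empirically, anything that changes the stage–discharge relationship at a site (ice, debris, channel erosion) invalidates the conversion until the curve is corrected, which is exactly the problem the camera-based system targets.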
Approximately 500 United States Geological Survey (USGS) stations throughout the U.S., however, are equipped with cameras that take still and video images of waterways at regular intervals, then broadcast those images to the internet. Networked as part of the USGS national Hydrologic Imagery Visualization and Information System (HIVIS), these images contain valuable information about river hydraulic conditions that is currently underutilized, according to Marouane Temimi, associate professor in the Department of Civil, Environmental and Ocean Engineering.
“USGS uses those cameras to visually inspect flow conditions of rivers,” he explained. “Our idea is to use those cameras, leveraging our techniques in computer vision, machine learning and artificial intelligence, to estimate water velocity and water level, and therefore determine streamflow rates in rivers.”
In partnership with USGS and the National Oceanic and Atmospheric Administration (NOAA), Temimi and Assistant Professor Kaijian Liu have been awarded an $870,614 grant for the project, titled “Leveraging USGS Hydrologic Imagery Visualization and Information System (HIVIS) for an Operational Monitoring of Streamflow using Computer Vision.”
The team will develop a computer-vision-based system that integrates with USGS’s existing infrastructure and automatically determines river conditions across the country from analysis of USGS river videos and images, inferring and estimating streamflow faster, more safely and more efficiently.
The project falls under the larger umbrella of the Cooperative Institute for Research to Operations in Hydrology (CIROH), a national consortium established and funded by NOAA of 28 research institutions dedicated to the advancement and improvement of U.S. water resource monitoring, modeling and forecasting, of which Stevens is a founding member. CIROH supports and develops collaborative water science research that leverages data and computer science to advance research in the field of hydrology.
Previous attempts have been made, in the U.S. and elsewhere, to develop an automated streamflow analysis system similar to Temimi and Liu’s, said Temimi, but none were developed to the point of operational deployment.
“We'll build upon what was proposed by previous researchers in the U.S. and also internationally, particularly in Europe, to advance the monitoring of streamflow and water conditions using cameras, images and videos together with the intention of bringing this application to operational level ultimately,” he said.
The team’s proposed system will acquire and analyze both still and video images from the real-time feed of the hundreds of USGS HIVIS cameras deployed throughout riverways across the U.S.
“We'll be using our knowledge in deep learning and computer vision to train networks that establish connections between the images that the cameras are capturing and streaming to us and what we call the segmented images,” Temimi said.
Starting with open-source, pre-trained networks, the team will improve and adapt the network to perform automated segmentation of received image frames, training them to determine conditions in rivers.
Obstacles such as large fallen tree branches or the presence of ice, for example, affect water level readings and therefore streamflow calculations. Currently, however, such interruptive phenomena must be observed by a human in order to trigger a recalculation, and such observations may not happen until days after a preliminary estimate has been made.
Although USGS is able to revise its preliminary observational report after the fact, such a delay in data accuracy during emergencies such as extreme weather or flooding could mean the difference between a safe, timely evacuation of an area and a dangerous delay.
Stevens’ solution will be trained to detect the presence of foreign objects such as ice automatically, performing what’s called an automated segmentation of the still images or video frames to properly recognize, divide and categorize different types of objects within the image.
“The camera will determine that there is ice and therefore will correct the measurements automatically without doing it offline later on,” Temimi said.
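In a segmented frame, each pixel carries a class label, so detecting ice cover reduces to measuring class fractions over the visible river surface. A minimal NumPy sketch of that post-processing step, where the class IDs and the correction threshold are hypothetical (a real system would use the labels and decision rules of its trained segmentation model):

```python
import numpy as np

# Hypothetical class IDs produced by a segmentation network.
WATER, ICE, DEBRIS = 1, 2, 3

def ice_fraction(label_mask: np.ndarray) -> float:
    """Fraction of the river surface (water + ice pixels) covered by ice."""
    surface = np.isin(label_mask, [WATER, ICE]).sum()
    if surface == 0:
        return 0.0
    return float((label_mask == ICE).sum() / surface)

def needs_correction(label_mask: np.ndarray, threshold: float = 0.2) -> bool:
    """Flag the frame for a corrected streamflow calculation when ice
    covers more than `threshold` of the visible river surface."""
    return ice_fraction(label_mask) > threshold
```

A flag like this is what allows the correction to happen in the processing pipeline itself, rather than offline after a human review.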
Using a combination of machine learning and surface velocity measurement methods, changes are detected between the segmented images to infer water level, surface velocity (how quickly the topmost layer of water is moving) and, ultimately, streamflow values. The ability to recognize flow conditions and adjust calculations accordingly will allow for streamflow measurements with higher accuracy at a much faster speed.
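One common family of surface-velocity techniques tracks the displacement of surface texture between consecutive frames and converts it to a velocity using the camera’s ground resolution. A stripped-down one-dimensional sketch of that idea, assuming the flow direction runs along the sampled strip and the meters-per-pixel scale is known from camera calibration (the article does not specify which velocimetry method the team will use):

```python
import numpy as np

def surface_velocity(strip_t0, strip_t1, meters_per_pixel, dt_seconds,
                     max_shift=20):
    """Estimate surface velocity (m/s) from two 1-D intensity strips
    sampled along the flow direction, dt_seconds apart, by finding the
    pixel displacement that maximizes their cross-correlation."""
    a = strip_t0 - strip_t0.mean()
    n = len(a) - max_shift  # fixed overlap window for all candidate shifts
    best_shift, best_score = 0, -np.inf
    for shift in range(max_shift + 1):
        b = strip_t1[shift:shift + n] - strip_t1.mean()
        score = float(np.dot(a[:n], b))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * meters_per_pixel / dt_seconds
```

Operational systems use two-dimensional variants of this matching (and learned models) over many frames, but the principle is the same: displacement in pixels, times ground resolution, divided by the time between frames.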
Although increases in processing time will depend on a number of factors, including the frequency of video and photo acquisitions and the computer hardware that will be in use, Temimi said the goal is to “make sure that data is processed as fast as we can and made available to USGS and other users promptly to minimize latency.”
The team’s operational prototype will run automatically and continuously, allowing for accurate monitoring of U.S. waterway conditions in near-real-time. This automatic extraction and processing of visual information will benefit both USGS and NOAA.
In addition to enhancing USGS’s capabilities of measuring, monitoring and analyzing current water conditions, Temimi and Liu’s system will make it possible for USGS to deploy cameras in areas where traditional water gauges cannot be used.
“For a gauge to determine water level, it has to be in contact with the water. Whereas with a camera, you can observe the river from a distance,” Temimi explained. “Logistically speaking, it's easier to deploy and monitor conditions when you are distant. It’s also safer for operators to visit the camera site than the water level site, especially during flooding conditions when observations and water level measurements are needed but may be in an area that is flooded, inundated, inaccessible or damaged.”
This wealth of faster, more accurate information, Temimi said, will also assist NOAA with validating, calibrating and fine-tuning their forecasting models.
“The project will have a strong impact in terms of improving capabilities of the models for predicting flood inundation and improving the response to weather hazards,” Temimi said. “Because when they better identify the conditions right now, they can have better forecasts in the future.”
The project builds on previous research by Temimi and Liu in image processing, remote sensing, artificial intelligence, machine learning and streamflow measurement. The project team will include a current Ph.D. student and a recent Stevens Ph.D. graduate with expertise in hydrometeorology and machine learning.
Future plans for the system, according to Temimi, include integrating photos and videos captured in real time by everyday citizens to measure streamflow and hydrologic conditions, simultaneously expanding citizen science opportunities and the pool of data available to USGS and NOAA.