Hydrological models are computer programs that calculate the streamflow that results from a given rainfall event. These models require tuning (calibration) so that the calculated streamflow matches the observed streamflow. Once calibrated, the models can be used to predict high or low flows.
Normally, multiple years of streamflow measurements are used to calibrate a hydrological model for a specific catchment so that it can be used to, for instance, predict floods or droughts. Taking these measurements is expensive and requires a lot of effort, so such data are often missing, especially in remote areas and developing countries. We investigated the potential value of water level class (WL-class) data for model calibration. WL-classes can be observed by citizens with the help of a virtual staff gauge that is divided into classes and pasted, like a sticker, onto a picture of the stream bank. We show that one WL-class observation per week for one year improves model calibration compared to situations without any streamflow data. The model results based on the WL-class observations were as good as those based on precise water level observations, which require a physical staff gauge, or continuous water level records, which require a sensor installed in the stream. The results were not as good as when streamflow data were used for calibration, but streamflow data are more expensive to collect. In most cases, errors in the WL-class observations did not noticeably affect the model performance.
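Calibration here means adjusting model parameters until a score comparing calculated and observed streamflow is as good as possible. As a minimal sketch of such a score (the data values and function name are illustrative, not from the study), the widely used Nash-Sutcliffe efficiency can be computed as:

```python
# Sketch: scoring a model fit with the Nash-Sutcliffe efficiency (NSE),
# a common calibration objective in hydrology. The numbers below are
# invented for illustration.

def nse(observed, simulated):
    """NSE = 1 for a perfect fit; 0 means the simulation is no better
    than simply using the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

observed = [1.2, 3.4, 2.8, 0.9, 1.5]   # m3/s, hypothetical
simulated = [1.0, 3.1, 3.0, 1.1, 1.4]  # m3/s, hypothetical

print(round(nse(observed, simulated), 3))  # prints 0.953
```

A calibration routine would repeatedly adjust the model parameters and keep the parameter set with the highest score.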
Etter, S., Strobl, B., Seibert, J., & van Meerveld, H. J.: Value of crowd‐based water level class observations for hydrological model calibration. Water Resources Research, 56, e2019WR026108. https://doi.org/10.1029/2019WR026108, 2020.
The CrowdWater game checks the quality of the water level class observations submitted by citizen hydrologists via the CrowdWater app. Players compare two photos submitted through the app: the original photo with the virtual staff gauge and another one taken at the same location at a later time. The player votes on a water level class by comparing the water level in the new photo to the virtual staff gauge in the original photo. Each observation is shown to several players and therefore receives multiple votes. The average water level class of all votes from the different players can then be compared to the value that was reported by the citizen scientist who submitted the photo via the app. In this way, the CrowdWater game can be used to confirm or correct the value that was submitted through the app. In the study presented in this paper, we describe the game and demonstrate its value for data quality control.
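The quality-control idea described above can be sketched in a few lines: average the players' class votes and flag observations where the reported class deviates too much from that average. The function name, the tolerance of one class, and the numbers are our own assumptions for illustration, not the exact rules of the CrowdWater game.

```python
# Sketch of vote-based quality control: compare the class reported via
# the app with the average of the players' votes from the game.
# Names, tolerance, and data are hypothetical.

def check_observation(app_class, game_votes, tolerance=1):
    """Return (accepted, average_vote): the observation is accepted if
    the players' average vote is within `tolerance` classes of the
    class reported through the app."""
    avg_vote = sum(game_votes) / len(game_votes)
    return abs(avg_vote - app_class) <= tolerance, avg_vote

ok, avg = check_observation(app_class=4, game_votes=[4, 5, 4, 3, 4])
print(ok, avg)  # prints True 4.0
```

A rejected observation could then be corrected to the players' average instead of being discarded outright.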
Are you curious about the game, or do you want to help check and improve the quality of the CrowdWater app data? You can play the game here: www.crowdwater.ch/crowdwater-game/
Strobl, B., Etter, S., van Meerveld, I., & Seibert, J.: The CrowdWater game: A playful way to improve the accuracy of crowdsourced water level class data, PLoS ONE, https://doi.org/10.1371/journal.pone.0222579, 2019.
Seibert, J., van Meerveld, H. J., Etter, S., Strobl, B., Assendelft, R., & Hummer, P.: Wasserdaten sammeln mit dem Smartphone – Wie können Menschen messen, was hydrologische Modelle brauchen?, Hydrologie & Wasserbewirtschaftung, 63(2), https://doi.org/10.5675/HyWa_2019.2_1, 2019.
We asked people who walked by a river to estimate the streamflow. We did this for ten different rivers, under both high- and low-flow conditions. In this paper, we describe how well the participants could estimate the streamflow by comparing their estimates to the measured streamflow. The errors in the streamflow estimates were sometimes very large, but the median of all estimates was surprisingly close to the measured value. We also asked the participants to estimate the stream level class. We showed them a picture of the stream taken at an earlier time, with a measurement stick (called a staff gauge) digitally placed onto the picture like a sticker. The staff gauge was divided into 10 classes. The participants had to estimate which of these classes the current water level fell into by comparing the current water level to the stream in the picture. Most participants could estimate the stream level class well; only a few chose a class that was more than one class away from the correct one. We therefore recommend that citizen science projects use stream level classes instead of streamflow estimates. Streamflow can then be derived from the stream level data by hydrological modelling.
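The observation that single estimates can be far off while the median of all estimates stays close to the measured value is a general property of the median: it ignores extreme outliers. A small sketch with invented numbers (not data from the survey) makes this concrete:

```python
# Sketch: the median of many rough streamflow estimates can be close
# to the measured value even when individual estimates are far off.
# All numbers are invented for illustration.

estimates = [0.5, 1.0, 1.1, 1.3, 1.4, 2.0, 9.0]  # m3/s, hypothetical
measured = 1.25                                   # m3/s, hypothetical

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

worst_error = max(abs(e - measured) for e in estimates)
median_error = abs(median(estimates) - measured)
print(round(worst_error, 2), round(median_error, 2))  # prints 7.75 0.05
```

The single worst estimate is off by a factor of several, yet the median lands within a few percent of the measured value.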
Strobl, B., Etter, S., van Meerveld, I., & Seibert, J.: Accuracy of crowdsourced streamflow and stream level class estimates, Hydrological Sciences Journal, Special Issue: Hydrological Data: Opportunities and Barriers, https://doi.org/10.1080/02626667.2019.1578966, 2019.
In this study, we tested whether estimates of streamflow from citizens (rather than actual measurements by government agencies) can be used to tune a hydrological model. Because we did not yet have enough data from the CrowdWater app, we created artificial streamflow datasets with data points at different intervals (for example, one data point per week or one per month) and added different errors to the data. To determine the typical errors in streamflow estimates, we asked 136 people in the Zurich area to estimate the streamflow and compared their estimates to the measured streamflow. For six catchments, we determined how the errors in the streamflow estimates and the number of data points affect how well we can tune the model. The results show that the streamflow estimates of untrained citizens are too inaccurate to be useful for tuning a model. However, if the errors can be reduced by about half, through training or filtering, the estimates become useful when there is, on average, one streamflow estimate per week. The model can then be used, in combination with a weather forecast, for flood predictions.
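An artificial dataset of the kind described above can be built by thinning a continuous streamflow record to the desired sampling interval and perturbing each kept value with a random error. The error model, seed, and toy record below are assumptions for illustration, not the exact setup of the study:

```python
# Sketch: build a synthetic "citizen estimate" dataset by keeping one
# value per interval from a continuous record and adding a random
# relative error. The record and error model are hypothetical.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Toy hourly streamflow record for one year (m3/s), purely synthetic.
hourly_flow = [1.0 + 0.5 * (i % 24) / 24 for i in range(24 * 365)]

def synthetic_estimates(flow, step_hours, rel_error):
    """Keep one value every `step_hours` and multiply it by a random
    factor in [1 - rel_error, 1 + rel_error]."""
    return [q * random.uniform(1 - rel_error, 1 + rel_error)
            for q in flow[::step_hours]]

# Roughly one "estimate" per week, with up to +/-50% error.
weekly = synthetic_estimates(hourly_flow, step_hours=24 * 7, rel_error=0.5)
print(len(weekly))  # prints 53
```

Varying `step_hours` and `rel_error` then mimics the different sampling frequencies and error magnitudes compared in the study.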
Etter, S., Strobl, B., Seibert, J., and van Meerveld, H. J. I.: Value of uncertain streamflow observations for hydrological modelling, Hydrol. Earth Syst. Sci., 22, 5243-5257, https://doi.org/10.5194/hess-22-5243-2018, 2018.
Kampf, S., B. Strobl, J. Hammond, A. Anenberg, S. Etter, C. Martin, K. Puntenney-Desmond, J. Seibert, and I. van Meerveld (2018), Testing the waters: Mobile apps for crowdsourced streamflow data, Eos, 99, https://doi.org/10.1029/2018EO096355. Published on 12 April 2018.
Catchment Science Gordon Research Conference and Seminar – Ilja van Meerveld
Can citizens observe what models need? Evaluation of the potential value of crowd-sourced stream level observations for hydrological model calibration
Österreichische Citizen Science Konferenz 2018 – Barbara Strobl
CrowdWater as an enrichment for teaching?
Tag der Hydrologie 2018 – Jan Seibert
CrowdWater – Can people measure what hydrological models need?
This poster won the 2018 poster prize in the category “most innovative study”.
EGU 2018 – Simon Etter
Can citizens observe what models need?
MOOC stands for massive open online course. As in a traditional university course, learners study a subject over a specific time period; however, they attend lectures, discuss problems and solve exercises online. In the MOOC «Water in Switzerland», learners can watch a selection of lectures and field films, as well as solve assessments and practical tasks. The MOOC is split into seven modules, each taking approximately 3–4 hours of work per week.
CrowdWater participated in this MOOC.