Here you can find all publications of the group hydrology & climate with links to ZORA.

Etter, S., Strobl, B., van Meerveld, I., Seibert, J. (2020): Quality and timing of crowd-based water level class observations. Hydrological Processes.

In this study, we checked how well observations in the “virtual staff gauge” category reflect actual variations in stream levels and when citizen scientists are most likely to report water level class observations. The reported variations in water level classes matched the measured variations in stream level well. The match was better for data collected with the app than for data collected by many different citizen scientists on forms. Most water level class observations were submitted between May and September, but they covered almost the full range of stream level conditions. These positive results demonstrate that the smartphone application and the virtual staff gauge can be used to collect useful data on stream level variations. The approach can therefore be used to augment existing streamflow monitoring networks and to allow data collection in regions where otherwise no stream level data would be available.

Link to the paper / Link to the preprint (coming soon)

Strobl, B., Etter, S., van Meerveld, I., Seibert, J. (2020): Training citizen scientists through an online game developed for data quality control. Geoscience Communication, 2020.

In a previous publication we introduced the CrowdWater game, which checks the quality of the water level class observations submitted by citizen hydrologists using the CrowdWater app. We found that, in addition to quality control, the CrowdWater game also trains new citizen scientists to place the virtual staff gauge in the CrowdWater app more effectively. The CrowdWater game shows many different virtual staff gauge placements, and citizen scientists can gradually learn the benefits and limitations of these placements. If you are already an expert with virtual staff gauges, please feel free to still play the CrowdWater game regularly, as this helps us to quality-control the CrowdWater observations.

Link to the paper / Link to the preprint (coming soon)

Strobl, B., Etter, S., van Meerveld, I., Seibert, J. (2020): Accuracy of crowdsourced streamflow and stream level class estimates. Hydrological Sciences Journal, Special Issue: Hydrological Data: Opportunities and Barriers, 65(5).

We asked people who walked by a river to estimate the streamflow. We did this for ten different rivers and for high- and low-flow conditions. In this paper, we describe how well the participants could estimate the streamflow by comparing their estimates to the measured streamflow. The errors in the individual streamflow estimates were sometimes very large, but the median value of all estimates was surprisingly close to the measured value. We also asked the participants to estimate the stream level class. We showed them a picture of the stream taken at an earlier time, with a measurement stick (called a staff gauge) digitally placed onto the picture like a sticker. The staff gauge was divided into 10 classes. The participants had to estimate which of these classes the current water level fell into by comparing it to the stream in the picture. Most participants could estimate the stream level class well. Only a few participants chose a water level class that was more than one class away from the correct class. We therefore recommended that citizen science projects use stream level classes instead of streamflow estimates. Streamflow can then be derived by hydrological modelling based on the stream level data.
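The robustness of the median described above can be illustrated with a small simulation. This is a hypothetical sketch, not the study's data: the error distribution and its size are assumed purely for illustration.

```python
import random
import statistics

random.seed(42)

true_flow = 2.0  # hypothetical "measured" streamflow in m^3/s

# Simulate 100 crowd estimates with large multiplicative errors:
# individual estimates can be far off, but the median stays robust.
estimates = [true_flow * random.lognormvariate(0, 0.8) for _ in range(100)]

median_est = statistics.median(estimates)
worst_err = max(abs(e - true_flow) for e in estimates)
print(f"median estimate: {median_est:.2f} m^3/s (true: {true_flow})")
print(f"largest individual error: {worst_err:.2f} m^3/s")
```

Even though single estimates may be several times too high or too low, the median of many estimates lands close to the true value, which mirrors the paper's finding.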

Link to the paper / Link to the preprint

Etter, S., Strobl, B., Seibert, J., van Meerveld, H. J. (2020): Value of crowd‐based water level class observations for hydrological model calibration. Water Resources Research, 56(2).

Normally, multiple years of streamflow measurements are used to calibrate a hydrological model for a specific catchment so that it can be used to, for instance, predict floods or droughts. Taking these measurements is expensive and requires a lot of effort. Therefore, such data are often missing, especially in remote areas and developing countries. We investigated the potential value of water level class (WL‐class) data for model calibration. WL‐classes can be observed by citizens with the help of a virtual ruler with different classes that is pasted onto a picture of a stream bank like a sticker. We show that one WL‐class observation per week for 1 year improves model calibration compared to situations without streamflow data. The model results for the WL‐class observations were as good as those for precise water level observations, which require a physical staff gauge, or for continuous water level measurements from a sensor installed in the stream. The results were not as good as when streamflow data were used for model calibration, but such data are more expensive to collect. In most cases, errors in the WL‐class observations did not noticeably affect the model performance.
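The idea of a virtual ruler with discrete classes can be sketched as a simple discretization of a continuous water level. This is a hypothetical illustration only: in CrowdWater the class is read off visually from a photo, not computed, and the function name and gauge bounds below are assumptions.

```python
def water_level_class(level_m, gauge_bottom_m, gauge_top_m, n_classes=10):
    """Map a continuous water level (in metres) onto one of
    n_classes equal bands of a (virtual) staff gauge.
    Levels below/above the gauge clamp to the lowest/highest class."""
    if level_m <= gauge_bottom_m:
        return 1
    if level_m >= gauge_top_m:
        return n_classes
    band = (gauge_top_m - gauge_bottom_m) / n_classes
    return int((level_m - gauge_bottom_m) / band) + 1

print(water_level_class(0.75, 0.0, 1.0))  # class 8 of 10
```

Calibrating against such coarse classes instead of exact levels is what makes the observations easy for citizens to report while still constraining the model.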

Link to the paper / Link to the preprint

Strobl, B., Etter, S., van Meerveld, I., Seibert, J. (2019): The CrowdWater game: A playful way to improve the accuracy of crowdsourced water level class data. PLoS One, 14(9).

The CrowdWater game checks the quality of the water level class observations submitted by citizen hydrologists using the CrowdWater app. Players compare two photos submitted via the CrowdWater app, namely the original photo with the virtual staff gauge and another one taken at the same location at a later time. The player votes on a water level class by comparing the water level in the new photo to the virtual staff gauge in the original photo. Each observation is shown to several players and therefore receives multiple votes. The average water level class of all votes from the different players can then be compared to the value reported by the citizen scientist who submitted the photo via the app. In this way, the CrowdWater game can be used to confirm or correct the value that was submitted through the app. In the study presented in this paper, we describe the game and demonstrate its value for data quality control.
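The comparison between the players' votes and the app submission can be sketched as a tiny aggregation step. This is a simplified illustration of the idea, not the game's actual quality-control algorithm; the vote values are made up.

```python
from statistics import mean

def aggregate_votes(votes):
    """Average the water level classes voted by several players and
    round to the nearest class (simplified consensus, for illustration)."""
    return round(mean(votes))

submitted_class = 4              # class reported via the app
player_votes = [4, 5, 4, 4, 3]   # hypothetical votes from the game

consensus = aggregate_votes(player_votes)
print(consensus == submitted_class)  # prints True: the votes confirm the app value
```

When the consensus disagrees with the submitted class, the observation can be flagged and corrected, which is the core of the quality-control idea described above.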

Are you curious about the game or do you want to help to check and improve the quality of the CrowdWater app data? You can play the game here:

Link to the paper / Link to the preprint

Seibert, J., Strobl, B., Etter, S., Hummer, P., van Meerveld, H.J.I. (2019): Virtual staff gauges for crowd-based stream level observations. Frontiers in Earth Science – Hydrosphere, 7(70).

Link to the paper / Link to the preprint

Seibert, J., van Meerveld, H.J., Etter, S., Strobl, B., Assendelft, R., Hummer, P. (2019): Wasserdaten sammeln mit dem Smartphone – Wie können Menschen messen, was hydrologische Modelle brauchen? [Collecting water data with the smartphone – How can people measure what hydrological models need?] Hydrologie & Wasserbewirtschaftung, 63(2), 74-84.

Link to the paper / Link to the preprint

Etter, S., Strobl, B., Seibert, J., van Meerveld, H. J. I. (2018): Value of uncertain streamflow observations for hydrological modelling. Hydrol. Earth Syst. Sci., 22(10), 5243-5257.

In this study, we test whether estimates of streamflow from citizens (rather than actual measurements by government agencies) can be used for the tuning of a hydrological model. Because we did not yet have enough data from the CrowdWater app, we created artificial streamflow datasets with data points at different intervals (for example, one data point per week or one per month) and added different errors to the data. To determine the typical errors in streamflow estimates, we asked 136 people in the Zurich area to estimate the streamflow and compared their estimates to the measured streamflow. For six catchments, we determined how the errors in the streamflow estimates and the number of data points affect how well we can tune the model. The results show that the streamflow estimates of untrained citizens are too inaccurate to be useful for the tuning of a model. However, if the errors can be reduced (by about half) through training or filtering, the estimates are useful when there is on average one streamflow estimate per week. The model can then be used, in combination with a weather forecast, for flood predictions.
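The construction of such artificial datasets can be sketched in a few lines: sample a "true" series at a chosen interval and perturb each sampled value. This is only a sketch under assumed settings; the synthetic series, the multiplicative error model, and the error size are illustrative, not the paper's exact setup.

```python
import math
import random

random.seed(1)

# Hypothetical daily "true" streamflow series (m^3/s) for one year.
daily_flow = [2.0 + math.sin(2 * math.pi * d / 365) for d in range(365)]

def synthetic_estimates(flow, interval_days=7, rel_error=0.5):
    """Sample the series every interval_days and perturb each value with a
    multiplicative error, mimicking error-laden citizen estimates."""
    return [(d, flow[d] * random.lognormvariate(0, rel_error))
            for d in range(0, len(flow), interval_days)]

weekly = synthetic_estimates(daily_flow)              # ~1 estimate per week
monthly = synthetic_estimates(daily_flow, 30)         # ~1 estimate per month
print(len(weekly), len(monthly))  # 53 13
```

Varying `interval_days` and `rel_error` reproduces the two knobs studied in the paper: how often citizens report, and how large their errors are.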

Link to the paper / Link to the preprint

Kampf, S., Strobl, B., Hammond, J., Anenberg, A., Etter, S., Martin, C., Puntenney-Desmond, K., Seibert, J., van Meerveld, I. (2018): Testing the waters: Mobile apps for crowdsourced streamflow data. Eos, 99.

Link to the paper

van Meerveld, H. J. I., Vis, M. J. P., Seibert, J. (2017): Information content of stream level class data for hydrological model calibration. Hydrol. Earth Syst. Sci., 21(9), 4895-4905.

Link to the paper / Link to the preprint


Catchment Science Gordon Research Conference and Seminar – Ilja van Meerveld
Can citizens observe what models need? Evaluation of the potential value of crowd-sourced stream level observations for hydrological model calibration

Österreichische Citizen Science Konferenz 2018 – Barbara Strobl
CrowdWater als Bereicherung des Unterrichts? [CrowdWater as an enrichment of the classroom?]

Tag der Hydrologie 2018 – Jan Seibert
CrowdWater – Können Menschen messen, was hydrologische Modelle brauchen? [CrowdWater – Can people measure what hydrological models need?]
This poster won the 2018 poster prize in the category “most innovative study”.

EGU 2018 – Simon Etter
Can citizens observe what models need?


MOOC stands for massive open online course. As in a traditional university course, learners study a subject over a specific time period. However, students attend lectures, discuss problems, and solve exercises online. In the MOOC «Water in Switzerland», learners can watch a selection of lectures and field films, as well as complete assessments and practical tasks. The MOOC is split into seven modules, each requiring approximately 3–4 hours of work per week.

CrowdWater participated in this MOOC.

Click here to watch a trailer for this MOOC or to visit the course website.