Uncertainties on mean areal precipitation: assessment and impact on streamflow simulations
This paper investigates the influence of mean areal rainfall estimation errors in a specific case study: the use of lumped conceptual rainfall-runoff models to simulate the flood hydrographs of three small to medium-sized catchments of the upper Loire river. This area (3200 km2) is densely covered by an operational network of stream and rain gauges. It is frequently exposed to flash floods, and the improvement of flood forecasting models is therefore a crucial concern. Particular attention has been paid to developing an error model for rainfall estimation that is consistent with the data, in order to produce realistic streamflow simulation uncertainty ranges. The proposed error model combines geostatistical tools based on kriging with an autoregressive model that accounts for the temporal dependence of errors. It has been calibrated and partly validated for hourly mean areal precipitation rates. Simulated error scenarios were propagated through two calibrated rainfall-runoff models using Monte Carlo simulations. Three catchments with areas ranging from 60 to 3200 km2 were tested to reveal any possible links between the sensitivity of the model outputs to rainfall estimation errors and the size of the catchment. The results show that a large part of the rainfall-runoff (RR) modelling errors can be explained by the uncertainties in rainfall estimates, especially for the smaller catchments. These errors are a major factor limiting the accuracy and sharpness of rainfall-runoff simulations, and thus their operational use for flood forecasting.
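The error-propagation chain described in this abstract can be sketched in a few lines. The snippet below is an illustrative outline only, not the authors' code: it assumes a lognormal multiplicative error with an AR(1) temporal structure (the sigma and rho values stand in for the kriging-derived statistics), uses a single linear reservoir in place of the calibrated lumped rainfall-runoff models, and runs on a synthetic hourly mean areal precipitation series.

    # Illustrative sketch only (not the authors' code): propagate AR(1)-correlated
    # multiplicative rainfall-estimation errors through a simple lumped model by
    # Monte Carlo.  sigma, rho, the linear reservoir and the synthetic rainfall
    # series are placeholder assumptions.
    import numpy as np

    def ar1_error_multipliers(n_steps, sigma, rho, rng):
        """Lognormal multiplicative errors with lag-1 autocorrelation rho."""
        eps = np.empty(n_steps)
        eps[0] = rng.normal(0.0, sigma)
        for t in range(1, n_steps):
            eps[t] = rho * eps[t - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2))
        return np.exp(eps)  # multipliers centred around 1

    def linear_reservoir(rainfall, k=0.05, runoff_coeff=0.4):
        """Stand-in for a calibrated lumped conceptual rainfall-runoff model."""
        storage, q = 0.0, np.empty_like(rainfall)
        for t, p in enumerate(rainfall):
            storage += runoff_coeff * p   # effective rainfall enters storage
            q[t] = k * storage            # outflow proportional to storage
            storage -= q[t]
        return q

    rng = np.random.default_rng(0)
    map_hourly = rng.gamma(0.5, 2.0, size=720)   # synthetic hourly MAP series (mm)
    ensemble = np.array([
        linear_reservoir(map_hourly * ar1_error_multipliers(map_hourly.size, 0.3, 0.7, rng))
        for _ in range(500)                      # Monte Carlo error scenarios
    ])
    q05, q95 = np.percentile(ensemble, [5, 95], axis=0)  # streamflow uncertainty band

The width of the resulting 5-95% hydrograph band is the kind of streamflow simulation uncertainty range the abstract refers to.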
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. This study shows that the calculated forecast skill can vary depending on the benchmark selected, and that the choice of benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods for deriving benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are ‘toughest to beat’ and therefore give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon.
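As a rough illustration of how benchmark-relative skill is typically quantified with the CRPS, the sketch below computes a CRPS-based skill score, 1 - CRPS(forecast) / CRPS(benchmark). The ensemble CRPS estimator is a standard sample formula and the function names are illustrative; this is not EFAS code.

    # Illustrative sketch of CRPS-based skill against a benchmark; the sample CRPS
    # estimator below is a standard formula, and none of this is EFAS code.
    import numpy as np

    def crps_ensemble(members, obs):
        """Sample-based CRPS for one ensemble forecast and one observation."""
        members = np.asarray(members, dtype=float)
        spread_term = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
        return np.mean(np.abs(members - obs)) - spread_term

    def crpss(forecasts, benchmarks, observations):
        """Skill score 1 - CRPS(forecast)/CRPS(benchmark); positive values beat the benchmark."""
        crps_fc = np.mean([crps_ensemble(f, o) for f, o in zip(forecasts, observations)])
        crps_bm = np.mean([crps_ensemble(b, o) for b, o in zip(benchmarks, observations)])
        return 1.0 - crps_fc / crps_bm

A 'tough to beat' benchmark is one whose own CRPS is already low, so the skill score stays near zero unless the forecasting system adds genuine information.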
Evaluated against an observed discharge proxy, the benchmark found to have the most utility for EFAS, and to avoid the most naïve skill across different hydrological situations, is meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces a large amount of naïve skill. When the evaluation is decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, provide the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. The recommendation for EFAS is to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark, in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used to evaluate skill in probabilistic hydrological forecasts, and which benchmarks are most useful for skill discrimination and for avoiding naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and be confident that their forecasts are indeed better.
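The two benchmark families highlighted above can be sketched as follows. The function names, the pandas data layout and the fixed 20-year window are assumptions made for illustration, not the EFAS implementation: meteorological persistency repeats the latest observed forcing over the forecast horizon, while the advanced meteorological benchmark assembles one forcing trace per past year at the same calendar date, each trace then being run through the hydrological model.

    # Illustrative sketch of the two benchmark forcings described above; the function
    # names, the pandas data layout and the fixed 20-year window are assumptions,
    # not the EFAS implementation.
    import pandas as pd

    def persistency_forcing(obs: pd.DataFrame, issue_time, lead_steps):
        """Repeat the latest observed precipitation/temperature over the forecast horizon."""
        latest = obs.loc[:issue_time].iloc[-1]
        return pd.DataFrame([latest] * lead_steps)

    def advanced_met_benchmark(obs: pd.DataFrame, issue_time, lead_steps, years=20):
        """One forcing trace per past year, taken from the same calendar date."""
        traces = []
        for y in range(1, years + 1):
            start = issue_time - pd.DateOffset(years=y)
            traces.append(obs.loc[start:].iloc[:lead_steps].reset_index(drop=True))
        return traces  # each trace is then run through the hydrological model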
In vivo modulation of acute phase response by pentoxifylline during sepsis