
    Series distance - an intuitive metric to quantify hydrograph similarity in terms of occurrence, amplitude and timing of hydrological events

    Applying metrics to quantify the similarity or dissimilarity of hydrographs is a central task in hydrological modelling, used both in model calibration and the evaluation of simulations or forecasts. Motivated by the shortcomings of standard objective metrics such as the Root Mean Square Error (RMSE) or the Mean Absolute Peak Time Error (MAPTE) and the advantages of visual inspection as a powerful tool for simultaneous, case-specific and multi-criteria (yet subjective) evaluation, we propose a new objective metric termed Series Distance, which is in close accordance with visual evaluation. The Series Distance quantifies the similarity of two hydrographs neither in a time-aggregated nor in a point-by-point manner, but on the scale of hydrological events. It consists of three parts, namely a Threat Score which evaluates overall agreement of event occurrence, and the overall distance of matching observed and simulated events with respect to amplitude and timing. The novelty of the latter two is the way in which matching point pairs on the observed and simulated hydrographs are identified: not by equality in time (as is the case with the RMSE), but by the same relative position in matching segments (rise or recession) of the event, indicating the same underlying hydrological process. Thus, amplitude and timing errors are calculated simultaneously but separately, from point pairs that also match visually, considering complete events rather than only individual points (as is the case with MAPTE). Relative weights can freely be assigned to each component of the Series Distance, which allows (subjective) customization of the metric to various fields of application, but in a traceable way. Each of the three components of the Series Distance can be used in an aggregated or non-aggregated way, which makes the Series Distance a suitable tool for differentiated, process-based model diagnostics. 
After discussing the applicability of established time series metrics to hydrographs, we present the Series Distance theory, discuss its properties and compare them to those of standard metrics used in hydrology, using both simple artificial hydrographs and an ensemble of realistic forecasts. The results suggest that the Series Distance quantifies the degree of similarity of two hydrographs comparably to visual inspection, but objectively and reproducibly.
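The segment-wise matching described above lends itself to a small sketch: point pairs are formed at equal relative positions along two matching segments (e.g. two rising limbs), and amplitude and timing errors are then read off vertically and horizontally from each pair. The function names and the fixed number of sampled positions are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def segment_distance(t_obs, q_obs, t_sim, q_sim, n_pairs=20):
    """Mean absolute amplitude and timing error between two hydrograph
    segments, with point pairs matched by equal relative position
    within each segment (illustrative sketch only)."""
    s = np.linspace(0.0, 1.0, n_pairs)          # relative positions 0..1

    def sample(t, q, s):
        # Interpolate time and flow at fractional index positions.
        pos = s * (len(t) - 1)
        idx = np.arange(len(t))
        return np.interp(pos, idx, t), np.interp(pos, idx, q)

    to, qo = sample(np.asarray(t_obs, float), np.asarray(q_obs, float), s)
    ts, qs = sample(np.asarray(t_sim, float), np.asarray(q_sim, float), s)
    amp = float(np.mean(np.abs(qo - qs)))       # vertical (amplitude) error
    tim = float(np.mean(np.abs(to - ts)))       # horizontal (timing) error
    return amp, tim

# A simulated rising limb with the same shape as the observed one,
# but starting 2 h late:
amp, tim = segment_distance([0, 1, 2, 3, 4], [1, 2, 4, 7, 10],
                            [2, 3, 4, 5, 6], [1, 2, 4, 7, 10])
print(round(amp, 6), round(tim, 6))
```

For this pair of segments the matched points report an amplitude error near zero and a timing error of about 2 h, mirroring the visual impression, whereas a point-by-point comparison at equal times (as in the RMSE) would report large amplitude errors.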

    Quantitative precipitation estimation with weather radar using a data- and information-based approach

    In this study we propose and demonstrate a data-driven approach in an “information-theoretic” framework to quantitatively estimate precipitation. In this context, predictive relations are expressed by empirical discrete probability distributions directly derived from data instead of fitting and applying deterministic functions, as is standard operational practice. Applying a probabilistic relation has the benefit of providing joint statements about rain rate and the related estimation uncertainty. The information-theoretic framework furthermore allows for the integration of any kind of data considered useful and explicitly considers the uncertain nature of quantitative precipitation estimation (QPE). With this framework we investigate the information gains and losses associated with various data and practices typically applied in QPE. To this end, we conduct six experiments using 4 years of data from six laser optical disdrometers, two micro rain radars (MRRs), regular rain gauges, weather radar reflectivity and other operationally available meteorological data from existing stations. Each experiment addresses a typical question related to QPE. First, we measure the information about ground rainfall contained in various operationally available predictors. Here weather radar proves to be the single most important source of information, which can be further improved when distinguishing radar reflectivity–ground rainfall relationships (Z–R relations) by season and prevailing synoptic circulation pattern. Second, we investigate the effect of data sample size on QPE uncertainty using different data-based predictive models. This shows that the combination of reflectivity and month of the year as a two-predictor model is the best trade-off between robustness of the model and information gain. Third, we investigate the information content in spatial position by learning and applying site-specific Z–R relations. 
The related information gains are only moderate; specifically, they are lower than when distinguishing Z–R relations according to time of the year or synoptic circulation pattern. Fourth, we measure the information loss when fitting and using a deterministic Z–R relation (e.g., the standard Marshall–Palmer relation), as is standard practice in operational radar-based QPE, instead of using the empirical relation derived directly from the data. It shows that, while the deterministic function captures the overall shape of the empirical relation quite well, it introduces an additional 60 % uncertainty when estimating rain rate. Fifth, we investigate how much information is gained along the radar observation path, starting with reflectivity measured by radar at height, continuing with the reflectivity measured by an MRR along a vertical profile in the atmosphere and ending with the reflectivity observed by a disdrometer directly at the ground. The results reveal that considerable additional information is gained by using observations from lower elevations, because this avoids the information losses caused by microphysical precipitation processes acting between cloud height and the ground. This emphasizes the importance both of vertical corrections for accurate QPE and of the MRR observations required to derive them. In the sixth experiment we evaluate the information content of radar data only, rain gauge data only and a combination of both as a function of the distance between the target and the predictor rain gauge. The results show that station-only QPE outperforms radar-only QPE up to a distance of 7 to 8 km from the nearest station and that radar–gauge QPE performs best, even compared with radar-based models distinguishing by season or circulation pattern.
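The core practice examined above, expressing the Z–R relation as an empirical discrete probability distribution rather than a fitted deterministic function, can be sketched as follows. The synthetic data, binning and variable names are assumptions purely for illustration; for each reflectivity bin, the empirical P(R | Z) is itself the probabilistic QPE prediction, and the drop from H(R) to H(R | Z) measures the information the predictor provides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for disdrometer data: rain rate R (mm/h) loosely
# tied to radar reflectivity Z (dBZ). Invented for illustration.
z = rng.uniform(0, 50, 10_000)
r = np.clip(0.1 * (z / 10) ** 2 + rng.normal(0, 0.3, z.size), 0, None)

z_bins = np.linspace(0, 50, 11)   # 10 predictor bins
r_bins = np.linspace(0, 4, 21)    # 20 target bins

def entropy(counts):
    """Shannon entropy (bit) of a count vector."""
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Unconditional uncertainty of rain rate:
h_r = entropy(np.histogram(r, bins=r_bins)[0])

# Remaining uncertainty when the reflectivity bin is known: for each
# Z bin, the empirical P(R | Z) is the probabilistic prediction.
zi = np.digitize(z, z_bins) - 1
h_r_given_z = 0.0
for k in range(len(z_bins) - 1):
    sel = r[zi == k]
    if sel.size:
        h_r_given_z += sel.size / r.size * entropy(np.histogram(sel, bins=r_bins)[0])

print(f"H(R) = {h_r:.2f} bit, H(R|Z) = {h_r_given_z:.2f} bit")
```

Because the prediction is a distribution, each estimate carries its own uncertainty statement; fitting a single deterministic curve through these bins would discard exactly the spread that the paper quantifies as additional estimation uncertainty.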

    Technical note: Complexity–uncertainty curve (c-u-curve) – a method to analyse, classify and compare dynamical systems

We propose and provide a proof of concept of a method to analyse, classify and compare dynamical systems of arbitrary dimension by two key features: uncertainty and complexity. It starts by subdividing the system's time trajectory into a number of time slices. For all values in a time slice, the Shannon information entropy is calculated, measuring within-slice variability. System uncertainty is then expressed by the mean entropy of all time slices. We define system complexity as “uncertainty about uncertainty” and express it by the entropy of the entropies of all time slices. Calculating and plotting uncertainty “u” and complexity “c” for many different numbers of time slices yields the c-u-curve. Systems can be analysed, compared and classified by the c-u-curve in terms of (i) its overall shape, (ii) mean and maximum uncertainty, (iii) mean and maximum complexity and (iv) a characteristic timescale, expressed by the width of the time slice for which maximum complexity occurs. We demonstrate the method on both synthetic and real-world time series (constant, random noise, Lorenz attractor, precipitation and streamflow) and show that the shape and properties of the respective c-u-curves clearly reflect the particular characteristics of each time series. For the hydrological time series, we also show that the c-u-curve characteristics are in accordance with hydrological system understanding. We conclude that the c-u-curve method can be used to analyse, classify and compare dynamical systems. In particular, it can be used to classify hydrological systems into similar groups, a precondition for regionalization, and it can serve as a diagnostic measure and as an objective function in hydrological model calibration.
Distinctive features of the method are (i) that it is based on unit-free probabilities, thus permitting application to any kind of data, (ii) that it is bounded, (iii) that it naturally extends from univariate to multivariate systems, and (iv) that it is applicable to both deterministic and probabilistic value representations, permitting e.g. application to ensemble model predictions.
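The construction is concrete enough to sketch directly: uncertainty is the mean Shannon entropy over time slices, complexity is the entropy of those slice entropies, and repeating this over many slice widths traces the c-u-curve. The value binning and the binning used for the entropies below are assumed choices, not the authors' reference implementation:

```python
import numpy as np

def entropy(values, bins):
    """Shannon entropy (bit) of a sample under a fixed binning."""
    p, _ = np.histogram(values, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)))

def c_u_curve(x, slice_widths, value_bins):
    """Return one (complexity, uncertainty) point per slice width."""
    x = np.asarray(x, float)
    points = []
    for w in slice_widths:
        n_slices = len(x) // w
        h = [entropy(x[i * w:(i + 1) * w], value_bins) for i in range(n_slices)]
        u = float(np.mean(h))            # uncertainty: mean slice entropy
        c = entropy(np.asarray(h), 10)   # complexity: entropy of entropies
        points.append((c, u))
    return points

rng = np.random.default_rng(1)
vbins = np.linspace(0, 1, 11)
results = {}
for name, series in [("noise", rng.uniform(0, 1, 4096)),
                     ("constant", np.full(4096, 0.5))]:
    c, u = c_u_curve(series, [64], vbins)[0]
    results[name] = (c, u)
    print(f"{name}: complexity {c:.2f} bit, uncertainty {u:.2f} bit")
```

Consistent with the two extreme cases discussed in the abstract, the constant series yields zero uncertainty and zero complexity, while uniform noise yields high uncertainty (near the log2 of the number of value bins) in every slice.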

    Extreme flood response to short-duration convective rainfall in South-West Germany

The 2 June 2008 flood-producing storm on the Starzel river basin in South-West Germany is examined as a prototype for the organized convective systems that dominate the upper tail of the precipitation frequency distribution and are likely responsible for flash flood peaks in Central Europe. The availability of high-resolution rainfall estimates from radar observations and a rain gauge network, together with indirect peak discharge estimates from a detailed post-event survey, provided the opportunity to study in detail the hydrometeorological and hydrological mechanisms associated with this extreme storm and the ensuing flood. Radar-derived rainfall, stream gauge data and indirect estimates of peak discharges are used along with a distributed hydrologic model to reconstruct hydrographs at multiple locations. Observations and model results are combined to examine two main questions: (i) how the distribution of the runoff ratio for the 2008 flash flood compares with that of other, less severe floods; and (ii) how the spatial and temporal distribution of the extreme rainfall, and more specifically storm motion, controls the flood response. It is shown that small runoff ratios (less than 20 %) characterized the runoff response and that these values are in the range of other, less extreme flood events. The influence of storm structure, evolution and motion on the modeled flood hydrograph is examined using the “spatial moments of catchment rainfall”. It is shown that downbasin storm motion (in the range of 0.7–0.9 m s−1) had a noticeable impact on the flood response, increasing the modeled flood peak by 13 %.

    Identifying rainfall-runoff events in discharge time series: a data-driven method based on information theory

In this study, we propose a data-driven approach for automatically identifying rainfall-runoff events in discharge time series. The core of the concept is to construct and apply discrete multivariate probability distributions to obtain, for each time step, a probabilistic prediction of whether it is part of an event. The approach permits any data to serve as predictors, and it is non-parametric in the sense that it can handle any kind of relation between the predictor(s) and the target. Each choice of a particular predictor data set is equivalent to formulating a model hypothesis. Among competing models, the best is found by comparing their predictive power on a training data set with user-classified events. For evaluation, we use measures from information theory such as Shannon entropy and conditional entropy to select the best predictors and models and, additionally, measure the risk of overfitting via cross entropy and Kullback–Leibler divergence. As all these measures are expressed in bits, we can combine them to identify models with the best trade-off between predictive power and robustness given the available data. We applied the method to data from the Dornbirner Ach catchment in Austria, distinguishing three model types: models relying on discharge data, models using both discharge and precipitation data, and recursive models, i.e., models using their own predictions for a previous time step as an additional predictor. In the case study, the additional use of precipitation reduced predictive uncertainty only by a small amount, likely because the information provided by precipitation is already contained in the discharge data.
More generally, we found that the robustness of a model quickly dropped as the number of predictors increased (an effect well known as the curse of dimensionality), such that, in the end, the best model was a recursive one applying four predictors (three standard and one recursive): discharge from two distinct time steps, the relative magnitude of discharge compared with all discharge values in a surrounding 65 h time window, and the event prediction from the previous time step. Applying the model reduced the uncertainty in event classification by 77.8 %, decreasing conditional entropy from 0.516 to 0.114 bits. To assess the quality of the proposed method, its results were binarized and validated through a holdout method and then compared to a physically based approach. The comparison showed similar behavior of both models (both with accuracy near 90 %), and the cross-validation reinforced the quality of the proposed model. Given enough data, the potential of data-driven models lies in the way they learn and exploit relations between data, unconstrained by functional or parametric assumptions and choices. Beyond that, using such models to reproduce a hydrologist's way of identifying rainfall-runoff events is just one of many potential applications.
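The bookkeeping behind such an uncertainty reduction can be sketched with a toy contingency table: the prior uncertainty H(E) is the entropy of the overall event frequencies, and the remaining uncertainty H(E|X) is the entropy within each predictor class, weighted by how often that class occurs. The counts below are invented solely for illustration and do not come from the study:

```python
import numpy as np

def H(p):
    """Shannon entropy (bit) of a probability vector."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Rows: discretized predictor states (e.g. discharge classes).
# Columns: user-classified target (no event, event). Invented counts.
counts = np.array([[800,  10],    # low discharge: almost never an event
                   [150,  60],    # medium discharge: ambiguous
                   [ 20, 160]])   # high discharge: usually an event

n = counts.sum()
h_prior = H(counts.sum(axis=0) / n)                                 # H(E)
h_cond = sum(row.sum() / n * H(row / row.sum()) for row in counts)  # H(E|X)

print(f"H(E) = {h_prior:.3f} bit, H(E|X) = {h_cond:.3f} bit, "
      f"reduction {100 * (1 - h_cond / h_prior):.1f} %")
```

Adding further predictors can only decrease H(E|X) on the training data; but, as noted above, estimating ever higher-dimensional distributions from the same sample quickly erodes robustness, which is why the cross-entropy and Kullback–Leibler checks matter.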

    Gaming with eutrophication: Contribution to integrating water quantity and quality management at catchment level

The Metropolitan Region of São Paulo (MRSP) hosts 18 million inhabitants. A complex system of 23 interconnected reservoirs was built to ensure its water supply. Half of the potable water produced for the MRSP's population (35 m³/s) is imported from a neighbouring catchment; the other half is produced within the Alto Tietê catchment, where 99 % of the population lives. Perimeters of land-use restriction were defined to contain uncontrolled urbanization, as domestic effluents were causing increasing eutrophication of some of these reservoirs. In the 1990s, catchment committees and sub-committees were created to promote discussion between stakeholders and to develop catchment plans. The committees are very well structured on paper; in practice, however, they are poorly organised and lack experience. The objective of this work was to design tools to strengthen their discussion capacities. The specific objective of the AguAloca process was to integrate the water quality issue, and its relation to catchment management as a whole, into these discussions. The work was developed in the Alto Tietê Cabeceiras sub-catchment, one of the five sub-catchments of the Alto Tietê. It contains five interconnected dams and presents competing uses such as water supply, industry, effluent dilution and irrigated agriculture. A role-playing game (RPG) was designed following a companion modelling approach (Etienne et al., 2003). It comprises a user-friendly game board, a set of individual and collective rules and a computerized biophysical model. The biophysical model is used to simulate water allocation and quality processes at catchment level. It articulates three modules: a simplified nutrient discharge model estimates nutrient export by land use; an arc-node model simulates water flows and the associated nutrient loads from one point of the hydrographic network to another; and the Vollenweider model simulates reservoir-specific dynamics.
The RPG allows players to make individual and collective decisions on water allocation and quality management. The impacts of these decisions are then simulated using the biophysical model, and the game's indicators are updated, which may influence players' behaviour in subsequent rounds. To introduce discussion of water quality management at catchment level, an issue that is rarely dealt with explicitly, four game sessions were held involving representatives of basin committees and water and sanitation engineers. During the sessions, participants used the water quality output of the biophysical model to test management alternatives such as rural sewage collection or effluent dilution. The biophysical model accelerated the calculation of flows and eutrophication rates, which were returned to the game board as explicit indicators of quantity and quality. Players could easily test decisions affecting water quality processes and visualize the simulation results directly on the game board, which represented a friendly, virtual and simplified catchment. The AguAloca game proved able to make complex water processes understandable to a non-specialist audience. The experience contributed to a better understanding of multiple-use water management and of the joint management of water quality and quantity.

    First airborne water vapor lidar measurements in the tropical upper troposphere and mid-latitudes lower stratosphere: accuracy evaluation and intercomparisons with other instruments

In the tropics, deep convection is the major source of uncertainty in water vapor transport to the upper troposphere and into the stratosphere. Although accurate measurements in this region would be of first-order importance for better understanding the processes that govern stratospheric water vapor concentrations and trends in a changing climate, they are sparse because of instrumental shortcomings and observational challenges. Therefore, the Falcon research aircraft of the Deutsches Zentrum für Luft- und Raumfahrt (DLR) flew a zenith-viewing water vapor differential absorption lidar (DIAL) during the Tropical Convection, Cirrus and Nitrogen Oxides Experiment (TROCCINOX) in 2004 and 2005 in Brazil. The measurements were performed alternately on three water vapor absorption lines of different strength around 940 nm. These are the first aircraft DIAL measurements in the tropical upper troposphere and in the mid-latitude lower stratosphere. Sensitivity analyses reveal an accuracy of 5 % between altitudes of 8 and 16 km. This is confirmed by intercomparisons with the Fast In-situ Stratospheric Hygrometer (FISH) and the Fluorescent Advanced Stratospheric Hygrometer (FLASH) on board the Russian M-55 Geophysica research aircraft during five coordinated flights. The average relative differences amount to −3 % ± 8 % between FISH and DIAL and to −8 % ± 14 % between FLASH and DIAL, negative values meaning the DIAL is more humid. The average distance between the probed air masses was 129 km. The DIAL is found to have no altitude- or latitude-dependent bias. A comparison with the balloon ascent of a laser absorption spectrometer gives an average difference of 0 % ± 19 % at a distance of 75 km. Six tropical DIAL under-flights of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on board ENVISAT reveal a mean difference of −8 % ± 49 % at an average distance of 315 km.
While the comparison with MIPAS is somewhat less significant due to poorer comparison conditions, the agreement with the in-situ hygrometers provides evidence of the excellent quality of FISH, FLASH and DIAL. Most DIAL profiles exhibit a smooth exponential decrease of the water vapor mixing ratio across the transition from the tropical upper troposphere to the lower stratosphere. The hygropause, with a minimum mixing ratio of 2.5 µmol/mol, is found between 15 and 17 km. A high-resolution (2 km horizontal, 0.2 km vertical) DIAL cross section through the anvil outflow of tropical convection shows that the ambient humidity is increased by a factor of three across 100 km.
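The retrieval behind these measurements rests on the standard differential absorption principle: the water vapor number density follows from the logarithm of the ratio of the off-line and on-line backscatter signals at two range gates. In its textbook two-range-gate form (a generic sketch, not the specific TROCCINOX processing with its additional corrections):

```latex
n_{\mathrm{H_2O}}(R) = \frac{1}{2\,\Delta\sigma\,\Delta R}\,
\ln\!\left[\frac{P_{\mathrm{off}}(R+\Delta R)\,P_{\mathrm{on}}(R)}
                {P_{\mathrm{on}}(R+\Delta R)\,P_{\mathrm{off}}(R)}\right]
```

where Δσ is the difference between the absorption cross sections at the on-line and off-line wavelengths and P denotes the received signal powers. Probing absorption lines of different strength, as done here, amounts to choosing Δσ so that the retrieval stays sensitive across the large humidity range between 8 and 16 km.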
