82 research outputs found

    Assessing local and spatial uncertainty with nonparametric geostatistics

    Uncertainty quantification is an important topic for many environmental studies, such as identifying zones where potentially toxic materials exist in the soil. In this work, the nonparametric geostatistical framework of histogram via entropy reduction (HER) is adapted to address local and spatial uncertainty in the context of risk of soil contamination. HER works with empirical probability distributions, coupling information theory and probability aggregation methods to estimate conditional distributions, which gives it the flexibility to be tailored for different data and application purposes. To explore how HER can be used for estimating threshold-exceeding probabilities, it is applied to map the risk of soil contamination by lead in the well-known dataset of the region of Swiss Jura. Its results are compared to indicator kriging (IK) and to an ordinary kriging (OK) model available in the literature. For the analyzed dataset, IK and HER predictions achieve the best performance and exhibit comparable accuracy and precision. Compared to IK, advantages of HER for uncertainty estimation at a fine resolution are that it does not require modeling of multiple indicator variograms, correcting order-relation violations, or defining interpolation/extrapolation of distributions. Finally, to avoid the well-known smoothing effect when using point estimations (as is the case with both kriging and HER), and to provide maps that reflect the spatial fluctuation of the observed reality, we demonstrate how HER can be used in combination with sequential simulation to assess spatial uncertainty (uncertainty jointly over several locations).
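    To illustrate how a threshold-exceeding probability can be read off the kind of empirical (binned) conditional distribution that HER provides, the following minimal sketch integrates the predictive distribution above a regulatory threshold. The bin edges, probabilities, and threshold value are illustrative only and are not taken from the study.

```python
import numpy as np

def exceedance_probability(bin_edges, bin_probs, threshold):
    """Probability that the predicted value exceeds `threshold`,
    given an empirical (binned) conditional distribution.

    bin_edges : array of length n+1, monotonically increasing
    bin_probs : array of length n, summing to 1
    """
    bin_edges = np.asarray(bin_edges, dtype=float)
    bin_probs = np.asarray(bin_probs, dtype=float)
    lo, hi = bin_edges[:-1], bin_edges[1:]
    # bins entirely above the threshold contribute fully; the bin containing
    # the threshold contributes proportionally (uniform density within a bin)
    frac_above = np.clip((hi - threshold) / (hi - lo), 0.0, 1.0)
    return float(np.sum(bin_probs * frac_above))

# hypothetical conditional distribution of lead concentration (ppm)
edges = np.array([0, 25, 50, 75, 100, 150])
probs = np.array([0.10, 0.30, 0.35, 0.15, 0.10])
print(exceedance_probability(edges, probs, threshold=50.0))  # P(Pb > 50 ppm) = 0.60
```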

    Technical note: Complexity–uncertainty curve (c-u-curve) – a method to analyse, classify and compare dynamical systems

    We propose and provide a proof of concept of a method to analyse, classify and compare dynamical systems of arbitrary dimensions by the two key features uncertainty and complexity. It starts by subdividing the system's time trajectory into a number of time slices. For all values in a time slice, the Shannon information entropy is calculated, measuring within-slice variability. System uncertainty is then expressed by the mean entropy of all time slices. We define system complexity as “uncertainty about uncertainty” and express it by the entropy of the entropies of all time slices. Calculating and plotting uncertainty “u” and complexity “c” for many different numbers of time slices yields the c-u-curve. Systems can be analysed, compared and classified by the c-u-curve in terms of (i) its overall shape, (ii) mean and maximum uncertainty, (iii) mean and maximum complexity and (iv) characteristic timescale expressed by the width of the time slice for which maximum complexity occurs. We demonstrate the method with the example of both synthetic and real-world time series (constant, random noise, Lorenz attractor, precipitation and streamflow) and show that the shape and properties of the respective c-u-curve clearly reflect the particular characteristics of each time series. For the hydrological time series, we also show that the c-u-curve characteristics are in accordance with hydrological system understanding. We conclude that the c-u-curve method can be used to analyse, classify and compare dynamical systems. In particular, it can be used to classify hydrological systems into similar groups, a pre-condition for regionalization, and it can be used as a diagnostic measure and as an objective function in hydrological model calibration. Distinctive features of the method are (i) that it is based on unit-free probabilities, thus permitting application to any kind of data, (ii) that it is bounded, (iii) that it naturally expands from single-variate to multivariate systems, and (iv) that it is applicable to both deterministic and probabilistic value representations, permitting e.g. application to ensemble model predictions.
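    A minimal sketch of the c-u-curve construction as described above: slice the series, compute the Shannon entropy of each slice, take the mean of the slice entropies (uncertainty) and the entropy of the slice entropies (complexity), and repeat for several slice widths. The binning choices are assumptions; the published method additionally works with unit-free, bounded probabilities, which is not reproduced here.

```python
import numpy as np

def shannon_entropy(values, bins):
    """Shannon entropy (in bits) of a sample, estimated from a histogram."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def c_u_curve(series, slice_widths, value_bins=10, entropy_bins=10):
    """Return (width, uncertainty, complexity) triples for a 1-D time series.

    uncertainty = mean entropy of all time slices
    complexity  = entropy of the slice entropies ("uncertainty about uncertainty")
    """
    series = np.asarray(series, dtype=float)
    curve = []
    for w in slice_widths:
        n_slices = len(series) // w
        slice_entropies = [
            shannon_entropy(series[i * w:(i + 1) * w], value_bins)
            for i in range(n_slices)
        ]
        u = float(np.mean(slice_entropies))
        c = shannon_entropy(slice_entropies, entropy_bins)
        curve.append((w, u, c))
    return curve

# example: random noise (one of the synthetic test cases mentioned above)
rng = np.random.default_rng(0)
noise = rng.normal(size=10_000)
for w, u, c in c_u_curve(noise, slice_widths=[10, 50, 250]):
    print(f"width={w:4d}  uncertainty={u:.2f} bit  complexity={c:.2f} bit")
```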

    HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    Despite considerable progress in recent years, output of both global and regional circulation models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem, bias correction (BC; i.e. the correction of model output towards observations in a post-processing step) has now become a standard procedure in climate change impact studies. In this paper we argue that BC is currently often used in an invalid way: it is added to the GCM/RCM model chain without sufficient proof that the consistency of the latter (i.e. the agreement between model dynamics/model output and our judgement) as well as the generality of its applicability increases. BC methods often impair the advantages of circulation models by altering spatiotemporal field consistency, relations among variables and by violating conservation principles. Currently used BC methods largely neglect feedback mechanisms, and it is unclear whether they are time-invariant under climate change conditions. Applying BC increases agreement of climate model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this hides rather than reduces uncertainty, which may lead to avoidable forejudging of end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of circulation models in the short term and how to reduce the bias in the long term. The most promising strategy for improved future global and regional circulation model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.
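    The abstract discusses bias correction in general rather than a specific algorithm. Purely to illustrate what "correction of model output towards observations in a post-processing step" typically looks like, here is a minimal empirical quantile-mapping sketch, one widely used BC method; all data and names are hypothetical and it is not presented as the method the article endorses.

```python
import numpy as np

def quantile_mapping(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: map each future model value to the observed
    value at the same quantile of the historical distributions."""
    model_hist = np.sort(np.asarray(model_hist, dtype=float))
    obs_hist = np.sort(np.asarray(obs_hist, dtype=float))
    quantiles = np.linspace(0.0, 1.0, len(model_hist))
    # quantile of each future value within the historical model distribution
    q = np.interp(model_future, model_hist, quantiles)
    # observed value at that quantile
    return np.interp(q, np.linspace(0.0, 1.0, len(obs_hist)), obs_hist)

# hypothetical daily precipitation (mm): the model is roughly 20 % too wet
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, size=5000)
gcm_hist = np.maximum(obs * 1.2 + rng.normal(0, 0.5, size=5000), 0.0)
gcm_future = gcm_hist + 1.0
corrected = quantile_mapping(gcm_hist, obs, gcm_future)
print(obs.mean(), gcm_future.mean(), corrected.mean())
```

    Note that the transfer function is calibrated entirely on the historical period and then applied unchanged to the future simulation; this is exactly the kind of time-invariance assumption, and the narrowing of hindcast disagreement without physical justification, that the article questions.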

    Adaptive clustering: reducing the computational costs of distributed (hydrological) modelling by exploiting time-variable similarity among model elements

    In this paper we propose adaptive clustering as a new method for reducing the computational efforts of distributed modelling. It consists of identifying similar-acting model elements during runtime, clustering them, running the model for just a few representatives per cluster, and mapping their results to the remaining model elements in the cluster. Key requirements for the application of adaptive clustering are the existence of (i) many model elements with (ii) comparable structural and functional properties and (iii) only weak interaction (e.g. hill slopes, subcatchments, or surface grid elements in hydrological and land surface models). The clustering of model elements must not only consider their time-invariant structural and functional properties but also their current state and forcing, as all these aspects influence their current functioning. Joining model elements into clusters is therefore a continuous task during model execution rather than a one-time exercise that can be done beforehand. Adaptive clustering takes this into account by continuously checking the clustering and re-clustering when necessary. We explain the steps of adaptive clustering and provide a proof of concept using the example of a distributed, conceptual hydrological model fit to the Attert basin in Luxembourg. The clustering is done based on normalised and binned transformations of model element states and fluxes. Analysing a 5-year time series of these transformed states and fluxes revealed that many model elements act very similarly, and the degree of similarity varies strongly with time, indicating the potential for adaptive clustering to save computation time. Compared to a standard, full-resolution model run used as a virtual reality “truth”, adaptive clustering indeed reduced computation time by 75 %, while modelling quality, expressed as the Nash–Sutcliffe efficiency of subcatchment runoff, declined from 1 to 0.84. Based on this proof-of-concept application, we believe that adaptive clustering is a promising tool for reducing the computation time of distributed models. Being adaptive, it integrates and enhances existing methods of static grouping of model elements, such as lumping or grouped response units (GRUs). It is compatible with existing dynamical methods such as adaptive time stepping or adaptive gridding and, unlike the latter, does not require adjacency of the model elements to be joined. As a welcome side effect, adaptive clustering can be used for system analysis; in our case, analysing the space–time patterns of clustered model elements confirmed that the hydrological functioning of the Attert catchment is mainly controlled by the spatial patterns of geology and precipitation.
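    A minimal sketch of the adaptive-clustering loop described above, under assumptions: model elements are grouped by their normalised, binned states and fluxes, only one representative per group is run, and its output is copied to the other members. The model interface (elements[i].run(...)) is a hypothetical stand-in for the actual hydrological model element, and re-clustering is triggered simply by passing clusters=None.

```python
import numpy as np

def cluster_by_binned_state(states, n_bins=5):
    """Group model elements whose normalised, binned state/flux signatures
    coincide. `states` has shape (n_elements, n_variables).
    Returns a dict: signature -> list of element indices."""
    s = np.asarray(states, dtype=float)
    lo, hi = s.min(axis=0), s.max(axis=0)
    norm = (s - lo) / np.where(hi > lo, hi - lo, 1.0)            # scale to [0, 1]
    binned = np.minimum((norm * n_bins).astype(int), n_bins - 1)  # bin indices
    clusters = {}
    for idx, signature in enumerate(map(tuple, binned)):
        clusters.setdefault(signature, []).append(idx)
    return clusters

def adaptive_step(elements, states, forcing, clusters=None):
    """One (hypothetical) time step with adaptive clustering: re-cluster if
    requested, run one representative per cluster, copy its output to all
    cluster members."""
    if clusters is None:                         # continuous re-clustering hook
        clusters = cluster_by_binned_state(states)
    outputs = np.empty(len(elements))
    for members in clusters.values():
        rep = members[0]                         # representative element
        out = elements[rep].run(forcing[rep])    # full model call (stand-in API)
        outputs[members] = out                   # map result to cluster members
    return outputs, clusters
```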

    Disentangling timing and amplitude errors in streamflow simulations

    This article introduces an improvement of the Series Distance (SD) approach for sharper discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time–magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash–Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to construct uncertainty envelopes implies a trade-off between the added value of explicitly considering timing errors and the associated, inevitable time-spreading effect which inflates the related uncertainty ranges. Which effect dominates depends on the characteristics of timing errors in the hydrographs at hand. Our findings confirm that Series Distance is an elaborated concept for the comparison of simulated and observed streamflow time series which can be used for detailed hydrological analysis and model diagnostics and to inform us about uncertainties related to hydrological predictions.
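    A minimal sketch of the two ingredients that are easy to show in code: combined time-magnitude errors for pairs of hydrologically similar points, and "error dressing" by resampling from the resulting empirical two-dimensional error distribution. The pairing itself, i.e. the actual Series Distance matching and its automated level of generalization, is not reproduced here; function names and the array layout are assumptions.

```python
import numpy as np

def sd_error_pairs(t_obs, q_obs, t_sim, q_sim, pairs):
    """Given index pairs of hydrologically similar points (as identified by the
    SD matching, not reproduced here), return the combined timing and
    magnitude error of each pair as rows (dt, dq)."""
    i_obs, i_sim = np.asarray(pairs).T
    dt = t_sim[i_sim] - t_obs[i_obs]          # timing error
    dq = q_sim[i_sim] - q_obs[i_obs]          # magnitude error
    return np.column_stack([dt, dq])

def dress_forecast(t_fcst, q_fcst, errors, n_members, rng):
    """'Error dressing': build an uncertainty range around one deterministic
    forecast point by adding errors resampled from the empirical 2-D
    time-magnitude error distribution."""
    draws = errors[rng.integers(len(errors), size=n_members)]
    return t_fcst + draws[:, 0], q_fcst + draws[:, 1]

# usage idea: collect `errors` over a calibration period with sd_error_pairs,
# then call dress_forecast for each forecast point to obtain an ensemble.
```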

    Technical note: “Bit by bit”: a practical and general approach for evaluating model computational complexity vs. model performance

    One of the main objectives of the scientific enterprise is the development of well-performing yet parsimonious models for all natural phenomena and systems. In the 21st century, scientists usually represent their models, hypotheses, and experimental observations using digital computers. Measuring performance and parsimony of computer models is therefore a key theoretical and practical challenge for 21st century science. “Performance” here refers to a model’s ability to reduce predictive uncertainty about an object of interest. “Parsimony” (or complexity) comprises two aspects: descriptive complexity – the size of the model itself which can be measured by the disk space it occupies – and computational complexity – the model’s effort to provide output. Descriptive complexity is related to inference quality and generality; computational complexity is often a practical and economic concern for limited computing resources. In this context, this paper has two distinct but related goals. The first is to propose a practical method of measuring computational complexity by utility software “Strace”, which counts the total number of memory visits while running a model on a computer. The second goal is to propose the “bit by bit” method, which combines measuring computational complexity by “Strace” and measuring model performance by information loss relative to observations, both in bit. For demonstration, we apply the “bit by bit” method to watershed models representing a wide diversity of modelling strategies (artificial neural network, auto-regressive, process-based, and others). We demonstrate that computational complexity as measured by “Strace” is sensitive to all aspects of a model, such as the size of the model itself, the input data it reads, its numerical scheme, and time stepping. We further demonstrate that for each model, the bit counts for computational complexity exceed those for performance by several orders of magnitude and that the differences among the models for both computational complexity and performance can be explained by their setup and are in accordance with expectations. We conclude that measuring computational complexity by “Strace” is practical, and it is also general in the sense that it can be applied to any model that can be run on a digital computer. We further conclude that the “bit by bit” approach is general in the sense that it measures two key aspects of a model in the single unit of bit. We suggest that it can be enhanced by additionally measuring a model’s descriptive complexity – also in bit.
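    Measuring computational complexity with “Strace” is a system-level procedure and is not reproduced here. The performance half of the “bit by bit” idea, model performance expressed as information loss relative to observations in bits, can be sketched as follows; the Kullback-Leibler divergence between binned observed and simulated distributions is used as one plausible reading of "information loss in bit", and both this choice and the binning are assumptions rather than the paper's exact bookkeeping.

```python
import numpy as np

def information_loss_bits(obs, sim, bins=20):
    """Information loss of a simulation relative to observations, in bits,
    here expressed as the Kullback-Leibler divergence between the binned
    observed and simulated value distributions (an illustrative choice)."""
    edges = np.histogram_bin_edges(np.concatenate([obs, sim]), bins=bins)
    p, _ = np.histogram(obs, bins=edges)
    q, _ = np.histogram(sim, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    q = np.where(q > 0, q, 1e-12)       # floor to keep the divergence finite
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# hypothetical streamflow observations and a slightly biased simulation
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 5.0, size=3000)
sim = obs * 0.9 + rng.normal(0, 1.0, size=3000)
print(f"information loss: {information_loss_bits(obs, sim):.3f} bit")
```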

    On the dynamic nature of hydrological similarity

    The increasing diversity and resolution of spatially distributed data on terrestrial systems greatly enhance the potential of hydrological modeling. Optimal and parsimonious use of these data sources requires, however, that we better understand (a) which system characteristics exert primary controls on hydrological dynamics and (b) to what level of detail those characteristics need to be represented in a model. In this study we develop and test an approach to explore these questions that draws upon information theoretic and thermodynamic reasoning, using spatially distributed topographic information as a straightforward example. Specifically, we subdivide a mesoscale catchment into 105 hillslopes and represent each by a two-dimensional numerical hillslope model. These hillslope models differ exclusively with respect to topography-related parameters derived from a digital elevation model (DEM); the remaining setup and meteorological forcing for each are identical. We analyze the degree of similarity of simulated discharge and storage among the hillslopes as a function of time by examining the Shannon information entropy. We furthermore derive a “compressed” catchment model by clustering the hillslope models into functional groups of similar runoff generation using normalized mutual information (NMI) as a distance measure. Our results reveal that, within our given model environment, only a portion of the entire amount of topographic information stored within a digital elevation model is relevant for the simulation of distributed runoff and storage dynamics. This manifests through a possible compression of the model ensemble from the entire set of 105 hillslopes to only 6 hillslopes, each representing a different functional group, which leads to no substantial loss in model performance. Importantly, we find that the concept of hydrological similarity is not necessarily time invariant. On the contrary, the Shannon entropy as measure for diversity in the simulation ensemble shows a distinct annual pattern, with periods of highly redundant simulations, reflecting coherent and organized dynamics, and periods where hillslopes operate in distinctly different ways. We conclude that the proposed approach provides a powerful framework for understanding and diagnosing how and when process organization and functional similarity of hydrological systems emerge in time. Our approach is neither restricted to the model nor to model targets or the data source we selected in this study. Overall, we propose that the concepts of hydrological systems acting similarly (and thus giving rise to redundancy) or displaying unique functionality (and thus being irreplaceable) are not mutually exclusive. They are in fact of complementary nature, and systems operate by gradually changing to different levels of organization in time.
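    A minimal sketch of the diagnostic at the heart of this analysis: the Shannon entropy of a simulated variable (e.g. runoff) across all hillslopes at each time step, which is low when the hillslopes act redundantly and high when they function in distinctly different ways. Array shapes and binning are assumptions, not the study's actual setup.

```python
import numpy as np

def ensemble_entropy(values, bins=10):
    """Shannon entropy (bits) of the distribution of one variable across all
    hillslopes at a single time step: low entropy = redundant behaviour,
    high entropy = diverse functioning."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def entropy_series(runoff, bins=10):
    """Entropy time series for a simulated ensemble with shape
    (n_timesteps, n_hillslopes); layout is a placeholder assumption."""
    return np.array([ensemble_entropy(row, bins) for row in runoff])
```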

    Histogram via entropy reduction (HER): an information-theoretic alternative for geostatistics

    Interpolation of spatial data has been approached in many different ways, varying from deterministic to stochastic, parametric to nonparametric, and purely data-driven to geostatistical methods. In this study, we propose a nonparametric interpolator, which combines information theory with probability aggregation methods in a geostatistical framework for the stochastic estimation of unsampled points. Histogram via entropy reduction (HER) predicts conditional distributions based on empirical probabilities, relaxing parameterizations and, therefore, avoiding the risk of adding information not present in data. By construction, it provides a proper framework for uncertainty estimation since it accounts for both spatial configuration and data values, while allowing one to introduce or infer properties of the field through the aggregation method. We investigate the framework using synthetically generated data sets and demonstrate its efficacy in ascertaining the underlying field with varying sample densities and data properties. HER shows a comparable performance to popular benchmark models, with the additional advantage of higher generality. The novel method brings a new perspective of spatial interpolation and uncertainty analysis to geostatistics and statistical learning, using the lens of information theory.
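    As an illustration of the probability aggregation step that HER builds on, here is a minimal log-linear (geometric) pooling of two neighbours' conditional distributions defined on the same bins. The weights and distributions are made up, and the abstract does not specify which aggregation operator HER uses, so this is only one standard example of the technique.

```python
import numpy as np

def log_linear_pool(distributions, weights):
    """Log-linear (geometric) pooling of several discrete distributions defined
    on the same bins: a standard probability aggregation method."""
    P = np.asarray(distributions, dtype=float)     # shape (k, n_bins)
    w = np.asarray(weights, dtype=float)[:, None]  # one weight per distribution
    pooled = np.prod(np.power(P, w), axis=0)
    return pooled / pooled.sum()

# two neighbours' conditional distributions for an unsampled point (made up)
p1 = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
p2 = np.array([0.05, 0.15, 0.30, 0.35, 0.15])
print(log_linear_pool([p1, p2], weights=[0.6, 0.4]))
```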

    Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF - a performance test

    Quantitative precipitation estimation and forecasting (QPE and QPF) are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF) model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.
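    A minimal sketch of the quantitative verification step mentioned above: the Pearson correlation between 24-hourly accumulated precipitation from a model run and from observations, computed over stations or grid points. The array layout (hours by locations) is an assumption.

```python
import numpy as np

def accumulate_24h(hourly):
    """Sum hourly precipitation with shape (hours, locations) to 24 h totals."""
    return np.asarray(hourly, dtype=float).sum(axis=0)

def verification_correlation(sim_hourly, obs_hourly):
    """Pearson correlation of 24-hourly accumulated precipitation between a
    model run and observations (the kind of score reported above)."""
    sim = accumulate_24h(sim_hourly)
    obs = accumulate_24h(obs_hourly)
    return float(np.corrcoef(sim, obs)[0, 1])
```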