    Spatial-temporal rainfall simulation using generalized linear models

    We consider the problem of simulating sequences of daily rainfall at a network of sites in such a way as to reproduce a variety of properties realistically over a range of spatial scales. The properties of interest will vary between applications but typically will include some measures of "extreme" rainfall in addition to means, variances, proportions of wet days, and autocorrelation structure. Our approach is to fit a generalized linear model (GLM) to rain gauge data and, with appropriate incorporation of intersite dependence structure, to use the GLM to generate simulated sequences. We illustrate the methodology using a data set from southern England and show that the GLM is able to reproduce many properties at spatial scales ranging from a single site to 2000 km² (the limit of the available data).
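
    As a rough illustration of the two-part GLM structure described above (one model for wet/dry occurrence, one for amounts on wet days), here is a minimal Python sketch using statsmodels; the covariates, the synthetic data and the fixed dispersion are illustrative assumptions, and the intersite dependence structure mentioned in the abstract is omitted.

        # Illustrative two-part GLM for daily rainfall at a single site:
        # a logistic GLM for wet/dry occurrence and a gamma GLM (log link)
        # for rainfall amounts on wet days. All data here are synthetic.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 3650                                   # ten years of synthetic daily data
        day = np.arange(n)
        X = sm.add_constant(np.column_stack([
            np.sin(2 * np.pi * day / 365.25),      # seasonal harmonics as covariates
            np.cos(2 * np.pi * day / 365.25),
        ]))

        # Synthetic "observations" so the example runs end to end.
        wet = rng.binomial(1, 0.4 + 0.2 * np.sin(2 * np.pi * day / 365.25))
        amount = np.where(wet == 1, rng.gamma(shape=2.0, scale=3.0, size=n), 0.0)

        # Occurrence model: probability that a day is wet.
        occ = sm.GLM(wet, X, family=sm.families.Binomial()).fit()

        # Amounts model: mean rainfall on wet days only.
        is_wet = wet == 1
        amt = sm.GLM(amount[is_wet], X[is_wet],
                     family=sm.families.Gamma(link=sm.families.links.Log())).fit()

        # Simulation: draw occurrence, then an amount conditional on a wet day.
        p_wet = occ.predict(X)
        mu = amt.predict(X)
        shape = 2.0                                # dispersion treated as fixed here
        sim = rng.binomial(1, p_wet) * rng.gamma(shape, mu / shape)
        print(sim[:10])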

    On the extreme hydrologic events determinants by means of Beta-Singh-Maddala reparameterization

    In previous studies, the beta-k distribution and distribution functions closely related to it have played important roles in representing extreme events. Among these distributions, the Beta-Singh-Maddala has proved adequate for modelling hydrological extreme events. Starting from this distribution, the aim of the paper is to express the model as a function of indices of hydrological interest and, at the same time, to investigate their dependence on a set of explanatory variables, so as to explore possible determinants of extreme hydrologic events. Finally, an application to a real hydrologic dataset is presented to show the potential of the proposed model for describing data and for understanding the effects of covariates on frequently adopted hydrological indicators.
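
    For orientation, the baseline construction behind the distribution named above can be sketched as the standard beta-generated family applied to the Singh-Maddala (Burr XII) CDF; the hydrological reparameterization and the regression on covariates developed in the paper are not reproduced here.

        % Singh-Maddala (Burr XII) CDF (shape parameters a, q; scale b):
        F(x) = 1 - \left[ 1 + \left( \tfrac{x}{b} \right)^{a} \right]^{-q}, \qquad x > 0

        % Beta-generated family applied to F (I is the regularized incomplete beta function):
        G_{\mathrm{BSM}}(x) = I_{F(x)}(\alpha, \beta)
            = \frac{1}{B(\alpha, \beta)} \int_{0}^{F(x)} t^{\alpha - 1} (1 - t)^{\beta - 1} \, dt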

    Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology

    The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as ‘deterministic components’ or ‘trends’ even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature, mainly due to the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on the application of some null hypothesis significance tests (NHSTs) for slowly varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, minima, etc.). However, the reliability of this application has seldom been explored in detail. This paper discusses misuse, misinterpretation, and logical flaws of NHST for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of NHST rationale, and basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even if the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling. We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of records. Moreover, even though adjusting procedures accounting for correlation have been developed, some of them are insufficient or are applied only to some tests, while others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can dramatically change if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
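
    Since the argument above repeatedly refers to Mann-Kendall-type tests, the following is a minimal Python sketch of the classical Mann-Kendall statistic applied to a series of annual values, with no tie correction and, deliberately, none of the autocorrelation adjustments whose shortcomings the paper discusses; the synthetic record is illustrative only.

        # Classical Mann-Kendall trend test (no ties, no autocorrelation
        # adjustment) applied to a series of annual summary values.
        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            x = np.asarray(x, dtype=float)
            n = len(x)
            # S statistic: sum of signs over all pairs i < j.
            s = sum(np.sign(x[j] - x[i])
                    for i in range(n - 1) for j in range(i + 1, n))
            # Variance of S under the null hypothesis, assuming no ties.
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            # Continuity-corrected standard normal score.
            if s > 0:
                z = (s - 1) / np.sqrt(var_s)
            elif s < 0:
                z = (s + 1) / np.sqrt(var_s)
            else:
                z = 0.0
            p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value
            return s, z, p_value

        rng = np.random.default_rng(1)
        annual_maxima = rng.gumbel(loc=100.0, scale=25.0, size=60)  # synthetic record
        print(mann_kendall(annual_maxima))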

    OpenTURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance of complex systems and to comply with tighter regulatory processes (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, to develop an open source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties, namely transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is open source software released under the LGPL license, provided as a C++ library with a Python TUI, and running under Linux and Windows. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as far as possible on an educational example that simulates the height of a river and compares it to the height of a dyke protecting industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
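
    As a rough sketch of what the Python TUI looks like in practice, the snippet below propagates uncertain inputs through a simplified river water-level model by Monte Carlo and compares the result to a dyke crest level; the functional form, the input distributions and all numerical values are illustrative assumptions, not the educational benchmark used in the paper.

        # Toy uncertainty propagation with the OpenTURNS Python TUI.
        import numpy as np
        import openturns as ot

        def water_level(x):
            # Simplified hydraulic relation; constants are placeholders.
            q, ks, zv, zm = x                    # flow, Strickler coefficient, bed levels
            q = max(q, 10.0)                     # guard against non-physical draws
            slope = (zm - zv) / 5000.0           # assumed 5 km reach, 300 m wide
            h = (q / (ks * 300.0 * np.sqrt(slope))) ** 0.6
            return [zv + h]

        g = ot.PythonFunction(4, 1, water_level)

        # Assumed marginal distributions for the four uncertain inputs.
        inputs = ot.ComposedDistribution([
            ot.Gumbel(558.0, 1013.0),                    # flow rate Q (parameters assumed)
            ot.TruncatedNormal(30.0, 7.5, 15.0, 60.0),   # Strickler coefficient Ks
            ot.Uniform(49.0, 51.0),                      # downstream bed level Zv
            ot.Uniform(54.0, 56.0),                      # upstream bed level Zm
        ])

        # Plain Monte Carlo propagation of the input uncertainty.
        output = ot.CompositeRandomVector(g, ot.RandomVector(inputs))
        sample = output.getSample(10000)

        levels = np.array(sample).ravel()
        dyke_crest = 52.5                        # arbitrary crest level for the comparison
        print("mean water level:", levels.mean())
        print("estimated overflow probability:", (levels > dyke_crest).mean())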

    Defining the hundred year flood: a Bayesian approach for using historic data to reduce uncertainty in flood frequency estimates

    This paper describes a Bayesian statistical model for estimating flood frequency by combining uncertain annual maximum (AMAX) data from a river gauge with estimates of flood peak discharge from various historic sources that predate the period of instrumental records. Such historic flood records promise to expand the time series data needed for reducing the uncertainty in return period estimates for extreme events, but the heterogeneity and uncertainty of historic records make them difficult to use alongside the Flood Estimation Handbook and other standard methods for generating flood frequency curves from gauge data. Using the flow of the River Eden in Carlisle, Cumbria, UK as a case study, this paper develops a Bayesian model for combining historic flood estimates since 1800 with gauge data since 1967 to estimate the probability of low-frequency flood events for the area, taking account of uncertainty in the discharge estimates. Results show a reduction in 95% confidence intervals of roughly 50% for annual exceedance probabilities of less than 0.0133 (return periods over 75 years) compared to standard flood frequency estimation methods using solely systematic data. Sensitivity analysis shows that the model is sensitive to two model parameters, both of which concern the historic (pre-systematic) period of the time series. This highlights the importance of adequate consideration of historic channel and floodplain changes or of possible bias in estimates of historic flood discharges. The next steps required to roll out this Bayesian approach for operational flood frequency estimation at other sites are also discussed.
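
    To make the idea concrete, here is a minimal Python sketch of combining a systematic AMAX record with censored historic information above a perception threshold, using a Gumbel flood frequency curve and a brute-force grid posterior with flat priors; the distribution choice, the likelihood for the historic period and every number below are illustrative assumptions, not the paper's model for the River Eden.

        # Toy Bayesian combination of gauged annual maxima with historic floods
        # known to have exceeded a perception threshold x0.
        import numpy as np
        from scipy.stats import gumbel_r

        # Synthetic "data" so the example runs: 50 years of gauged annual maxima,
        # plus a 150-year historic period in which 3 floods exceeded x0.
        rng = np.random.default_rng(2)
        amax = gumbel_r.rvs(loc=500.0, scale=150.0, size=50, random_state=rng)
        x0 = 1100.0                                          # perception threshold
        historic_peaks = np.array([1150.0, 1300.0, 1200.0])  # estimated magnitudes
        h_years = 150                                        # length of historic period
        k = len(historic_peaks)

        # Grid over the Gumbel location and scale parameters (flat priors).
        locs = np.linspace(300.0, 800.0, 120)
        scales = np.linspace(50.0, 400.0, 120)
        L, S = np.meshgrid(locs, scales, indexing="ij")

        log_post = np.zeros_like(L)
        for i in range(L.shape[0]):
            for j in range(L.shape[1]):
                mu, beta = L[i, j], S[i, j]
                # Systematic record: ordinary Gumbel likelihood.
                ll = gumbel_r.logpdf(amax, mu, beta).sum()
                # Historic period: (h_years - k) years stayed below the threshold,
                # and k floods above it have estimated magnitudes.
                ll += (h_years - k) * gumbel_r.logcdf(x0, mu, beta)
                ll += gumbel_r.logpdf(historic_peaks, mu, beta).sum()
                log_post[i, j] = ll

        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # Posterior mean of the 100-year flood (1% annual exceedance probability).
        q100 = gumbel_r.ppf(0.99, L, S)
        print("posterior mean 100-year flood:", (post * q100).sum())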

    Newdistns: An R Package for New Families of Distributions

    The contributed R package Newdistns, written by the authors, is introduced. The package computes the probability density function, cumulative distribution function, quantile function, random numbers and some measures of inference for nineteen families of distributions. Each family is flexible enough to encompass a large number of structures. The use of the package is illustrated using a real data set, and the robustness of random number generation is checked by simulation.
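
    The package itself is written in R, so the sketch below only illustrates, in Python, the generator idea behind such families: a new distribution is obtained by composing a parent CDF with a generator, here the beta-generated construction; the Weibull parent and the parameter values are arbitrary, and this is not the Newdistns API.

        # Generator idea behind "new families of distributions": compose a parent
        # CDF with the beta CDF to obtain a beta-G family, from which pdf, cdf,
        # quantile function and random numbers follow directly.
        import numpy as np
        from scipy.stats import beta, weibull_min

        a, b = 2.0, 3.0                            # beta-generator shape parameters
        parent = weibull_min(c=1.5, scale=10.0)    # arbitrary parent distribution

        def cdf(x):
            return beta.cdf(parent.cdf(x), a, b)

        def pdf(x):
            return beta.pdf(parent.cdf(x), a, b) * parent.pdf(x)

        def quantile(u):
            return parent.ppf(beta.ppf(u, a, b))

        def rvs(size, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            return quantile(rng.uniform(size=size))   # inverse-transform sampling

        x = np.linspace(0.5, 40.0, 5)
        print(cdf(x))
        print(pdf(x))
        print(quantile(np.array([0.5, 0.99])))
        print(rvs(3))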

    Extreme precipitation events in East Baton Rouge Parish: an areal rainfall frequency/magnitude analysis

    Severe rainfall events are one of the most frequent weather hazards in the United States. These events are particularly problematic for the southeastern United States because of its subtropical climate. For these reasons, and because of recent urban growth in the parish, East Baton Rouge Parish officials are concerned about whether the current stormwater drainage system can keep pace with development. As a result, this project evaluated the rainfall frequency/magnitude of parish-wide extreme events and their synoptic forcing mechanisms. To this end, this research mapped parish-wide storms and compared three interpolation techniques. It also compared two methods of areal summation and five quantile estimation techniques. Results of cross-validation suggested that kriging was the best interpolation technique for this research. Statistical testing also showed no significant differences between parish-wide rainfall totals calculated using gridded areal summation and contoured areal summation methods. Although the non-parametric SRCC method best fit the storm partial duration series, the parametric Beta-P was selected to produce quantile estimates. When areal design storms for East Baton Rouge Parish were compared to point rainfall totals for the parish from previous studies, areal totals were generally smaller. However, totals were larger for longer duration events (12- and 24-hour) at longer return intervals (50- and 100-year). This was attributed to differences in the distributions used in quantile estimation and in the periods of record between the studies. This research included some large events (i.e., T.S. Allison II) that were not included in the two earlier studies. Results from the synoptic analysis showed that the frontal forcing mechanism dominated storms at all durations. They also showed that only the 3-hour duration included air-mass induced events, suggesting that such events are not generally a problem for larger areas and are not significant in an areal analysis. Interannual variability showed that the years with the most events were associated with El Niño events, which increase precipitation in Louisiana, especially during winter. Most extreme events tend to occur in the month of April and are produced by fronts. In contrast, most extreme events resulting from tropical activity occurred in September.