Flood frequency analysis supported by the largest historical flood
The use of non-systematic flood data for statistical purposes depends
on the reliability of the assessment of both flood magnitudes and their return periods.
The earliest known extreme flood year is usually taken as the beginning of the
historical record. Yet even if the magnitudes of historic floods are properly
assessed, the problem of their return periods remains unsolved. The difficulty
is that only the largest flood (XM) is known over the whole historical
period, and its occurrence marks the beginning of the historical period and
defines its length (<i>L</i>). Since it is common practice to use the earliest known flood
year as the beginning of the record, the selected <i>L</i> value is an
empirical estimate of the lower bound on the effective historical record
length <i>M</i>. Estimating the return period of XM from its time of occurrence
(<i>L</i>), i.e. taking <i>M̂</i> = <i>L</i>, gives a severe upward bias. The problem
is therefore to estimate the time period (<i>M</i>) representative of the largest
observed flood XM.
<br><br>
From the discrete uniform distribution with support 1, 2, ..., <i>M</i> for the
position <i>L</i> of XM, one gets <i>L̂</i> = <i>M</i>/2. Therefore
<i>M̂</i> = 2<i>L</i> has been taken both as the return period of XM and as the
effective historical record length. Since all elements of the systematic
period (<i>N</i>) are smaller than XM, this extends to
<i>M̂</i> = 2(<i>L</i> + <i>N</i>).
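The uniform-position argument above can be checked with a quick simulation (a minimal sketch; <i>M</i> = 200 and the trial count are arbitrary illustrative values):

```python
import random

# Sketch: if L, the year of the largest historical flood XM, is uniform
# on 1..M, then E[L] = (M + 1)/2, so M_hat = 2L is a roughly unbiased
# estimate of M for large M.
def mean_position(M, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(rng.randint(1, M) for _ in range(trials)) / trials

M = 200
est = mean_position(M)   # close to (M + 1)/2 = 100.5, so 2*est is close to M
```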
<br><br>
The efficiency of using the largest historical flood (XM) for large
quantile estimation (i.e. for the quantile with return period <i>T</i> = 100 years)
has been assessed by the maximum likelihood (ML) method for various lengths of the systematic
record (<i>N</i>) and various estimates of the historical period length
<i>M̂</i>, comparing the accuracy with the case where the systematic record
(<i>N</i>) alone is used. The simulation procedure
incorporates the <i>N</i>-year systematic record and the largest historic flood
(XM<sub><i>i</i></sub>) in the period <i>M</i>, which occurred in year <i>L</i><sub><i>i</i></sub> of the historical period. The simulation results for
selected two-parameter distributions, values of their parameters, and various <i>N</i>
and <i>M</i> values are presented in terms of the bias and root mean square error (RMSE) of the quantile of
interest, and are discussed in detail.
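A minimal sketch of such a simulation layout might look as follows (assumptions: a Gumbel parent with arbitrary illustrative parameters, and a simple method-of-moments fit standing in for the ML estimation used in the study):

```python
import math
import random

# Sketch of one simulation replicate: N systematic years plus M historical
# years from an assumed Gumbel parent; only the largest historical flood XM
# and its year of occurrence L are retained from the historical period.
def simulate_sample(N, M, loc=100.0, scale=30.0, rng=None):
    rng = rng or random.Random(0)
    gumbel = lambda: loc - scale * math.log(-math.log(rng.random()))
    systematic = [gumbel() for _ in range(N)]           # N systematic years
    historical = [gumbel() for _ in range(M)]           # M historical years
    L = max(range(M), key=historical.__getitem__) + 1   # year of XM within 1..M
    XM = historical[L - 1]                              # largest historical flood
    return systematic, XM, L

# Method-of-moments Gumbel fit and T-year quantile (a stand-in for the
# ML fitting of the study, kept simple for illustration).
def gumbel_quantile_mom(sample, T=100):
    n = len(sample)
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    beta = s * math.sqrt(6) / math.pi
    mu = m - 0.5772156649 * beta          # Euler-Mascheroni constant
    p = 1 - 1 / T
    return mu - beta * math.log(-math.log(p))
```

Repeating `simulate_sample` many times and comparing `gumbel_quantile_mom(systematic)` against an estimate that also uses (XM, <i>L</i>) would reproduce the bias/RMSE comparison described above.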
On accuracy of upper quantiles estimation
Flood frequency analysis (FFA) entails the estimation of the upper tail of a probability density function (PDF) of annual peak flows obtained from either the annual maximum series or the partial duration series. In hydrological practice, the properties of various methods of upper quantile estimation are identified with the case of a known population distribution function. In reality, the assumed hypothetical model differs from the true one, and one cannot assess the magnitude of the error caused by model misspecification with respect to any estimated statistic. Nevertheless, the opinion about the accuracy of upper quantile estimation methods formed for the case of a known population distribution function is commonly upheld. This issue is the subject of the paper. The accuracy of large quantile assessments obtained from four estimation methods is compared for the two-parameter log-normal and log-Gumbel distributions and their three-parameter counterparts, i.e. the three-parameter log-normal and GEV distributions. The cases of true and false hypothetical models are considered. The accuracy of flood quantile estimates depends on the sample size and the distribution type (both true and hypothetical), and strongly depends on the estimation method. In particular, the maximum likelihood method loses its advantageous properties in the case of model misspecification.
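The effect of model misspecification on an upper quantile can be illustrated with a small sketch (assumptions: a log-Gumbel parent with arbitrary illustrative parameters, fitted by a two-parameter log-normal via the moments of the logs; none of the numbers come from the paper):

```python
import math
import random
from statistics import NormalDist

# Sketch: data drawn from a log-Gumbel parent but fitted with a
# two-parameter log-normal (ML fit = sample mean/variance of the logs).
# The hypothetical model's 100-year quantile is compared to the true one.
def misspecified_q100(n=100, loc=4.0, scale=0.3, seed=7):
    rng = random.Random(seed)
    # log-Gumbel sample: ln X follows a Gumbel(loc, scale) distribution
    logs = [loc - scale * math.log(-math.log(rng.random())) for _ in range(n)]
    mu = sum(logs) / n
    sigma = math.sqrt(sum((y - mu) ** 2 for y in logs) / n)    # ML variance
    q_fit = math.exp(mu + sigma * NormalDist().inv_cdf(0.99))  # hypothetical model
    q_true = math.exp(loc - scale * math.log(-math.log(0.99))) # true parent
    return q_fit, q_true
```

For large samples the fitted log-normal systematically underestimates the heavier log-Gumbel upper tail, which is one way a false hypothetical model distorts the quantile of interest.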
The stationarity paradigm revisited: Hypothesis testing using diagnostics, summary metrics, and DREAM (ABC)
Many watershed models used within the hydrologic research community assume (by default) stationary conditions, that is, the key watershed properties that control water flow are considered to be time invariant. This assumption is rather convenient and pragmatic and opens up the wide arsenal of (multivariate) statistical and nonlinear optimization methods for inference of the (temporally fixed) model parameters. Several contributions to the hydrologic literature have brought into question the continued usefulness of this stationary paradigm for hydrologic modeling. This paper builds on the likelihood-free diagnostics approach of Vrugt and Sadegh () and uses a diverse set of hydrologic summary metrics to test the stationary hypothesis and detect changes in the watershed's response to hydroclimatic forcing. Models with fixed parameter values cannot adequately simulate temporal variations in the summary statistics of the observed catchment data, and consequently the DREAM(ABC) algorithm cannot find solutions that sufficiently honor the observed metrics. We demonstrate that the presented methodology is able to differentiate successfully between watersheds that are classified as stationary and those that have undergone significant changes in land use, urbanization, and/or hydroclimatic conditions, and thus are deemed nonstationary.
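The underlying idea of testing stationarity through summary metrics can be caricatured in a few lines (a deliberately simplified sketch: a single metric, the block mean of annual flows, and a plain relative-drift check in place of the diagnostics and DREAM(ABC) machinery of the paper; the block count and `tol` threshold are arbitrary):

```python
# Sketch: split an annual flow series into blocks, compute one summary
# metric (the block mean) per block, and flag the series as nonstationary
# if any block's metric drifts too far from the overall value.
def is_stationary(annual_flows, n_blocks=4, tol=0.2):
    size = len(annual_flows) // n_blocks
    means = [sum(annual_flows[i * size:(i + 1) * size]) / size
             for i in range(n_blocks)]
    overall = sum(annual_flows[:size * n_blocks]) / (size * n_blocks)
    # Stationary if every block mean stays within tol (relative) of overall
    return all(abs(m - overall) / overall <= tol for m in means)
```

A flat series passes the check, while a steadily trending one fails it; the paper replaces this single ad hoc metric with a diverse metric set and a formal likelihood-free inference scheme.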