Investigating prior parameter distributions in the inverse modelling of water distribution hydraulic models
Published Journal Article. © 2014 Journal of Mechanical Engineering. All rights reserved.
Inverse modelling concentrates on estimating water distribution system (WDS) model parameters that are not directly measurable, e.g. pipe roughness coefficients, and that can therefore only be estimated by indirect approaches, i.e. inverse modelling. Estimation of the parameter and predictive uncertainty of WDS models is an essential part of the inverse modelling process. Recently, Markov Chain Monte Carlo (MCMC) simulations have gained in popularity in uncertainty analyses due to their effective and efficient exploration of posterior parameter probability density functions (pdfs). A Bayesian framework is used to update prior parameter information, via a likelihood function, into plausible ranges of the posterior parameter pdfs. Improved parameter and predictive uncertainty estimates are achieved through the incorporation of prior pdfs of parameter values and the use of a generalized likelihood function. We used three prior-information sampling schemes to infer the pipe roughness coefficients of WDS models. A hypothetical case study and a real-world WDS case study were used to illustrate the strengths and weaknesses of a particular selection of a prior-information pdf. The results obtained show that the level of parameter identifiability (i.e. sensitivity) is an important property for prior pdf selection.
We are obliged to Jasper A. Vrugt and Cajo ter Braak for providing the code of the DREAM(ZS) algorithm and the graphical post-processing software.
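The combination described above, a prior pdf over a roughness coefficient, a likelihood over noisy head observations, and MCMC exploration of the posterior, can be illustrated with a minimal Metropolis sampler. Everything below is a toy sketch: the one-pipe head-loss model, the constants, and the informative normal prior are invented for illustration; the paper itself applies DREAM(ZS) to full WDS models.

```python
import math
import random

random.seed(0)

# Toy Bayesian inversion of a single Hazen-Williams roughness C from
# noisy head-loss observations h = K / C^1.852 + noise, sampled with a
# plain Metropolis random walk. All constants here are illustrative.
K = 1.0e4
C_TRUE = 110.0
SIGMA = 0.05

def head_loss(c):
    return K / c ** 1.852

h_obs = [head_loss(C_TRUE) + random.gauss(0.0, SIGMA) for _ in range(20)]

def log_likelihood(c):
    return -sum((h - head_loss(c)) ** 2 for h in h_obs) / (2.0 * SIGMA ** 2)

def log_prior(c):
    # Informative N(100, 20^2) prior; replace with 0.0 for a flat prior
    # to see how the choice of prior pdf moves the posterior.
    return -((c - 100.0) ** 2) / (2.0 * 20.0 ** 2)

def metropolis(n_steps=5000, step=2.0):
    c = 100.0
    lp = log_likelihood(c) + log_prior(c)
    samples = []
    for _ in range(n_steps):
        c_new = c + random.gauss(0.0, step)
        if c_new > 0.0:
            lp_new = log_likelihood(c_new) + log_prior(c_new)
            if math.log(random.random()) < lp_new - lp:
                c, lp = c_new, lp_new
        samples.append(c)
    return samples

samples = metropolis()
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # drop burn-in
```

Swapping the `log_prior` definition is the single-parameter analogue of the prior-selection experiments in the abstract: a well-identified parameter is dominated by the likelihood, so the posterior barely moves.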
Toward improved identifiability of hydrologic model parameters: The information content of experimental data
We have developed a sequential optimization methodology, entitled the parameter identification method based on the localization of information (PIMLI), that increases information retrieval from the data by inferring the location and type of measurements that are most informative for the model parameters. The PIMLI approach merges the strengths of the generalized sensitivity analysis (GSA) method [Spear and Hornberger, 1980], the Bayesian recursive estimation (BARE) algorithm [Thiemann et al., 2001], and the Metropolis algorithm [Metropolis et al., 1953]. Three case studies with increasing complexity are used to illustrate the usefulness and applicability of the PIMLI methodology. The first two case studies consider the identification of soil hydraulic parameters using soil water retention data and a transient multistep outflow experiment (MSO), whereas the third study involves the calibration of a conceptual rainfall-runoff model.
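The generalized sensitivity analysis ingredient of PIMLI can be sketched as a behavioural/non-behavioural split in the spirit of Spear and Hornberger (1980): a parameter whose marginal distribution differs strongly between the two groups is identifiable. The toy model, acceptance threshold, and KS-style statistic below are illustrative assumptions, not the paper's case studies.

```python
import random

random.seed(1)

# Regional (generalized) sensitivity analysis: sample parameters, split
# the runs into behavioural / non-behavioural by an error threshold, and
# compare the two marginal distributions per parameter.
def toy_model(a, b):
    # Output depends strongly on a and weakly on b, so a should be
    # identifiable and b should not.
    return 2.0 * a + 0.1 * b

TARGET = 1.0
beh_a, non_a, beh_b, non_b = [], [], [], []
for _ in range(5000):
    a, b = random.random(), random.random()
    if abs(toy_model(a, b) - TARGET) < 0.2:
        beh_a.append(a)
        beh_b.append(b)
    else:
        non_a.append(a)
        non_b.append(b)

def ks_distance(x, y):
    # Maximum gap between two empirical CDFs, evaluated on a fixed grid.
    d = 0.0
    for i in range(101):
        g = i / 100.0
        fx = sum(1 for v in x if v <= g) / len(x)
        fy = sum(1 for v in y if v <= g) / len(y)
        d = max(d, abs(fx - fy))
    return d

d_a = ks_distance(beh_a, non_a)   # large gap -> a is identifiable
d_b = ks_distance(beh_b, non_b)   # small gap -> b is poorly identified
```

PIMLI goes further by asking which *measurements* would widen the gap for a given parameter, but the split-and-compare step above is the underlying idea.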
Data assimilation in slow-fast systems using homogenized climate models
A deterministic multiscale toy model is studied in which a chaotic fast
subsystem triggers rare transitions between slow regimes, akin to weather or
climate regimes. Using homogenization techniques, a reduced stochastic
parametrization model is derived for the slow dynamics. The reliability of this
reduced climate model in reproducing the statistics of the slow dynamics of the
full deterministic model for finite values of the time scale separation is
numerically established. These statistics, however, are sensitive to uncertainties
in the parameters of the stochastic model. It is investigated whether the
stochastic climate model can be beneficial as a forecast model in an ensemble
data assimilation setting, in particular in the realistic setting when
observations are only available for the slow variables. The main result is that
reduced stochastic models can indeed improve the analysis skill, when used as
forecast models instead of the perfect full deterministic model. The stochastic
climate model is far superior at detecting transitions between regimes. The
observation intervals for which skill improvement can be obtained are related
to the characteristic time scales involved. The reason why stochastic climate
models are capable of producing superior skill in an ensemble setting is due to
the finite ensemble size; ensembles obtained from the perfect deterministic
forecast model lack sufficient spread even for moderate ensemble sizes.
Stochastic climate models offer a natural way to generate sufficient ensemble
spread to detect transitions between regimes. This is corroborated by
numerical simulations. The conclusion is that stochastic parametrizations are
attractive for data assimilation despite their sensitivity to uncertainties in
the parameters.
Comment: Accepted for publication in the Journal of the Atmospheric Sciences.
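The ensemble-spread argument above can be made concrete with a single analysis step. The scalar model, noise levels, and the stochastic-EnKF update with perturbed observations below are all illustrative stand-ins, not the paper's homogenized climate model or its assimilation scheme.

```python
import random
import statistics

random.seed(2)

# One ensemble-Kalman analysis step for a scalar slow variable, with a
# stochastic forecast model standing in for the homogenized climate
# model. All numbers are invented for illustration.
N = 20                                   # ensemble size
OBS_ERR = 0.3                            # observation error std
truth = 1.0
y = truth + random.gauss(0.0, OBS_ERR)   # observation of the slow variable

def forecast(x, stochastic=True):
    # Deterministic drift plus an optional stochastic term that mimics
    # the parametrized effect of the unresolved fast subsystem.
    x_next = 0.8 * x + 0.1
    if stochastic:
        x_next += random.gauss(0.0, 0.2)
    return x_next

def analysis(ensemble, y, obs_err):
    # Stochastic EnKF update with perturbed observations.
    var = statistics.variance(ensemble)
    gain = var / (var + obs_err ** 2)
    return [x + gain * (y + random.gauss(0.0, obs_err) - x) for x in ensemble]

ens_f = [forecast(0.5) for _ in range(N)]   # noise term spreads the members
ens_a = analysis(ens_f, y, OBS_ERR)
spread_f = statistics.pstdev(ens_f)
spread_a = statistics.pstdev(ens_a)
```

With `stochastic=False` every member forecasts the identical value, the ensemble variance collapses to zero, and the Kalman gain vanishes, so the observation is ignored: the scalar version of the spread deficiency the abstract attributes to the perfect deterministic model.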
Flexible Tweedie regression models for continuous data
Tweedie regression models provide a flexible family of distributions to deal
with non-negative highly right-skewed data as well as symmetric and heavy
tailed data and can handle continuous data with probability mass at zero. The
estimation and inference of Tweedie regression models based on the maximum
likelihood method are challenged by the presence of an infinite sum in the
probability function and non-trivial restrictions on the power parameter space.
In this paper, we propose two approaches for fitting Tweedie regression models,
namely, quasi- and pseudo-likelihood. We discuss the asymptotic properties of
the two approaches and perform simulation studies to compare our methods with
the maximum likelihood method. In particular, we show that the quasi-likelihood
method provides asymptotically efficient estimation for regression parameters.
The computational implementation of the alternative methods is faster and
easier than the orthodox maximum likelihood, relying on a simple Newton scoring
algorithm. Simulation studies showed that the quasi- and pseudo-likelihood
approaches present estimates, standard errors and coverage rates similar to the
maximum likelihood method. Furthermore, the second-moment assumptions required
by the quasi- and pseudo-likelihood methods enable us to extend the Tweedie
regression models to the class of quasi-Tweedie regression models in
Wedderburn's style. Moreover, they allow us to eliminate the non-trivial
restriction on the power parameter space, and thus provide a flexible
regression model to deal with continuous data. We provide an R
implementation and illustrate the application of Tweedie regression models
using three data sets.
Comment: 34 pages, 8 figures.
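The "simple Newton scoring algorithm" the abstract credits for the speed of the quasi-likelihood approach can be sketched for one covariate: only the mean-variance relation Var(Y) = φ·μ^p is assumed, never the Tweedie density itself, so no infinite sum appears. The synthetic data below use Gaussian noise with a Tweedie-like variance as a stand-in for genuine Tweedie draws; all names and constants are illustrative, not the authors' implementation.

```python
import math
import random

random.seed(3)

# Wedderburn-style quasi-likelihood fit for a Tweedie-type mean-variance
# relation Var(Y) = phi * mu^p with a log link, via Newton/Fisher scoring.
P = 1.5                        # power parameter (1 < p < 2)
B0_TRUE, B1_TRUE = 0.5, 1.2
N = 400

x = [random.random() for _ in range(N)]
y = []
for xi in x:
    mu = math.exp(B0_TRUE + B1_TRUE * xi)
    # Gaussian noise with Tweedie-like variance, clipped to stay positive.
    y.append(max(1e-6, mu + random.gauss(0.0, math.sqrt(mu ** P))))

def fit_quasi(x, y, p, iters=25):
    # Quasi-score U_j = sum_i (y_i - mu_i) / V(mu_i) * dmu_i/dbeta_j,
    # with V(mu) = mu^p and dmu/dbeta_j = mu * x_ij under the log link.
    b0, b1 = math.log(sum(y) / len(y)), 0.0
    for _ in range(iters):
        s0 = s1 = j00 = j01 = j11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            w = mu ** (2.0 - p)          # mu^2 / V(mu): scoring weight
            r = (yi - mu) / mu
            s0 += w * r
            s1 += w * r * xi
            j00 += w
            j01 += w * xi
            j11 += w * xi * xi
        det = j00 * j11 - j01 * j01      # invert the 2x2 information matrix
        b0 += (j11 * s0 - j01 * s1) / det
        b1 += (j00 * s1 - j01 * s0) / det
    return b0, b1

b0_hat, b1_hat = fit_quasi(x, y, P)
```

Because the loop only ever evaluates μ and μ^p, the power p can range freely over the values the full likelihood would forbid, which is the flexibility the abstract highlights.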
Contamination of stellar-kinematic samples and uncertainty about dark matter annihilation profiles in ultrafaint dwarf galaxies: the example of Segue I
The expected gamma-ray flux coming from dark matter annihilation in dwarf
spheroidal (dSph) galaxies depends on the so-called `J-factor', the integral of
the squared dark matter density along the line-of-sight. We examine the degree
to which estimates of J are sensitive to contamination (by foreground Milky Way
stars and stellar streams) of the stellar-kinematic samples that are used to
infer dark matter densities in `ultrafaint' dSphs. Applying standard kinematic
analyses to hundreds of mock data sets that include varying levels of
contamination, we find that mis-classified contaminants can cause J-factors to
be overestimated by orders of magnitude. Stellar-kinematic data sets for which
we obtain such biased estimates tend 1) to include relatively large fractions
of stars with ambiguous membership status, and 2) to give estimates for J that
are sensitive to specific choices about how to weight and/or to exclude stars
with ambiguous status. Comparing publicly-available stellar-kinematic samples
for the nearby dSphs Reticulum II and Segue I, we find that only the latter
displays both of these characteristics. Estimates of Segue I's J-factor should
therefore be regarded with a larger degree of caution when planning and
interpreting gamma-ray observations. Moreover, robust interpretations regarding
dark matter annihilation in dSph galaxies in general will require explicit
examination of how interlopers might affect the inferred dark matter density
profile.
Comment: 12 pages, 8 figures. New appendix A (joint light/dark matter likelihood), results unchanged. Matches accepted MNRAS version.
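The quantity at stake, the J-factor as the line-of-sight integral of the squared dark matter density, is easy to sketch numerically. The NFW profile parameters and distance below are hypothetical round numbers, not fitted values for Segue I; the point is only the shape of the integral.

```python
import math

# J(theta) ~ integral of rho(r)^2 along the line of sight at angle theta
# from the dwarf's centre. All parameter values are hypothetical.
RHO_S = 1.0e8     # scale density  [Msun / kpc^3]
R_S = 0.1         # scale radius   [kpc]
D = 23.0          # distance to the dwarf [kpc]

def rho_nfw(r):
    u = r / R_S
    return RHO_S / (u * (1.0 + u) ** 2)

def j_factor(theta, n=20000, half_width=2.0):
    # Trapezoidal integration of rho^2 ds over s in [D - hw, D + hw],
    # with r(s)^2 = s^2 + D^2 - 2 s D cos(theta) (law of cosines).
    s_lo = D - half_width
    ds = 2.0 * half_width / n
    total = 0.0
    for i in range(n + 1):
        s = s_lo + i * ds
        r = math.sqrt(s * s + D * D - 2.0 * s * D * math.cos(theta))
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho_nfw(r) ** 2 * ds
    return total  # units: Msun^2 / kpc^5, left unconverted here

j_center = j_factor(0.001)   # line of sight near the centre
j_off = j_factor(0.01)       # line of sight further from the centre
```

Because the integrand is the *square* of the density, J is dominated by the innermost part of the profile, which is exactly where misclassified interloper stars inflate the inferred density and hence, as the abstract shows, can bias J by orders of magnitude.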