72,091 research outputs found
Uniform fractional factorial designs
The minimum aberration criterion has been frequently used in the selection of
fractional factorial designs with nominal factors. For designs with
quantitative factors, however, level permutation of factors could alter their
geometrical structures and statistical properties. In this paper, uniformity is
used, in addition to the minimum aberration criterion, to further distinguish
fractional factorial designs. We show that minimum aberration designs have low
discrepancies on average. An efficient method for constructing uniform minimum
aberration designs is proposed and optimal designs with 27 and 81 runs are
obtained for practical use. These designs have good uniformity and are
effective for studying quantitative factors.

Comment: Published at http://dx.doi.org/10.1214/12-AOS987 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
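The uniformity mentioned in the abstract above is conventionally quantified by a discrepancy measure. As an illustrative sketch, the centered L2-discrepancy (one standard choice; the 9-run, 2-factor design below is a toy example, not one of the paper's 27- or 81-run designs) can be computed as follows:

```python
def centered_l2_discrepancy(points):
    """Squared centered L2-discrepancy of a design scaled into [0,1]^s.

    Lower values indicate a more uniform spread of the design points.
    """
    n = len(points)
    s = len(points[0])
    term1 = (13.0 / 12.0) ** s
    term2 = 0.0
    for x in points:
        prod = 1.0
        for xk in x:
            prod *= 1.0 + 0.5 * abs(xk - 0.5) - 0.5 * abs(xk - 0.5) ** 2
        term2 += prod
    term3 = 0.0
    for x in points:
        for y in points:
            prod = 1.0
            for xk, yk in zip(x, y):
                prod *= (1.0 + 0.5 * abs(xk - 0.5) + 0.5 * abs(yk - 0.5)
                         - 0.5 * abs(xk - yk))
            term3 += prod
    return term1 - (2.0 / n) * term2 + term3 / n ** 2

# Toy design: full 3^2 factorial, 3-level factors mapped to [0,1]
# via (level + 0.5) / 3, giving coordinates 1/6, 1/2, 5/6.
design = [((a + 0.5) / 3, (b + 0.5) / 3) for a in range(3) for b in range(3)]
cd2 = centered_l2_discrepancy(design)
```

Comparing cd2 across level permutations of the same factorial design is how uniformity can break ties that the aberration criterion leaves unresolved.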
The robustness of the derived design life levels of heavy precipitation events in the pre-alpine Oberland region of Southern Germany
Extreme value analysis (EVA) is well established for deriving hydrometeorological design values for infrastructure that must withstand extreme events. Since there is concern that climate change will bring more frequent and more hazardous extremes, non-stationary variants of EVA have been introduced in which statistical properties are no longer assumed constant but vary over time. In this study, both stationary and non-stationary EVA models are used to derive design life levels (DLLs) of daily precipitation in the pre-alpine Oberland region of Southern Germany, an orographically complex region characterized by heavy precipitation events and climate change. As EVA is fraught with uncertainties, it is crucial to quantify its methodological impacts: two theoretical distributions (the Generalized Extreme Value (GEV) and Generalized Pareto (GP) distributions) and four parameter estimation techniques (Maximum Likelihood Estimation (MLE), L-moments, Generalized Maximum Likelihood Estimation (GMLE), and Bayesian estimation) are evaluated and compared. The study reveals large methodological uncertainties: discrepancies due to the parameter estimation method may reach up to 45% of the mean absolute value, while differences between stationary and non-stationary models are of the same magnitude (differences in DLLs of up to 40%). For the end of this century, no robust tendency towards increased extremes is found in the Oberland region.
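The design values discussed above come from inverting a fitted extreme-value distribution at a chosen return period. A minimal sketch of that step for the GEV case (the parameter values below are purely illustrative, not fitted Oberland values):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) distribution of
    annual maxima, i.e., the solution z of F(z) = 1 - 1/T."""
    y = -math.log(1.0 - 1.0 / T)   # reduced variate
    if abs(xi) < 1e-9:             # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y ** (-xi) - 1.0)

# Hypothetical daily-precipitation parameters (mm): location 40,
# scale 12, slightly heavy upper tail (xi = 0.1).
z100 = gev_return_level(40.0, 12.0, 0.1, 100.0)
z50 = gev_return_level(40.0, 12.0, 0.1, 50.0)
```

A non-stationary variant would replace mu (and possibly sigma) by a function of time, which is what makes the resulting DLLs differ from the stationary ones.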
A physically meaningful method for the comparison of potential energy functions
In the study of the conformational behavior of complex systems, such as
proteins, several related statistical measures are commonly used to compare two
different potential energy functions. Among them, the Pearson's correlation
coefficient r has no units and allows only semi-quantitative statements to be
made. Those that do have units of energy and whose value may be compared to a
physically relevant scale, such as the root mean square deviation (RMSD), the
mean error of the energies (ER), the standard deviation of the error (SDER) or
the mean absolute error (AER), overestimate the distance between potentials.
Moreover, their precise statistical meaning is far from clear. In this article,
a new measure of the distance between potential energy functions is defined
which overcomes the aforementioned difficulties. In addition, its precise
physical meaning is discussed, the important issue of its additivity is
investigated and some possible applications are proposed. Finally, two of these
applications are illustrated with practical examples: the study of the van der
Waals energy, as implemented in CHARMM, in the Trp-Cage protein (PDB code 1L2Y)
and the comparison of different levels of the theory in the ab initio study of
the Ramachandran map of the model peptide HCO-L-Ala-NH2.

Comment: 30 pages, 7 figures, LaTeX, BibTeX. v2: A misspelling in the author's name has been corrected. v3: A new application of the method has been added at the end of section 9 and minor modifications have also been made in other sections. v4: Journal reference and minor corrections added.
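The standard measures named in the abstract (Pearson's r, RMSD, ER, SDER, AER) can be computed directly; a minimal sketch on toy energy lists (not CHARMM or ab initio data):

```python
import math

def compare_potentials(e1, e2):
    """Common measures for comparing two potential energy functions
    evaluated on the same set of conformations (energy units of e1/e2,
    except the unitless Pearson correlation r)."""
    n = len(e1)
    diffs = [a - b for a, b in zip(e1, e2)]
    er = sum(diffs) / n                                      # mean error
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)          # root mean square deviation
    sder = math.sqrt(sum((d - er) ** 2 for d in diffs) / n)  # std. dev. of the error
    aer = sum(abs(d) for d in diffs) / n                     # mean absolute error
    m1, m2 = sum(e1) / n, sum(e2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(e1, e2)) / n
    s1 = math.sqrt(sum((a - m1) ** 2 for a in e1) / n)
    s2 = math.sqrt(sum((b - m2) ** 2 for b in e2) / n)
    r = cov / (s1 * s2)                                      # Pearson correlation
    return {"r": r, "RMSD": rmsd, "ER": er, "SDER": sder, "AER": aer}

# Two toy energy profiles differing by an offset plus small perturbations.
ea = [1.0, 2.0, 3.0, 4.0, 5.0]
eb = [1.5, 2.4, 3.6, 4.5, 5.5]
m = compare_potentials(ea, eb)
```

Note the identity RMSD^2 = ER^2 + SDER^2 (with population-style /n definitions), which makes concrete why RMSD mixes systematic offset and scatter, one motivation the abstract gives for seeking a better-behaved distance.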
Towards gigantic RVE sizes for 3D stochastic fibrous networks
The size of the representative volume element (RVE) for 3D stochastic fibrous media is investigated. A statistical RVE size determination method is applied to a specific model of random microstructure: Poisson fibers. The definition of RVE size is related to the concept of integral range. What happens in microstructures exhibiting an infinite integral range? Computational homogenization for thermal and elastic properties is performed with finite elements, over hundreds of realizations of the stochastic microstructural model, using uniform and mixed boundary conditions. The generated data undergo statistical treatment, from which gigantic RVE sizes emerge. The method used for determining RVE sizes is found to be operational even for pathological media, i.e., those with an infinite integral range, an interconnected percolating porous phase, and an infinite contrast of properties.
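For media with a finite integral range, the statistical RVE method mentioned above rests on the variance of apparent properties decaying as D_Z^2(V) = D_Z^2 * A3 / V, so the domain size needed for a target precision follows from the relative error of the mean over n realizations. A sketch with purely illustrative numbers (not the Poisson-fiber results of the paper):

```python
def rve_volume(point_variance, integral_range, mean_value,
               n_realizations, rel_error):
    """Domain volume V such that averaging apparent properties over
    n realizations of volume V reaches the target relative error.

    Uses D_Z^2(V) = D_Z^2 * A3 / V (finite integral range A3) and
    eps = 2 * D_Z(V) / (mean * sqrt(n)), hence
    V = 4 * D_Z^2 * A3 / (eps^2 * mean^2 * n).
    """
    return (4.0 * point_variance * integral_range
            / (rel_error ** 2 * mean_value ** 2 * n_realizations))

# Illustrative inputs: point variance 0.04, integral range 2.0 (volume
# units), mean property 1.0, 100 realizations, 1% relative error.
V = rve_volume(0.04, 2.0, 1.0, 100, 0.01)
```

The pathological case studied in the paper is precisely the one where this relation breaks down, because A3 is infinite and the variance no longer decays as 1/V.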
A single scaling parameter as a first approximation to describe the rainfall pattern of a place: application on Catalonia
As in many other natural processes, it has frequently been observed that the phenomena arising from rainfall generation exhibit statistical fractal self-similarity, and thus rainfall series generally show scaling properties. Based on this fact, a methodology known as simple scaling is used quite broadly to derive or reproduce the intensity–duration–frequency curves of a place. In the present work, the relationship between the simple scaling parameter and the characteristic rainfall pattern of the study area has been investigated. The scaling parameter was calculated from 147 selected daily rainfall series covering the period between 1883 and 2016 over the Catalonian territory (Spain) and its nearby surroundings, and the relationship between the spatial distribution of the scaling parameter and the rainfall pattern is discussed, as well as trends in this scaling parameter over the past decades, possibly due to climate change.
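Under simple scaling, rainfall intensity moments follow a power law in duration, so the scaling parameter can be estimated as a log-log regression slope. A minimal sketch using the first moment and synthetic data (not the Catalonian station series):

```python
import math

def scaling_exponent(durations, mean_intensity):
    """Estimate the simple-scaling parameter n from the least-squares
    slope of log(mean intensity) versus log(duration), assuming
    E[I(d)] is proportional to d**(-n)."""
    xs = [math.log(d) for d in durations]
    ys = [math.log(m) for m in mean_intensity]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # positive n for intensity decaying with duration

# Synthetic mean intensities obeying I(d) = 50 * d**(-0.7) exactly,
# for durations in hours.
ds = [1.0, 2.0, 6.0, 12.0, 24.0]
n_hat = scaling_exponent(ds, [50.0 * d ** -0.7 for d in ds])
```

Mapping this single exponent across stations is what allows its spatial distribution to be compared with the regional rainfall pattern.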
Nucleon Structure from Lattice QCD Using a Nearly Physical Pion Mass
We report the first Lattice QCD calculation using the almost physical pion
mass mpi=149 MeV that agrees with experiment for four fundamental isovector
observables characterizing the gross structure of the nucleon: the Dirac and
Pauli radii, the magnetic moment, and the quark momentum fraction. The key to
this success is the combination of using a nearly physical pion mass and
excluding the contributions of excited states. An analogous calculation of the
nucleon axial charge governing beta decay has inconsistencies indicating a
source of bias at low pion masses not present for the other observables and
yields a result that disagrees with experiment.

Comment: journal version; 15 pages, 6 figures.
Information-Theoretic Estimation of Preference Parameters: Macroeconomic Applications and Simulation Evidence
This paper investigates the behaviour of estimators based on the Kullback-Leibler information criterion (KLIC), as an alternative to the generalized method of moments (GMM). We first study the estimators in a Monte Carlo simulation model of consumption growth with power utility. Then we compare KLIC and GMM estimators in macroeconomic applications, in which preference parameters are estimated with aggregate data. KLIC probability measures serve as useful diagnostics. In dependent data, tests of overidentifying restrictions in the KLIC framework have size properties comparable to those of the J-test in iterated GMM, but superior size-adjusted power.

Keywords: KLIC estimation, generalized method of moments, Monte Carlo
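The KLIC probability measures mentioned above arise from exponentially tilting the empirical distribution until the sample moment condition holds exactly under the reweighted probabilities. The sketch below illustrates this for a single moment condition with toy data; it is a minimal illustration of the reweighting idea, not the paper's full estimator:

```python
import math

def et_probabilities(g, lo=-50.0, hi=50.0, tol=1e-12):
    """Exponential-tilting probabilities p_i proportional to
    exp(lam * g_i), with lam chosen so that sum(p_i * g_i) = 0.

    The tilted moment is increasing in lam (its derivative is a
    variance), so simple bisection suffices."""
    def weighted_moment(lam):
        w = [math.exp(lam * gi) for gi in g]
        s = sum(w)
        return sum(wi * gi for wi, gi in zip(w, g)) / s
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if weighted_moment(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * gi) for gi in g]
    s = sum(w)
    return [wi / s for wi in w]

# Toy moment condition g_i = x_i - 1.0 whose sample mean is above
# zero: tilting shifts mass toward observations below 1.0.
g = [0.2 - 1.0, 0.8 - 1.0, 1.1 - 1.0, 2.3 - 1.0]
p = et_probabilities(g)
```

Inspecting how far the p_i depart from the uniform weights 1/n is the kind of diagnostic the abstract refers to: large departures flag observations that strain the moment conditions.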