Robust Estimators in Generalized Pareto Models
This paper deals with optimally-robust parameter estimation in generalized
Pareto distributions (GPDs). These arise naturally in many situations where one
is interested in the behavior of extreme events as motivated by the
Pickands-Balkema-de Haan extreme value theorem (PBHT). The application we have
in mind is calculation of the regulatory capital required by Basel II for a
bank to cover operational risk. In this context the tail behavior of the
underlying distribution is crucial. This is where extreme value theory enters,
suggesting that these high quantiles be estimated parametrically using, e.g., GPDs.
Robust statistics in this context offers procedures that bound the influence of
single observations, and so provides reliable inference in the presence of moderate
deviations from the distributional model assumptions, or from the
mechanisms underlying the PBHT.
Comment: 26 pages, 6 figures
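The peaks-over-threshold setup behind this abstract can be sketched as follows. Note this is a plain maximum-likelihood GPD fit with scipy on simulated data, not the paper's optimally-robust estimators; the loss model, threshold choice, and target quantile are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Simulated heavy-tailed losses (Pareto-type), a stand-in for operational-loss data.
losses = rng.pareto(2.5, size=5000) + 1.0

# Peaks-over-threshold: model excesses over a high threshold u with a GPD.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u
xi, loc, beta = genpareto.fit(excesses, floc=0.0)  # shape xi, scale beta

# Parametric estimate of a high quantile via the POT formula:
#   q_p = u + (beta/xi) * ((n/N_u * (1 - p))**(-xi) - 1)
n, n_u, p = len(losses), len(excesses), 0.999
q_hat = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1.0)
print(round(float(q_hat), 3))
```

A single extreme observation can move the MLE of xi, and hence q_hat, substantially; bounding that influence is exactly what the robust procedures in the paper are for.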
Generalized decomposition and cross entropy methods for many-objective optimization
Decomposition-based algorithms for multi-objective
optimization problems have increased in popularity in the past decade. Although their convergence to the Pareto optimal front (PF) is in several instances superior to that of Pareto-based algorithms, the problem of selecting a way to distribute or guide these solutions in a high-dimensional space has not been explored. In this work, we introduce a novel concept which we call generalized
decomposition. Generalized decomposition provides a framework with which the decision maker (DM) can guide the underlying evolutionary algorithm toward specific regions of interest or the entire Pareto front with the desired distribution of Pareto optimal solutions. Additionally, it is shown that generalized decomposition simplifies many-objective problems by unifying the three performance objectives of multi-objective evolutionary algorithms – convergence to the PF, evenly distributed Pareto
optimal solutions and coverage of the entire front – to only one, that of convergence. A framework based on generalized decomposition, together with an estimation of distribution algorithm (EDA) built on low-order statistics, namely the cross-entropy method (CE), is created to illustrate the benefits of the proposed concept for many-objective problems. This choice of EDA also enables
a test of the hypothesis that EDAs based on low-order statistics can achieve performance comparable to that of more elaborate EDAs.
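The cross-entropy method the authors build their EDA on uses only low-order statistics: sample from a Gaussian, keep the elite samples, refit the Gaussian's mean and standard deviation to them. A minimal single-objective sketch (population size, elite fraction, and iteration count are arbitrary assumptions, and the paper's many-objective machinery is not reproduced):

```python
import numpy as np

def cross_entropy_min(f, dim, iters=60, pop=100, elite=10, seed=0):
    """Minimal cross-entropy method: sample from a Gaussian, keep the
    elite samples, and refit the Gaussian to them."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([f(xi) for xi in x])
        elite_x = x[np.argsort(scores)[:elite]]  # best `elite` samples
        # Low-order statistics only: refit mean and std to the elites.
        mu, sigma = elite_x.mean(axis=0), elite_x.std(axis=0) + 1e-12
    return mu

# Usage: minimize the sphere function; the optimum is the origin.
best = cross_entropy_min(lambda x: float(np.sum(x**2)), dim=5)
print(np.round(best, 3))
```

In the decomposition setting, each scalarized subproblem can be attacked by exactly this kind of update, which is what makes CE a natural low-order-statistics EDA to test.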
Distance between exact and approximate distributions of partial maxima under power normalization
We obtain the distance between the exact and approximate distributions of
partial maxima of a random sample under power normalization. It is observed
that the Hellinger distance and the variational distance between the exact and
approximate distributions of partial maxima under power normalization are the
same as the corresponding distances under linear normalization.
Comment: Published at http://dx.doi.org/10.15559/15-VMSTA42 in Modern
Stochastics: Theory and Applications (https://www.i-journals.org/vtxpp/VMSTA)
by VTeX (http://www.vtex.lt/)
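For intuition on these distances, one can compute them numerically between the exact distribution of a sample maximum and its extreme-value approximation. The sketch below uses Exp(1) maxima under linear normalization as a stand-in example (the paper's power-normalization setting is not reproduced here):

```python
import numpy as np

# Exact distribution of the maximum of n iid Exp(1) variables vs its
# Gumbel approximation under linear normalization (shift by log n).
n = 100
x = np.linspace(0.01, 20.0, 200001)
exact_cdf = (1.0 - np.exp(-x)) ** n
approx_cdf = np.exp(-np.exp(-(x - np.log(n))))  # Gumbel, shifted by log n

# Densities obtained by differentiating the CDFs analytically.
exact_pdf = n * (1.0 - np.exp(-x)) ** (n - 1) * np.exp(-x)
approx_pdf = np.exp(-(x - np.log(n))) * approx_cdf

# Variational (total variation) and Hellinger distances on the grid.
dx = x[1] - x[0]
variational = 0.5 * np.sum(np.abs(exact_pdf - approx_pdf)) * dx
hellinger = np.sqrt(0.5 * np.sum((np.sqrt(exact_pdf) - np.sqrt(approx_pdf)) ** 2) * dx)
print(round(float(variational), 4), round(float(hellinger), 4))
```

Both distances are small already at n = 100, which illustrates how quickly the extreme-value approximation becomes usable.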
Smooth tail index estimation
Both parametric distribution functions appearing in extreme value theory -
the generalized extreme value distribution and the generalized Pareto
distribution - have log-concave densities if the extreme value index gamma is
in [-1,0]. Replacing the order statistics in tail index estimators by their
corresponding quantiles from the distribution function that is based on the
estimated log-concave density leads to novel smooth quantile and tail index
estimators. These new estimators aim at estimating the tail index especially in
small samples. Acting as a smoother of the empirical distribution function, the
log-concave distribution function estimator reduces estimation variability to a
much greater extent than it introduces bias. As a consequence, Monte Carlo
simulations demonstrate that the smoothed versions of the estimators are clearly
superior to their non-smoothed counterparts in terms of mean squared error.
Comment: 17 pages, 5 figures. Slightly changed Pickands' estimator, added some
more introduction and discussion
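Pickands' estimator mentioned in the comment is built from three upper order statistics. The sketch below applies the classical (non-smoothed) version to simulated Pareto data; the sample size and the choice of k are arbitrary assumptions, and the paper's log-concave smoothing step is not reproduced:

```python
import numpy as np

def pickands_estimator(sample, k):
    """Pickands' tail-index estimator:
    gamma_hat = log((X_(k) - X_(2k)) / (X_(2k) - X_(4k))) / log 2,
    where X_(1) >= X_(2) >= ... are the descending order statistics."""
    x = np.sort(sample)[::-1]  # descending order; requires 4k <= len(sample)
    return np.log((x[k - 1] - x[2 * k - 1]) / (x[2 * k - 1] - x[4 * k - 1])) / np.log(2.0)

# Usage: Pareto sample with true tail index gamma = 1/2 (shape alpha = 2).
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, size=20000) + 1.0
print(round(float(pickands_estimator(sample, k=500)), 2))
```

The estimator's high variance in small samples is visible if k is reduced, which is precisely the regime the smoothed quantile-based versions in the paper target.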