    Smooth tail index estimation

    Full text link
    Both parametric distribution functions appearing in extreme value theory - the generalized extreme value distribution and the generalized Pareto distribution - have log-concave densities if the extreme value index gamma is in [-1,0]. Replacing the order statistics in tail index estimators by their corresponding quantiles from the distribution function that is based on the estimated log-concave density leads to novel smooth quantile and tail index estimators. These new estimators are aimed at estimating the tail index, especially in small samples. Acting as a smoother of the empirical distribution function, the log-concave distribution function estimator reduces estimation variability to a much greater extent than it introduces bias. As a consequence, Monte Carlo simulations demonstrate that the smoothed versions of the estimators are clearly superior to their non-smoothed counterparts in terms of mean squared error. Comment: 17 pages, 5 figures. Slightly changed Pickands' estimator, added some more introduction and discussion.
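
    As a rough illustration of the construction (not the paper's implementation), the sketch below contrasts the classical Pickands estimator with a smoothed variant in which the three order statistics are replaced by quantiles of a plug-in distribution function estimate. The quantile_fn argument is a stand-in for the quantile function derived from the log-concave density estimate (available, e.g., via the R package logcondens); the empirical quantile function used in the demo is only a placeholder.

```python
import numpy as np

def pickands_estimator(x, k):
    """Classical Pickands estimator of the extreme value index gamma (requires 4k <= n)."""
    xs = np.sort(x)
    a, b, c = xs[-k], xs[-2 * k], xs[-4 * k]   # X_{n-k+1:n}, X_{n-2k+1:n}, X_{n-4k+1:n}
    return np.log((a - b) / (b - c)) / np.log(2.0)

def smoothed_pickands(x, k, quantile_fn):
    """Pickands-type estimator with the three order statistics replaced by the
    corresponding quantiles of a smooth distribution function estimate."""
    n = len(x)
    levels = np.array([n - k + 1, n - 2 * k + 1, n - 4 * k + 1]) / (n + 1)
    a, b, c = quantile_fn(levels)
    return np.log((a - b) / (b - c)) / np.log(2.0)

# Demo on uniform data (extreme value index gamma = -1); the empirical quantile
# function below is only a placeholder for the log-concave-based quantile function.
rng = np.random.default_rng(0)
x = rng.uniform(size=200)
print(pickands_estimator(x, k=20))
print(smoothed_pickands(x, k=20, quantile_fn=lambda p: np.quantile(x, p)))
```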

    Tail index estimation, concentration and adaptivity

    Get PDF
    This paper presents an adaptive version of the Hill estimator based on Lepski's model selection method. This simple data-driven index selection method is shown to satisfy an oracle inequality and to achieve the lower bound recently derived by Carpentier and Kim. In order to establish the oracle inequality, we derive non-asymptotic variance bounds and concentration inequalities for Hill estimators. These concentration inequalities are derived from Talagrand's concentration inequality for smooth functions of independent exponentially distributed random variables, combined with three tools of Extreme Value Theory: the quantile transform, Karamata's representation of slowly varying functions, and Rényi's characterisation of the order statistics of exponential samples. The performance of this computationally and conceptually simple method is illustrated using Monte Carlo simulations.
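
    A simplified Lepski-type selection rule can be sketched in a few lines. The grid of values of k and the threshold constant c below are illustrative choices rather than the calibrated quantities analysed in the paper; the width of the comparison bands uses the asymptotic standard deviation gamma/sqrt(k) of the Hill estimator.

```python
import numpy as np

def hill(x, k):
    """Hill estimator of gamma based on the k largest order statistics."""
    xs = np.sort(x)
    return np.mean(np.log(xs[-k:])) - np.log(xs[-k - 1])

def adaptive_hill(x, ks=None, c=1.2):
    """Simplified Lepski-type choice of k for the Hill estimator.

    Increase k as long as the new estimate stays inside bands of width
    c * gamma_hat(k') / sqrt(k') around all estimates at smaller k';
    return the last admissible k and its estimate.
    """
    n = len(x)
    if ks is None:
        ks = np.unique(np.geomspace(10, n // 2, num=25, dtype=int))
    est = {k: hill(x, k) for k in ks}
    best_k = ks[0]
    for k in ks[1:]:
        ok = all(abs(est[k] - est[kp]) <= c * est[kp] / np.sqrt(kp) for kp in ks[ks < k])
        if not ok:
            break
        best_k = k
    return best_k, est[best_k]

rng = np.random.default_rng(1)
x = rng.pareto(1.5, size=5000) + 1.0        # true tail index gamma = 1/1.5
print(adaptive_hill(x))
```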

    On Tail Index Estimation based on Multivariate Data

    Full text link
    This article is devoted to the study of tail index estimation based on i.i.d. multivariate observations, drawn from a standard heavy-tailed distribution, i.e. one whose 1-d Pareto-like marginals share the same tail index. A multivariate Central Limit Theorem for a random vector, whose components correspond to (possibly dependent) Hill estimators of the common shape index alpha, is established under mild conditions. Motivated in particular by the statistical analysis of extremal spatial data, we introduce the concept of a (standard) heavy-tailed random field of tail index alpha and show how this limit result can be used to build an estimator of alpha with small asymptotic mean squared error, through a proper convex linear combination of the coordinates. Beyond asymptotic results, simulation experiments illustrating the relevance of the proposed approach are also presented.
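
    The weighting step admits a compact sketch. Below, coordinate-wise Hill estimators (on the scale gamma = 1/alpha) are combined with variance-minimizing weights that sum to one; the covariance matrix of the estimator vector is obtained by a nonparametric bootstrap as a stand-in for the CLT-based covariance derived in the paper, and the dependent Pareto-type toy data are purely illustrative.

```python
import numpy as np

def hill(x, k):
    """Hill estimator of gamma = 1/alpha from the k largest values of a 1-d sample."""
    xs = np.sort(x)
    return np.mean(np.log(xs[-k:])) - np.log(xs[-k - 1])

def combined_hill(X, k, n_boot=200, rng=None):
    """Variance-minimizing convex combination of coordinate-wise Hill estimators.

    X has shape (n, d); all marginals are assumed Pareto-like with a common tail
    index. The covariance of the Hill estimator vector is estimated here by a
    nonparametric bootstrap (a stand-in for the CLT-based covariance), and the
    weights solve min_w w' Sigma w subject to sum(w) = 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    gammas = np.array([hill(X[:, j], k) for j in range(d)])
    boot = np.empty((n_boot, d))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        boot[b] = [hill(X[idx, j], k) for j in range(d)]
    sigma = np.cov(boot, rowvar=False)
    ones = np.ones(d)
    w = np.linalg.solve(sigma, ones)
    w /= w @ ones
    return w @ gammas, w

# Toy example: two dependent Pareto(2) marginals built from a common factor.
rng = np.random.default_rng(2)
z = rng.pareto(2.0, size=(3000, 1)) + 1.0
e = rng.pareto(2.0, size=(3000, 2)) + 1.0
X = np.maximum(z, e)                      # both coordinates keep tail index alpha = 2
gamma_hat, w = combined_hill(X, k=150, rng=rng)
print(gamma_hat, w)                       # estimate of gamma = 1/alpha = 0.5
```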

    Selection index estimation from partial multivariate normal data

    Get PDF
    Selection index estimation from partial multivariate normal data

    Semiparametric Lower Bounds for Tail Index Estimation

    Get PDF
    indexation; semiparametric estimation

    Modulation-Index Estimation in a Combined CPM/OFDM Receiver

    Get PDF
    In this paper we develop a blind modulation-index estimator for a combined CPM/OFDM receiver. The performance of the estimator in an AWGN channel is assessed by simulation and analysis, and its suitability for our receiver is established.

    High-Frequency Tail Index Estimation by Nearly Tight Frames

    Full text link
    This work develops the asymptotic properties (weak consistency and Gaussianity), in the high-frequency limit, of approximate maximum likelihood estimators for the spectral parameters of Gaussian and isotropic spherical random fields. The procedure exploits the so-called Mexican needlet construction of Geller and Mayeli [Geller, Mayeli (2009)]. Furthermore, we propose a plug-in procedure to optimize the precision of the estimators in terms of asymptotic variance. Comment: 38 pages.
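
    As a caricature of the approximate-likelihood idea only (not the needlet-based estimator analysed in the paper), the sketch below fits a single spectral slope to synthetic, independent Gaussian coefficients whose variance decays geometrically across scales; the variance model B**(-j*alpha) and the synthetic inputs are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_spectral_slope(coeffs_by_scale, B=2.0):
    """Toy Whittle-type fit of a spectral slope from scale-indexed coefficients.

    coeffs_by_scale: list of 1-d arrays; the coefficients at scale j are treated
    as independent zero-mean Gaussians with variance B**(-j*alpha). This is a
    simplified stand-in for the approximate maximum likelihood approach.
    """
    def neg_loglik(alpha):
        nll = 0.0
        for j, beta in enumerate(coeffs_by_scale, start=1):
            var = B ** (-j * alpha)
            nll += 0.5 * np.sum(np.log(var) + beta ** 2 / var)
        return nll
    res = minimize_scalar(neg_loglik, bounds=(0.1, 6.0), method="bounded")
    return res.x

# Synthetic check: generate coefficients with slope alpha = 2 and recover it.
rng = np.random.default_rng(3)
alpha_true, B = 2.0, 2.0
coeffs = [rng.normal(scale=np.sqrt(B ** (-j * alpha_true)), size=500) for j in range(1, 7)]
print(fit_spectral_slope(coeffs, B=B))
```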

    Multifidelity variance reduction for pick-freeze Sobol index estimation

    Get PDF
    Many mathematical models involve input parameters that are not precisely known. Global sensitivity analysis aims to identify the parameters whose uncertainty has the largest impact on the variability of a quantity of interest (the output of the model). One of the statistical tools used to quantify the influence of each input variable on the output is the Sobol sensitivity index, which can be estimated from a large sample of evaluations of the output. We propose a variance reduction technique, based on the availability of a fast approximation of the output, which can yield significant computational savings when the output is costly to evaluate.
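
    One way to realise such a scheme (a control-variate-style sketch in the spirit of the abstract, not necessarily the paper's exact estimator) is a two-level pick-freeze construction: estimate the index with many cheap evaluations of the approximate model and correct it with the difference between expensive-model and cheap-model estimates computed on a small shared design. The Ishigami-type test function and the crude surrogate below are illustrative assumptions.

```python
import numpy as np

def pick_freeze_pairs(sampler, i, n, rng):
    """Draw a pick-freeze design: two input matrices sharing column i."""
    X = sampler(n, rng)
    Xp = sampler(n, rng)
    Xp[:, i] = X[:, i]                 # freeze coordinate i
    return X, Xp

def sobol_from_outputs(y, yp):
    """First-order Sobol index from paired outputs of a pick-freeze design."""
    return (np.mean(y * yp) - np.mean(y) * np.mean(yp)) / np.var(y)

def multifidelity_sobol(f_hi, f_lo, i, sampler, n_hi, n_lo, rng):
    """Two-level pick-freeze estimate: a large cheap-model estimate plus a
    correction from a small, coupled sample of expensive-model evaluations."""
    Xb, Xbp = pick_freeze_pairs(sampler, i, n_lo, rng)      # many cheap runs
    Xs, Xsp = pick_freeze_pairs(sampler, i, n_hi, rng)      # few expensive runs
    s_lo_big = sobol_from_outputs(f_lo(Xb), f_lo(Xbp))
    correction = sobol_from_outputs(f_hi(Xs), f_hi(Xsp)) - sobol_from_outputs(f_lo(Xs), f_lo(Xsp))
    return s_lo_big + correction

# Toy example: Ishigami-type function and a crude low-fidelity surrogate.
rng = np.random.default_rng(4)
sampler = lambda n, r: r.uniform(-np.pi, np.pi, size=(n, 3))
f_hi = lambda X: np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
f_lo = lambda X: np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2   # surrogate without the interaction
print(multifidelity_sobol(f_hi, f_lo, i=0, sampler=sampler, n_hi=500, n_lo=50_000, rng=rng))
```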