
    Extreme value copula estimation based on block maxima of a multivariate stationary time series

    The core of the classical block maxima method consists of fitting an extreme value distribution to a sample of maxima over blocks extracted from an underlying series. In asymptotic theory, it is usually postulated that the block maxima are an independent random sample from an extreme value distribution. In practice, however, block sizes are finite, so the extreme value postulate holds only approximately. A more accurate asymptotic framework is that of a triangular array of block maxima, with the block size depending on the size of the underlying sample in such a way that both the block size and the number of blocks within that sample tend to infinity. The copula of the vector of componentwise maxima in a block is assumed to converge to a limit, which, under mild conditions, is then necessarily an extreme value copula. Under this setting and for absolutely regular stationary sequences, the empirical copula of the sample of vectors of block maxima is shown to be a consistent and asymptotically normal estimator of the limiting extreme value copula. Moreover, the empirical copula serves as a basis for rank-based, nonparametric estimation of the Pickands dependence function of the extreme value copula. The results are illustrated by theoretical examples and a Monte Carlo simulation study.
    Comment: 34 pages
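
    To make this concrete, the following is a minimal sketch, not the authors' code, of the rank-based pipeline the abstract describes: split a bivariate series into disjoint blocks, take componentwise block maxima, form pseudo-observations (the empirical copula) from the normalized ranks, and evaluate a CFG-type estimate of the Pickands dependence function. The array x, the block size of 50, and the choice of the CFG estimator are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    def block_maxima(x, b):
        # Componentwise maxima over consecutive, disjoint blocks of length b.
        n, d = x.shape
        k = n // b                                   # number of complete blocks
        return x[:k * b].reshape(k, b, d).max(axis=1)

    def empirical_copula(m):
        # Pseudo-observations (normalized componentwise ranks) of the block maxima.
        k = m.shape[0]
        ranks = np.argsort(np.argsort(m, axis=0), axis=0) + 1
        return ranks / (k + 1.0)

    def pickands_cfg(u, t):
        # CFG-type rank-based estimate of the Pickands dependence function A(t),
        # 0 < t < 1, from bivariate pseudo-observations u of shape (k, 2).
        s = -np.log(u[:, 0]) / (1.0 - t)
        v = -np.log(u[:, 1]) / t
        return np.exp(-np.euler_gamma - np.mean(np.log(np.minimum(s, v))))

    # Toy usage on an i.i.d. Gaussian series; the block size 50 is arbitrary.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((5000, 2))
    u = empirical_copula(block_maxima(x, b=50))
    print(pickands_cfg(u, t=0.5))    # roughly 1 for independent components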

    On the occurrence times of componentwise maxima and bias in likelihood inference for multivariate max-stable distributions

    Full likelihood-based inference for high-dimensional multivariate extreme value distributions, or max-stable processes, is feasible when occurrence times of the maxima are incorporated; without this information, d-dimensional likelihood inference is usually precluded by the large number of terms in the likelihood. However, some studies have noted bias when performing high-dimensional inference that incorporates such event information, particularly when dependence is weak. We elucidate this phenomenon, showing that for unbiased inference in moderate dimensions, the dimension d should be of a magnitude smaller than the square root of the number of vectors over which one takes the componentwise maximum. A bias reduction technique is suggested and illustrated on the extreme value logistic model.
    Comment: 7 pages
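
    To illustrate the occurrence-time ("event") information the abstract refers to, here is a minimal sketch, not the paper's implementation: it records, for each of the d components, the index of the vector at which the componentwise maximum occurs and groups components that share an occurrence time into a partition. The simulated Gaussian input and the choice n = 1000, d = 5 are illustrative assumptions, with d kept well below the square root of n as the abstract recommends.

    import numpy as np
    from collections import defaultdict

    def maxima_with_partition(x):
        # Componentwise maxima of the n vectors in x (shape (n, d)), the occurrence
        # time (row index) of each maximum, and the partition of the d components
        # formed by grouping components whose maxima occur in the same vector.
        m = x.max(axis=0)
        times = x.argmax(axis=0)
        groups = defaultdict(list)
        for j, t in enumerate(times):
            groups[t].append(j)
        partition = [tuple(block) for block in groups.values()]
        return m, times, partition

    # Toy usage; with weak dependence the partition consists mostly of singletons,
    # the regime in which the abstract reports the bias to be most pronounced.
    rng = np.random.default_rng(1)
    n, d = 1000, 5
    x = rng.standard_normal((n, d))
    m, times, partition = maxima_with_partition(x)
    print(times, partition)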