
    Extreme value analysis for the sample autocovariance matrices of heavy-tailed multivariate time series

    We provide some asymptotic theory for the largest eigenvalues of a sample covariance matrix of a p-dimensional time series where the dimension p = p_n converges to infinity as the sample size n increases. We give a short overview of the literature on the topic in both the light- and heavy-tailed cases, i.e., when the data have finite and infinite fourth moment, respectively. Our main focus is on the heavy-tailed case. In this case, one has a theory for the point process of the normalized eigenvalues of the sample covariance matrix not only in the iid case but also when rows and columns of the data are linearly dependent. We provide limit results for the weak convergence of these point processes to Poisson or cluster Poisson processes. Based on this convergence we can also derive the limit laws of various functionals of the ordered eigenvalues, such as the joint convergence of a finite number of the largest order statistics, the joint limit law of the largest eigenvalue and the trace, and limit laws for successive ratios of ordered eigenvalues. We also develop some limit theory for the singular values of the sample autocovariance matrices and their sums of squares. The theory is illustrated for simulated data and for the components of the S&P 500 stock index.
    Comment: in Extremes: Statistical Theory and Applications in Science, Engineering and Economics; ISSN 1386-1999 (2016)
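
    The point-process theory above can be illustrated numerically. The sketch below is a toy simulation, not the paper's exact setup: the tail index alpha, the growth rate of p_n, and the use of Pareto entries are all assumptions. It generates an iid heavy-tailed data matrix with infinite fourth moment and prints two of the functionals mentioned in the abstract.

```python
# A toy simulation sketch, not the paper's exact setup: iid Pareto entries with
# tail index alpha in (2, 4) have finite variance but infinite fourth moment,
# placing the data in the heavy-tailed regime discussed above.
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 3.0, 2000          # assumed tail index and sample size
p = int(n ** 0.7)             # dimension p = p_n growing with n (assumed rate)

X = rng.pareto(alpha, size=(p, n))          # heavy-tailed data matrix
S = X @ X.T                                 # unnormalized sample covariance
eig = np.sort(np.linalg.eigvalsh(S))[::-1]  # ordered eigenvalues

# Functionals of the ordered eigenvalues mentioned in the abstract:
print("ratio of two largest eigenvalues:", eig[1] / eig[0])
print("largest eigenvalue / trace      :", eig[0] / eig.sum())
```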

    Tail bounds for all eigenvalues of a sum of random matrices

    This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in "User-friendly tail bounds for sums of random matrices" (arXiv:1004.4389v6) that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogues of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that, if the lowest eigenvalues decay sufficiently fast, then on the order of K^2 r log(p) / eps^2 samples, where K is the condition number of an optimal rank-r approximation to C, suffice to ensure that the dominant r eigenvalues of the covariance matrix of an N(0, C) random vector are estimated to within a factor of 1 ± eps with high probability.
    Comment: 20 pages, 1 figure; see also arXiv:1004.4389
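
    To make the sample-size scaling concrete, here is a hedged numerical sketch: it ignores constants, fixes an assumed diagonal C with r dominant eigenvalues and a fast-decaying tail, draws roughly K^2 r log(p) / eps^2 samples from N(0, C), and checks the relative error of the top r sample eigenvalues. All numerical choices (p, r, eps, the spectrum of C) are illustrative assumptions, not values from the paper.

```python
# A hedged numerical sketch of the quoted sample-size scaling; constants are
# ignored and all numerical choices (p, r, eps, the spectrum of C) are assumed.
import numpy as np

rng = np.random.default_rng(1)
p, r, eps = 200, 5, 0.25
spec = np.concatenate([np.linspace(10.0, 6.0, r),   # dominant eigenvalues of C
                       0.01 * np.ones(p - r)])      # fast-decaying tail
C = np.diag(spec)

K = spec[0] / spec[r - 1]                # condition number of the rank-r part
n = int(K**2 * r * np.log(p) / eps**2)   # samples suggested by the bound

X = rng.multivariate_normal(np.zeros(p), C, size=n)
lam = np.sort(np.linalg.eigvalsh(X.T @ X / n))[::-1]  # sample eigenvalues

rel_err = np.abs(lam[:r] - spec[:r]) / spec[:r]
print(f"n = {n}, max relative error on top {r} eigenvalues: {rel_err.max():.3f}")
```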

    Adaptive Thresholding for Sparse Covariance Matrix Estimation

    In this paper we consider estimation of sparse covariance matrices and propose a thresholding procedure that is adaptive to the variability of individual entries. The estimators are fully data-driven and enjoy excellent performance both theoretically and numerically. It is shown that the estimators adaptively achieve the optimal rate of convergence over a large class of sparse covariance matrices under the spectral norm. In contrast, the commonly used universal thresholding estimators are shown to be sub-optimal over the same parameter spaces. Support recovery is also discussed. The adaptive thresholding estimators are easy to implement. Numerical performance of the estimators is studied using both simulated and real data. Simulation results show that the adaptive thresholding estimators uniformly outperform the universal thresholding estimators. The method is also illustrated in an analysis of a dataset from a small round blue-cell tumors microarray experiment. A supplement containing additional technical proofs is available online.
    Comment: To appear in Journal of the American Statistical Association
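
    The following is a minimal sketch of entry-wise adaptive thresholding in the spirit of the procedure described above: each entry of the sample covariance receives its own threshold, scaled by an estimate of that entry's variability. The tuning constant delta = 2 and the use of soft thresholding are common choices, not necessarily the paper's exact specification.

```python
# A minimal sketch of entry-adaptive thresholding (delta = 2 and soft
# thresholding are assumed defaults, not the paper's exact tuning).
import numpy as np

def adaptive_threshold_cov(X, delta=2.0):
    """Soft-threshold the sample covariance with entry-adaptive thresholds."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                  # sample covariance
    # theta_ij: empirical variance of the products (X_ki - mean_i)(X_kj - mean_j),
    # an estimate of the variability of the (i, j) covariance entry
    prods = Xc[:, :, None] * Xc[:, None, :]            # shape (n, p, p)
    theta = ((prods - S) ** 2).mean(axis=0)
    lam = delta * np.sqrt(theta * np.log(p) / n)       # entry-wise thresholds
    off = np.sign(S) * np.maximum(np.abs(S) - lam, 0)  # soft thresholding
    np.fill_diagonal(off, np.diag(S))                  # keep the diagonal intact
    return off

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 30))
print(adaptive_threshold_cov(X).shape)   # (30, 30)
```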

    Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices

    This paper considers sparse spiked covariance matrix models in the high-dimensional setting and studies the minimax estimation of the covariance matrix and the principal subspace, as well as minimax rank detection. The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those used for other structured covariance matrices such as bandable or sparse covariance matrices. We also establish the minimax rate under the spectral norm for estimating the principal subspace, the primary object of interest in principal component analysis. In addition, the optimal rate for the rank detection boundary is obtained. This result also resolves a gap in a recent paper by Berthet and Rigollet [1], where the special case of rank one is considered.
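
    As an illustration of the spiked model (emphatically not the paper's minimax-optimal procedure), the sketch below builds Sigma = I + sum_k lambda v_k v_k^T with sparse, orthonormal spikes and estimates the rank with a crude heuristic: counting sample eigenvalues above the Marchenko-Pastur bulk edge (1 + sqrt(p/n))^2. The dimensions and spike strength are assumptions.

```python
# An illustrative sketch of the sparse spiked covariance model; the rank
# estimate here is a simple bulk-edge heuristic, not the paper's procedure.
import numpy as np

rng = np.random.default_rng(3)
p, n, r, spike = 100, 500, 3, 5.0        # assumed dimensions and spike strength

V = np.zeros((p, r))
for k in range(r):                       # sparse, disjointly supported spikes
    V[10 * k: 10 * k + 10, k] = 1 / np.sqrt(10)
Sigma = np.eye(p) + spike * V @ V.T      # identity noise plus rank-r spikes

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
lam = np.sort(np.linalg.eigvalsh(X.T @ X / n))[::-1]

edge = (1 + np.sqrt(p / n)) ** 2         # Marchenko-Pastur bulk edge
print("estimated rank:", int((lam > edge).sum()), "(true rank:", r, ")")
```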