
    Statistical Inferences Using Large Estimated Covariances for Panel Data and Factor Models

    While most convergence results in the literature on high-dimensional covariance matrices concern the accuracy of estimating the covariance matrix (and the precision matrix), relatively less is known about the effect of estimating large covariances on statistical inference. We study two important models, factor analysis and the panel data model with interactive effects, and focus on the statistical inference and estimation efficiency of structural parameters based on large covariance estimators. For efficient estimation, both models call for weighted principal components (WPC), which rely on a high-dimensional weight matrix. This paper derives an efficient and feasible WPC using the covariance matrix estimator of Fan et al. (2013). However, we demonstrate that existing results on large covariance estimation based on absolute convergence are not suitable for statistical inference on the structural parameters. What is needed is a form of weighted consistency and the associated rate of convergence, which are obtained in this paper. Finally, the proposed method is applied to US divorce-rate data. We find that the efficient WPC identifies significant effects of divorce-law reforms on the divorce rate, and it provides more accurate estimates and tighter confidence intervals than existing methods.
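    A minimal sketch of the weighted-principal-components idea on simulated data. The diagonal inverse-variance weight matrix `W` below is only a simple stand-in for a full high-dimensional covariance/precision estimator such as that of Fan et al. (2013), and all variable names are illustrative, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, r = 20, 200, 2                       # cross-section size, time span, factors

    # Simulate a factor model: x_t = Lambda f_t + u_t
    Lambda = rng.normal(size=(N, r))
    F = rng.normal(size=(r, T))
    X = Lambda @ F + 0.5 * rng.normal(size=(N, T))

    # Hypothetical weight matrix: a diagonal inverse-variance estimate
    # standing in for a full high-dimensional precision estimator.
    W = np.diag(1.0 / X.var(axis=1))

    # Weighted principal components: estimate the factors from the top
    # eigenvectors of the T x T weighted cross-product matrix X' W X.
    G = X.T @ W @ X / N
    eigvals, eigvecs = np.linalg.eigh(G)
    F_hat = np.sqrt(T) * eigvecs[:, -r:]       # T x r estimated factors
    Lambda_hat = X @ F_hat / T                 # N x r estimated loadings

    resid = X - Lambda_hat @ F_hat.T           # idiosyncratic residual
    ```

    With equal weights this reduces to the usual principal-components estimator; the weighting is what delivers the efficiency gain the abstract describes.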

    Nonparametric estimation of time-varying covariance matrix in a slowly changing vector random walk model

    A new multivariate random walk model with slowly changing parameters is introduced and investigated in detail. Nonparametric estimation of the local covariance matrix is proposed. The asymptotic distributions, including the asymptotic biases, variances and covariances of the proposed estimators, are obtained. The properties of the estimate formed as a weighted sum of the individual nonparametric estimators are also studied in detail. The integrated effect of the estimation errors for the difference series on the integrated processes is derived. The practical relevance of the model and estimation is illustrated by an application to several foreign exchange rates.
    Keywords: multivariate time series; slowly changing vector random walk; local covariance matrix; kernel estimation; asymptotic properties; forecasting.
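    The local covariance estimator described above can be sketched as a kernel-weighted second moment of the difference series. The Epanechnikov kernel and the bandwidth `h` below are illustrative choices, not necessarily the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, d = 500, 3

    # Random walk whose innovation scale drifts slowly over time
    scales = np.linspace(0.5, 1.5, T)
    eps = rng.normal(size=(T, d)) * scales[:, None]
    y = np.cumsum(eps, axis=0)

    dy = np.diff(y, axis=0)                  # difference series, (T-1) x d
    h = 0.1                                  # bandwidth on rescaled time

    def local_cov(t0, dy, h):
        """Kernel-weighted covariance of the differences around rescaled
        time t0 in (0, 1); the differences have mean zero in this model,
        so the weighted second moment serves as the covariance estimate."""
        T1 = len(dy)
        u = (np.arange(T1) / T1 - t0) / h
        w = np.maximum(1.0 - u**2, 0.0)      # Epanechnikov weights
        w /= w.sum()
        return (dy * w[:, None]).T @ dy

    Sigma_mid = local_cov(0.5, dy, h)        # local covariance at mid-sample
    ```

    Because the innovation scale grows over the sample, the estimated local covariance near the end of the series should dominate the one near the start.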

    Covariance matrix estimation with heterogeneous samples

    We consider the problem of estimating the covariance matrix Mp of an observation vector, using heterogeneous training samples, i.e., samples whose covariance matrices are not exactly Mp. More precisely, we assume that the training samples can be clustered into K groups, each one containing Lk snapshots sharing the same covariance matrix Mk. Furthermore, a Bayesian approach is proposed in which the matrices Mk are assumed to be random with some prior distribution. We consider two different assumptions for Mp. In a fully Bayesian framework, Mp is assumed to be random with a given prior distribution. Under this assumption, we derive the minimum mean-square error (MMSE) estimator of Mp, which is implemented using a Gibbs-sampling strategy. Moreover, a simpler scheme based on a weighted sample covariance matrix (SCM) is also considered. The weights minimizing the mean square error (MSE) of the estimated covariance matrix are derived. Furthermore, we consider estimators based on colored or diagonal loading of the weighted SCM, and we determine theoretically the optimal level of loading. Finally, in order to relax the a priori assumptions about the covariance matrix Mp, the second part of the paper assumes that this matrix is deterministic and derives its maximum-likelihood estimator. Numerical simulations are presented to illustrate the performance of the different estimation schemes.
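    The weighted-SCM scheme with diagonal loading can be sketched as below. The weights proportional to group size and the loading level are naive illustrative stand-ins; the paper derives MSE-optimal weights and the theoretically optimal loading instead.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    p = 5
    M_p = np.eye(p) + 0.5 * np.ones((p, p))  # covariance of interest

    # Heterogeneous training groups: group k has L_k zero-mean snapshots
    # drawn with a covariance M_k that differs from M_p (hypothetical
    # mismatch used purely to generate data).
    groups = []
    for k, L_k in enumerate([30, 50, 20]):
        M_k = M_p + 0.2 * (k + 1) * np.eye(p)
        groups.append(rng.multivariate_normal(np.zeros(p), M_k, size=L_k))

    # Weighted SCM: combine the per-group sample covariances with weights
    # proportional to group size (a simple stand-in for the MSE-optimal
    # weights), then add diagonal loading for conditioning.
    weights = np.array([len(g) for g in groups], dtype=float)
    weights /= weights.sum()
    S = sum(w * (g.T @ g) / len(g) for w, g in zip(weights, groups))
    mu = 0.1 * np.trace(S) / p               # illustrative loading level
    S_loaded = S + mu * np.eye(p)            # diagonally loaded estimate
    ```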

    Improved estimation of the covariance matrix of stock returns with an application to portfolio selection

    This paper proposes to estimate the covariance matrix of stock returns by an optimally weighted average of two existing estimators: the sample covariance matrix and the single-index covariance matrix. This method is generally known as shrinkage, and it is standard in decision theory and in empirical Bayesian statistics. Our shrinkage estimator can be seen as a way to account for extra-market covariance without having to specify an arbitrary multi-factor structure. For NYSE and AMEX stock returns from 1972 to 1995, it can be used to select portfolios with significantly lower out-of-sample variance than a set of existing estimators, including multi-factor models.
    Keywords: covariance matrix estimation, factor models, portfolio selection, shrinkage.
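    A minimal sketch of the shrinkage combination on toy return data. The fixed weight `delta` is purely illustrative; the paper's contribution is deriving the optimal weight from the data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T, n = 120, 10                           # periods, stocks
    R = rng.normal(0.01, 0.05, size=(T, n))  # toy return matrix

    S = np.cov(R, rowvar=False)              # sample covariance matrix

    # Single-index (market-model) target: r_i = a_i + b_i r_m + e_i,
    # with r_m the equal-weighted market return.
    r_m = R.mean(axis=1)
    var_m = r_m.var()
    b = np.array([np.cov(R[:, i], r_m, ddof=0)[0, 1] / var_m
                  for i in range(n)])
    resid_var = np.array([np.var(R[:, i] - b[i] * r_m) for i in range(n)])
    F = var_m * np.outer(b, b) + np.diag(resid_var)

    # Shrinkage: weighted average of the structured target F and the
    # unstructured sample covariance S.
    delta = 0.5                              # illustrative fixed weight
    Sigma_shrunk = delta * F + (1 - delta) * S
    ```

    The target `F` has few parameters and low variance but is biased; `S` is unbiased but noisy; the average trades the two off.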

    Spectrally-Corrected and Regularized Global Minimum Variance Portfolio for Spiked Model

    Considering the shortcomings of traditional sample covariance matrix estimation, this paper proposes an improved global minimum variance portfolio model, named the spectrally-corrected and regularized global minimum variance portfolio (SCRGMVP), which outperforms the traditional risk model. The key idea is that, under the assumption that the population covariance matrix follows the spiked model, the method combines the design of the sample spectrally-corrected covariance matrix with regularization. Simulations on real and synthetic data show that our method not only outperforms traditional sample covariance matrix estimation (SCME), shrinkage estimation (SHRE), weighted shrinkage estimation (WSHRE), and simple spectral correction estimation (SCE), but also has lower computational complexity.
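    The global minimum variance portfolio itself is a closed-form function of the covariance estimate, w = Σ⁻¹1 / (1'Σ⁻¹1). The sketch below plugs in a simple shrinkage-regularized covariance as a stand-in for the paper's spectrally-corrected estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    p, n = 8, 100
    X = rng.normal(size=(n, p))              # toy asset returns

    # Simple regularized covariance standing in for the paper's
    # spectrally-corrected estimator: shrink the sample covariance
    # toward a scaled identity to tame the bulk eigenvalues.
    S = np.cov(X, rowvar=False)
    rho = 0.2                                # illustrative shrinkage level
    Sigma = (1 - rho) * S + rho * (np.trace(S) / p) * np.eye(p)

    # Global minimum variance portfolio weights:
    # w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
    ones = np.ones(p)
    w = np.linalg.solve(Sigma, ones)
    w /= ones @ w                            # weights sum to one
    ```

    For the covariance actually used, these weights attain the smallest portfolio variance among all fully invested portfolios, so they cannot do worse than equal weighting under that Σ.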

    Robust Covariance Adaptation in Adaptive Importance Sampling

    Importance sampling (IS) is a Monte Carlo methodology that allows for approximation of a target distribution using weighted samples generated from another proposal distribution. Adaptive importance sampling (AIS) implements an iterative version of IS which adapts the parameters of the proposal distribution in order to improve estimation of the target. While the adaptation of the location (mean) of the proposals has been largely studied, an important challenge of AIS relates to the difficulty of adapting the scale parameter (covariance matrix). In the case of weight degeneracy, adapting the covariance matrix using the empirical covariance results in a singular matrix, which leads to poor performance in subsequent iterations of the algorithm. In this paper, we propose a novel scheme which exploits recent advances in the IS literature to prevent the so-called weight degeneracy. The method efficiently adapts the covariance matrix of a population of proposal distributions and achieves a significant performance improvement in high-dimensional scenarios. We validate the new method through computer simulations
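    The failure mode and one simple remedy can be sketched as follows: a badly placed proposal produces degenerate weights, the weighted empirical covariance becomes near-singular, and a diagonal regularizer keeps the next proposal covariance usable. The ridge here is a generic stand-in, not the paper's specific robust scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    d, N = 10, 50

    # Target: standard normal in d dimensions; proposal: badly located
    # and too narrow, so the importance weights degenerate.
    mu_prop = np.full(d, 2.0)
    sigma_prop = 0.5
    samples = mu_prop + sigma_prop * rng.normal(size=(N, d))

    # Unnormalized log-densities; normalizing constants cancel once
    # the weights are self-normalized.
    log_target = -0.5 * (samples**2).sum(axis=1)
    log_prop = -0.5 * (((samples - mu_prop) / sigma_prop)**2).sum(axis=1)
    log_w = log_target - log_prop
    w = np.exp(log_w - log_w.max())          # numerically stabilized
    w /= w.sum()

    ess = 1.0 / (w**2).sum()                 # effective sample size

    # Weighted empirical moments; under weight degeneracy (ess << d)
    # the weighted covariance is near-singular, so a small diagonal
    # regularizer keeps the adapted proposal covariance well conditioned.
    mean_w = w @ samples
    Xc = samples - mean_w
    C = Xc.T @ (Xc * w[:, None])
    C_reg = C + 1e-3 * np.eye(d)             # illustrative ridge
    ```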

    Nonlinear shrinkage estimation of large-dimensional covariance matrices

    Many statistical applications require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix to a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues to their grand mean, while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. We show how to construct an estimator that is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can result in sizeable improvements over the sample covariance matrix and also over linear shrinkage.
    Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics at http://dx.doi.org/10.1214/12-AOS989.
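    The linear-versus-nonlinear distinction can be shown directly on the eigenvalues. Below, linear shrinkage pulls every sample eigenvalue toward the grand mean by the same factor, while the oracle nonlinear rule replaces each eigenvalue by u_i'Σu_i; the oracle uses the true covariance (here the identity) and is therefore for illustration only, whereas the paper constructs a feasible estimator with the same asymptotic behavior. The shrinkage weight `alpha` is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    p, n = 50, 100
    X = rng.normal(size=(n, p))              # true covariance is the identity

    S = np.cov(X, rowvar=False)
    lam, U = np.linalg.eigh(S)               # sample eigenvalues/eigenvectors

    # Linear shrinkage: pull every eigenvalue toward the grand mean by
    # the same amount, keeping the sample eigenvectors; equivalent to
    # averaging S with a multiple of the identity.
    alpha = 0.5                              # illustrative shrinkage weight
    S_lin = U @ np.diag(alpha * lam.mean() + (1 - alpha) * lam) @ U.T

    # Oracle nonlinear shrinkage (needs the true covariance): replace
    # each eigenvalue by u_i' Sigma u_i, a per-eigenvalue transformation.
    Sigma_true = np.eye(p)
    d_oracle = np.einsum('ji,jk,ki->i', U, Sigma_true, U)
    S_nl = U @ np.diag(d_oracle) @ U.T
    ```

    In this identity-covariance toy case the oracle recovers the truth exactly, and linear shrinkage is strictly closer to it than the raw sample covariance.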