
    A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    We study the design of portfolios under a minimum-risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not handled correctly, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator, assuming samples with a heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by tuning the shrinkage intensity online. Simulations show that our portfolio optimization method outperforms existing methods on both synthetic and real market data.
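
    As a rough illustration of the hybrid estimator this abstract describes, the sketch below combines Tyler's fixed-point iteration with shrinkage toward the identity. It is a minimal sketch, not the paper's algorithm: the function name shrunk_tyler is ours, the shrinkage intensity rho is taken as a fixed input (the paper instead tunes it online by minimizing its consistent risk estimate), and the data are assumed centered.

```python
import numpy as np

def shrunk_tyler(X, rho, n_iter=100, tol=1e-6):
    """Hybrid robust/shrinkage covariance estimate: Tyler's fixed-point
    iteration with shrinkage toward the identity.

    X   : (n, p) array of centered return samples (rows are observations)
    rho : shrinkage intensity in (0, 1], assumed given here
    """
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        # Weights p / (x_i' Sigma^{-1} x_i) down-weight heavy-tailed outliers.
        q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
        S = (X * (p / q)[:, None]).T @ X / n
        # Shrink toward the identity, then renormalize the trace
        # (Tyler's estimator is only defined up to scale).
        Sigma_new = (1.0 - rho) * S + rho * np.eye(p)
        Sigma_new *= p / np.trace(Sigma_new)
        if np.linalg.norm(Sigma_new - Sigma, 'fro') < tol:
            return Sigma_new
        Sigma = Sigma_new
    return Sigma
```

    Given the resulting estimate, the minimum variance weights follow in one line, e.g. w = np.linalg.solve(Sigma, np.ones(p)); w /= w.sum().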

    Nonlinear shrinkage estimation of large-dimensional covariance matrices

    Many statistical applications require an estimate of a covariance matrix and/or its inverse. When the matrix dimension is large compared to the sample size, which happens frequently, the sample covariance matrix is known to perform poorly and may suffer from ill-conditioning. There already exists an extensive literature concerning improved estimators in such situations. In the absence of further knowledge about the structure of the true covariance matrix, the most successful approach so far, arguably, has been shrinkage estimation. Shrinking the sample covariance matrix to a multiple of the identity, by taking a weighted average of the two, turns out to be equivalent to linearly shrinking the sample eigenvalues to their grand mean, while retaining the sample eigenvectors. Our paper extends this approach by considering nonlinear transformations of the sample eigenvalues. We show how to construct an estimator that is asymptotically equivalent to an oracle estimator suggested in previous work. As demonstrated in extensive Monte Carlo simulations, the resulting bona fide estimator can result in sizeable improvements over the sample covariance matrix and also over linear shrinkage. Comment: Published at http://dx.doi.org/10.1214/12-AOS989 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
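
    The equivalence stated in this abstract, shrinking toward a multiple of the identity versus linearly shrinking the sample eigenvalues toward their grand mean, can be verified in a few lines. This is a sketch only: the intensity delta is hand-picked rather than chosen by the Ledoit-Wolf formula, and the paper's nonlinear estimator, which transforms each eigenvalue individually, is not reproduced here.

```python
import numpy as np

def linear_shrinkage(S, delta):
    """Linear shrinkage of a sample covariance S toward a multiple of the
    identity, with intensity delta in [0, 1]."""
    p = S.shape[0]
    mu = np.trace(S) / p                  # grand mean of the eigenvalues
    return (1 - delta) * S + delta * mu * np.eye(p)

# Equivalent view: keep sample eigenvectors, shrink eigenvalues to their mean.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
S = X.T @ X / 50
vals, vecs = np.linalg.eigh(S)
delta = 0.3
shrunk_vals = (1 - delta) * vals + delta * vals.mean()
assert np.allclose(linear_shrinkage(S, delta),
                   (vecs * shrunk_vals) @ vecs.T)
```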

    Regularized Block Toeplitz Covariance Matrix Estimation via Kronecker Product Expansions

    In this work we consider the estimation of spatio-temporal covariance matrices in the low-sample, non-Gaussian regime. We impose covariance structure in the form of a sum-of-Kronecker-products decomposition (Tsiligkaridis et al. 2013, Greenewald et al. 2013) with diagonal correction (Greenewald et al.), which we refer to as DC-KronPCA, in the estimation of multiframe covariance matrices. This paper extends the approaches of Tsiligkaridis et al. in two directions. First, we modify the diagonally corrected method of Greenewald et al. to include a block Toeplitz constraint imposing temporal stationarity structure. Second, we improve the conditioning of the estimate in the very low sample regime by using Ledoit-Wolf-type shrinkage regularization similar to Chen, Hero et al. (2010). For improved robustness to heavy-tailed distributions, we modify KronPCA to incorporate robust shrinkage estimation (Chen, Hero et al. 2011). Results of numerical simulations establish benefits in terms of estimation MSE when compared to previous methods. Finally, we apply our methods to a real-world network spatio-temporal anomaly detection problem and achieve superior results. Comment: To appear at IEEE SSP 2014, 4 pages.
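
    To make the sum-of-Kronecker-products structure concrete, the following sketch shows the rearrangement idea underlying KronPCA-type estimators (the Van Loan-Pitsianis construction): rearranging a pq x pq covariance turns Kronecker structure into low rank, so a truncated SVD yields the nearest sum of r Kronecker products in Frobenius norm. The function name kron_sum_approx is ours, and the block Toeplitz constraint, diagonal correction, and robust shrinkage that the paper adds on top of this step are not shown.

```python
import numpy as np

def kron_sum_approx(Sigma, p, q, r):
    """Approximate a (p*q x p*q) matrix Sigma by sum_{k=1}^{r} A_k kron B_k,
    with A_k (p x p) and B_k (q x q), via SVD of the rearranged matrix."""
    # Rearrange so that each Kronecker term becomes a rank-one term:
    # R[i*p + k, j*q + l] = Sigma[i*q + j, k*q + l].
    R = (Sigma.reshape(p, q, p, q)
              .transpose(0, 2, 1, 3)
              .reshape(p * p, q * q))
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = [(np.sqrt(s[k]) * U[:, k].reshape(p, p),
              np.sqrt(s[k]) * Vt[k].reshape(q, q)) for k in range(r)]
    approx = sum(np.kron(A, B) for A, B in terms)
    return terms, approx
```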

    OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage

    The truncated singular value decomposition (SVD) of the measurement matrix is the optimal solution to the representation problem of how to best approximate a noisy measurement matrix using a low-rank matrix. Here, we consider the (unobservable) denoising problem of how to best approximate a low-rank signal matrix buried in noise by optimal (re)weighting of the singular vectors of the measurement matrix. We exploit recent results from random matrix theory to exactly characterize the large matrix limit of the optimal weighting coefficients and show that they can be computed directly from data for a large class of noise models that includes the i.i.d. Gaussian noise case. Our analysis brings into sharp focus the shrinkage-and-thresholding form of the optimal weights, the non-convex nature of the associated shrinkage function (on the singular values) and explains why matrix regularization via singular value thresholding with convex penalty functions (such as the nuclear norm) will always be suboptimal. We validate our theoretical predictions with numerical simulations, develop an implementable algorithm (OptShrink) that realizes the predicted performance gains and show how our methods can be used to improve estimation in the setting where the measured matrix has missing entries. Comment: Published version. The algorithm can be downloaded from http://www.eecs.umich.edu/~rajnrao/optshrin
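
    The sketch below gives our reading of the data-driven weighting this abstract describes: each retained singular value s_i receives the weight w_i = -2 D(s_i) / D'(s_i), with the D-transform estimated from the trailing (noise) singular values. The helper name optshrink_denoise is ours, the signal rank r is assumed known, and the released code at the URL above should be treated as the authoritative implementation.

```python
import numpy as np

def optshrink_denoise(Y, r):
    """Data-driven singular value shrinkage: reweight the r leading
    singular vectors of Y using w_i = -2 D(s_i) / D'(s_i), where the
    D-transform is estimated from the trailing singular values.
    Assumes r < min(Y.shape)."""
    transposed = Y.shape[0] > Y.shape[1]
    if transposed:
        Y = Y.T
    m, n = Y.shape                        # now m <= n
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    sig, noise = s[:r], s[r:]

    def D_and_dD(z):
        t = z**2 - noise**2
        f, df = np.sum(z / t), np.sum(-(z**2 + noise**2) / t**2)
        phi1, dphi1 = f / (m - r), df / (m - r)
        # The n-dimensional side sees n - m additional zero singular values.
        phi2 = (f + (n - m) / z) / (n - r)
        dphi2 = (df - (n - m) / z**2) / (n - r)
        return phi1 * phi2, dphi1 * phi2 + phi1 * dphi2

    w = np.array([-2.0 * D / dD for D, dD in map(D_and_dD, sig)])
    X_hat = (U[:, :r] * w) @ Vt[:r]       # reweighted leading singular vectors
    return (X_hat.T if transposed else X_hat), w
```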