
    Calibration of shrinkage estimators for portfolio optimization

    Shrinkage estimation is an area widely studied in statistics. In this paper, we consider the role of shrinkage estimators in the construction of the investor's portfolio. We study the performance of shrinking the sample moments used to estimate portfolio weights, as well as the performance of shrinking the naive sample portfolio weights themselves. We provide a theoretical and empirical analysis of several new methods to calibrate shrinkage estimators within portfolio optimization. Keywords: portfolio choice, estimation error, shrinkage estimators, smoothed bootstrap.
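    The two approaches the abstract contrasts can be sketched as follows: shrinking the estimated weights directly is just a convex combination of the naive sample minimum-variance weights and a simple target such as the equally weighted portfolio. This is a minimal sketch, not the paper's calibration method; the function name and the 1/N target are illustrative assumptions.

```python
import numpy as np

def shrunk_weights(returns, delta):
    """Shrink naive sample minimum-variance weights toward the
    equally weighted (1/N) portfolio with intensity delta in [0, 1].
    Illustrative sketch; the paper studies how to calibrate delta."""
    n_obs, n_assets = returns.shape
    sigma = np.cov(returns, rowvar=False)        # sample covariance matrix
    ones = np.ones(n_assets)
    x = np.linalg.solve(sigma, ones)             # Sigma^{-1} 1
    w_sample = x / (ones @ x)                    # naive min-variance weights
    w_target = ones / n_assets                   # 1/N shrinkage target
    return (1.0 - delta) * w_sample + delta * w_target
```

    With delta = 0 this returns the plug-in sample weights; with delta = 1 it returns the 1/N portfolio; the calibration question is where in between to sit.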

    Large Dimensional Analysis and Optimization of Robust Shrinkage Covariance Matrix Estimators

    This article studies two regularized robust estimators of scatter matrices proposed (and proved to be well defined) in parallel in (Chen et al., 2011) and (Pascal et al., 2013), based on Tyler's robust M-estimator (Tyler, 1987) and on Ledoit and Wolf's shrinkage covariance matrix estimator (Ledoit and Wolf, 2004). These hybrid estimators have the advantage of conveying (i) robustness to outliers or impulsive samples and (ii) improved behaviour in small sample sizes relative to the classical sample covariance matrix estimator. We consider here the case of i.i.d. elliptical zero-mean samples in the regime where both sample and population sizes are large. We demonstrate that, under this setting, the estimators under study asymptotically behave similarly to well-understood random matrix models. This characterization allows us to derive optimal shrinkage strategies to estimate the population scatter matrix, improving significantly upon the empirical shrinkage method proposed in (Chen et al., 2011). Comment: Journal of Multivariate Analysis.
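    The hybrid estimator described here is a fixed-point iteration: Tyler's data-adaptive weighting regularized toward the identity. A minimal sketch, assuming the Chen et al. (2011)-style recursion with trace normalization (the iteration count and normalization convention are implementation choices, not taken from the article):

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=50):
    """Regularized Tyler scatter estimator: each sample x_i is weighted
    by 1 / (x_i' Sigma^{-1} x_i), then shrunk toward the identity with
    intensity rho in (0, 1]. Sketch, not the article's exact code."""
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # Mahalanobis-type statistics x_i' Sigma^{-1} x_i for all samples
        d = np.einsum('ij,jk,ik->i', X, inv, X)
        s = (p / n) * (X.T * (1.0 / d)) @ X      # weighted scatter
        sigma = (1.0 - rho) * s + rho * np.eye(p)
        sigma = p * sigma / np.trace(sigma)      # fix the scale (trace = p)
    return sigma
```

    The down-weighting of large-norm samples is what gives robustness to impulsive data; the rho * I term keeps the estimator well defined when n is comparable to p.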

    Covariance Estimation: The GLM and Regularization Perspectives

    Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance estimation, particularly in the recent high-dimensional data environment where enforcing the positive-definiteness constraint could be computationally expensive. We provide a survey of the progress made in modeling covariance matrices from two relatively complementary perspectives: (1) generalized linear models (GLM) or parsimony and use of covariates in low dimensions, and (2) regularization or sparsity for high-dimensional data. An emerging, unifying and powerful trend in both perspectives is that of reducing a covariance estimation problem to that of estimating a sequence of regression problems. We point out several instances of the regression-based formulation. A notable case is in sparse estimation of a precision matrix or a Gaussian graphical model leading to the fast graphical LASSO algorithm. Some advantages and limitations of the regression-based Cholesky decomposition relative to the classical spectral (eigenvalue) and variance-correlation decompositions are highlighted. The former provides an unconstrained and statistically interpretable reparameterization, and guarantees the positive-definiteness of the estimated covariance matrix. It reduces the unintuitive task of covariance estimation to that of modeling a sequence of regressions at the cost of imposing an a priori order among the variables. Elementwise regularization of the sample covariance matrix such as banding, tapering and thresholding has desirable asymptotic properties and the sparse estimated covariance matrix is positive definite with probability tending to one for large samples and dimensions. Comment: Published at http://dx.doi.org/10.1214/11-STS358 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
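    The regression-based Cholesky reparameterization the abstract highlights can be made concrete: regress each variable on its predecessors in the given order; the negated coefficients fill a unit lower-triangular matrix T and the residual variances form a diagonal D, with T S T' = D for the sample covariance S. A minimal sketch under these standard definitions (function name is illustrative):

```python
import numpy as np

def cholesky_regressions(X):
    """Modified Cholesky reparameterization of the sample covariance:
    regress variable j on variables 1..j-1. Returns unit lower-triangular
    T and residual variances d with T @ S @ T.T = diag(d)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                      # center each column
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var(ddof=1)
    for j in range(1, p):
        phi = np.linalg.lstsq(Xc[:, :j], Xc[:, j], rcond=None)[0]
        T[j, :j] = -phi                          # negated regression coefficients
        resid = Xc[:, j] - Xc[:, :j] @ phi
        d[j] = resid.var(ddof=1)                 # prediction error variance
    return T, d
```

    Reassembling Sigma_hat = T^{-1} diag(d) T^{-T} is positive definite by construction whenever all d[j] > 0, which is the key advantage over working with the p(p+1)/2 constrained entries directly; the price, as the abstract notes, is the a priori variable ordering.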

    Comparing Forecasts of Extremely Large Conditional Covariance Matrices

    Modelling and forecasting high dimensional covariance matrices is a key challenge in data-rich environments involving even thousands of time series, since most of the available models suffer from the curse of dimensionality. In this paper, we challenge some popular multivariate GARCH (MGARCH) and Stochastic Volatility (MSV) models by fitting them to forecast the conditional covariance matrices of financial portfolios with dimension up to 1000 assets observed daily over a 30-year time span. The time evolution of the conditional variances and covariances estimated by the different models is compared and evaluated in the context of a portfolio selection exercise. We conclude that, in a realistic context in which transaction costs are taken into account, modelling the covariance matrices as latent Wishart processes delivers more stable optimal portfolio compositions and, consequently, higher Sharpe ratios. Guilherme V. Moura is supported by the Brazilian Government through grants number 424942-2016-0 (CNPQ) and 302865-2016-0 (CNPQ). André A.P. Santos is supported by the Brazilian Government through grants number 303688-2016-5 (CNPQ) and 420038-2018-3 (CNPQ). Esther Ruiz is supported by the Spanish Government through grant number ECO2015-70331-C2-2-R (MINECO/FEDER).
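    The evaluation criterion the paper uses, Sharpe ratios net of transaction costs, rewards models whose forecasts produce stable weights: unstable weights generate turnover, and turnover is charged against returns. A minimal sketch of that accounting (the 10 bp proportional cost and annualization factor are illustrative assumptions, not figures from the paper):

```python
import numpy as np

def net_sharpe(weights, returns, cost=0.001):
    """Annualized Sharpe ratio of a daily-rebalanced portfolio net of
    proportional transaction costs: each unit of turnover between
    consecutive days is charged `cost`. weights, returns: (T, N) arrays."""
    gross = np.sum(weights * returns, axis=1)            # daily gross returns
    turnover = np.abs(np.diff(weights, axis=0)).sum(axis=1)
    net = gross.copy()
    net[1:] -= cost * turnover                           # cost on rebalancing days
    return np.sqrt(252) * net.mean() / net.std(ddof=1)
```

    Under this metric, a model with slightly worse covariance forecasts but much steadier optimal compositions can dominate, which is the mechanism behind the paper's conclusion in favour of latent Wishart processes.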


    A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing the shrinkage intensity online. Our portfolio optimization method is shown via simulations to outperform existing methods for both synthetic and real market data.
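    The pipeline described here, a shrunk covariance estimate plugged into the global minimum variance problem, reduces to a closed-form weight vector once the shrinkage intensity is fixed. A minimal sketch with a Ledoit-Wolf-style scaled-identity target and a plain sample covariance in place of the paper's robust Tyler-based estimator; the intensity is a user input here, whereas the paper tunes it online from a random-matrix risk estimate:

```python
import numpy as np

def min_var_weights(returns, shrink):
    """Global minimum variance weights w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
    using the linearly shrunk covariance
    Sigma_hat = (1 - shrink) * S + shrink * (tr(S) / p) * I."""
    s = np.cov(returns, rowvar=False)            # sample covariance
    p = s.shape[0]
    target = (np.trace(s) / p) * np.eye(p)       # scaled-identity target
    sigma = (1.0 - shrink) * s + shrink * target
    w = np.linalg.solve(sigma, np.ones(p))
    return w / w.sum()                           # full-investment constraint
```

    At shrink = 1 the covariance is proportional to the identity and the weights collapse to 1/N; the interesting regime is the data-driven intermediate intensity.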

    Large dynamic covariance matrices: Enhancements based on intraday data

    Multivariate GARCH models do not perform well in large dimensions due to the so-called curse of dimensionality. The recent DCC-NL model of Engle et al. (2019) is able to overcome this curse via nonlinear shrinkage estimation of the unconditional correlation matrix. In this paper, we show how performance can be increased further by using open/high/low/close (OHLC) price data instead of daily returns alone. A key innovation, for the improved modeling of not only dynamic variances but also of dynamic correlations, is the concept of a regularized return, obtained from a volatility proxy in conjunction with a smoothed sign of the observed return.
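    The "regularized return" idea can be sketched as replacing the raw return's magnitude with an intraday (e.g. OHLC-based) volatility proxy while keeping a smoothed version of its sign. This is an illustrative reading, assuming EWMA smoothing of the sign; the paper's exact construction and smoothing scheme may differ.

```python
import numpy as np

def regularized_returns(r, proxy_vol, span=10):
    """Combine a volatility proxy with an EWMA-smoothed sign of the
    observed daily return r. Both the EWMA choice and the span are
    assumptions for illustration."""
    sign = np.sign(r)
    alpha = 2.0 / (span + 1)                     # standard EWMA weight
    smooth = np.empty(len(sign))
    smooth[0] = sign[0]
    for t in range(1, len(sign)):
        smooth[t] = alpha * sign[t] + (1.0 - alpha) * smooth[t - 1]
    return proxy_vol * smooth                    # magnitude from the proxy
```

    The point of such a construction is that the magnitude comes from the less noisy intraday proxy, while the (smoothed) sign preserves the directional information needed for dynamic correlations.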