5,018 research outputs found

    Portfolio choice and estimation risk: a comparison of Bayesian approaches to resampled efficiency

    Estimation risk is known to have a huge impact on mean/variance (MV) optimized portfolios and is one of the primary reasons why standard Markowitz optimization is often infeasible in practice. Several approaches to incorporating estimation risk into portfolio selection have been suggested in the earlier literature, typically either heuristic approaches (e.g., placing restrictions on portfolio weights) or Bayesian estimators. Within the Bayesian class, this paper focuses on the Bayes/Stein estimator developed by Jorion (1985, 1986), which is probably the most popular such estimator. We show that optimal portfolios based on the Bayes/Stein estimator correspond to portfolios on the original mean-variance efficient frontier with a higher risk aversion, and we quantify this increase in risk aversion. Furthermore, we review a relatively new approach introduced by Michaud (1998), resampled efficiency. Michaud argues that the limitations of MV efficiency in practice generally derive from a lack of statistical understanding of MV optimization, and he advocates a statistical view of MV optimization that leads to new procedures which can reduce estimation risk. Resampled efficiency has so far been compared only to standard Markowitz portfolios, not to other approaches that explicitly incorporate estimation risk; this paper attempts to fill that gap. Optimal portfolios based on the Bayes/Stein estimator and resampled efficiency are compared in an empirical out-of-sample study in terms of their Sharpe ratios and in terms of stochastic dominance.
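    To make the comparison concrete, a minimal numpy sketch of the two competing estimators follows, assuming simulated return data and an unconstrained mean-variance rule w = (1/gamma) Sigma^{-1} mu. The shrinkage formula follows a common textbook statement of Jorion's Bayes/Stein estimator, and the resampling routine is a simplified Michaud-style bootstrap, not the exact procedures evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mv_weights(mu, sigma, gamma=3.0):
    """Unconstrained mean-variance weights w = (1/gamma) * Sigma^{-1} mu."""
    return np.linalg.solve(sigma, mu) / gamma

def bayes_stein_mean(returns):
    """Common textbook form of the Jorion (1986) Bayes/Stein shrunk mean:
    shrink the sample mean toward the minimum-variance portfolio mean."""
    T, N = returns.shape
    mu_hat = returns.mean(axis=0)
    inv = np.linalg.inv(np.cov(returns, rowvar=False))
    ones = np.ones(N)
    mu_g = ones @ inv @ mu_hat / (ones @ inv @ ones)   # min-variance portfolio mean
    d = mu_hat - mu_g * ones
    lam = (N + 2) / (d @ inv @ d)
    w = lam / (lam + T)                                # shrinkage intensity in [0, 1]
    return (1 - w) * mu_hat + w * mu_g * ones

def resampled_weights(returns, gamma=3.0, n_boot=500):
    """Michaud-style resampled efficiency: average MV weights over bootstrap samples."""
    T, N = returns.shape
    ws = []
    for _ in range(n_boot):
        r = returns[rng.integers(0, T, size=T)]
        ws.append(mv_weights(r.mean(axis=0), np.cov(r, rowvar=False), gamma))
    return np.mean(ws, axis=0)

# Illustrative data: T=120 monthly returns on N=5 hypothetical assets.
T, N = 120, 5
true_mu = rng.normal(0.008, 0.003, N)
A = rng.normal(0, 0.04, (N, N))
true_sigma = A @ A.T + 0.02**2 * np.eye(N)
returns = rng.multivariate_normal(true_mu, true_sigma, size=T)

sigma_hat = np.cov(returns, rowvar=False)
w_bs = mv_weights(bayes_stein_mean(returns), sigma_hat)
w_re = resampled_weights(returns)
print("Bayes/Stein weights:", np.round(w_bs, 3))
print("Resampled weights:  ", np.round(w_re, 3))
```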

    Statistical inference for the EU portfolio in high dimensions

    In this paper, using a shrinkage-based approach for the portfolio weights and modern results from random matrix theory, we construct an effective procedure for testing the efficiency of the expected utility (EU) portfolio and discuss the asymptotic behavior of the proposed test statistic under the high-dimensional asymptotic regime, namely when the number of assets $p$ increases at the same rate as the sample size $n$ such that their ratio $p/n$ approaches a positive constant $c \in (0,1)$ as $n \to \infty$. We provide an extensive simulation study in which the power function and receiver operating characteristic curves of the test are analyzed. In the empirical study, the methodology is applied to the returns of S&P 500 constituents. Comment: 27 pages, 5 figures, 2 tables.
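    For reference, a minimal Python/numpy sketch of the sample expected-utility portfolio in a high-dimensional setting is given below, on illustrative data. It uses the standard closed-form EU weights and only mimics the regime $p/n \to c$; the paper's shrinkage correction and test statistic are not reproduced here.

```python
import numpy as np

def eu_portfolio(returns, gamma=5.0):
    """Sample expected-utility (EU) portfolio: argmax w'mu - (gamma/2) w'Sigma w
    subject to w'1 = 1. Closed form:
        w = Sigma^{-1}1 / (1'Sigma^{-1}1) + (1/gamma) Q mu,
        Q = Sigma^{-1} - Sigma^{-1}11'Sigma^{-1} / (1'Sigma^{-1}1)."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    w_gmv = inv @ ones / (ones @ inv @ ones)   # global minimum-variance component
    Q = inv - np.outer(inv @ ones, ones @ inv) / (ones @ inv @ ones)
    return w_gmv + Q @ mu / gamma

# High-dimensional regime: p/n -> c in (0, 1), here c = 0.5 with illustrative Gaussian data.
rng = np.random.default_rng(1)
n, p = 400, 200
returns = rng.multivariate_normal(np.full(p, 0.001), 0.02**2 * np.eye(p), size=n)
w = eu_portfolio(returns)
print(w.sum())   # weights sum to one by construction
```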

    Tracking Error and Active Portfolio Management

    Persistent bear market conditions have led to a shift of focus in the tracking error literature. Until recently, the portfolio allocation literature focused on tracking error minimization as a consequence of passive benchmark management under portfolio weight, transaction cost, and short-selling constraints. Abysmal benchmark performance shifted the literature's focus towards active portfolio strategies that aim at beating the benchmark while keeping tracking error within acceptable bounds. We investigate an active (dynamic) portfolio allocation strategy that exploits the predictability in the conditional variance-covariance matrix of asset returns. To illustrate our procedure we use Jorion's (2002) tracking error frontier methodology. We apply our model to a representative portfolio of Australian stocks over the period January 1999 through November 2002.
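    The sketch below (Python/numpy, illustrative inputs) shows the basic quantities involved: ex-ante tracking error and a simple Roll-style active tilt that maximizes expected active return for a fixed tracking-error budget. It is static and unconstrained, and does not reproduce Jorion's (2002) tracking error frontier or the conditional covariance dynamics used in the paper.

```python
import numpy as np

def tracking_error(w, w_bench, sigma):
    """Ex-ante tracking error sqrt((w - wb)' Sigma (w - wb))."""
    a = w - w_bench
    return np.sqrt(a @ sigma @ a)

def te_constrained_tilt(mu, sigma, w_bench, te_target):
    """Self-financing active tilt maximizing expected active return for a given
    tracking-error budget (Roll-style TEV solution, no weight or short-sale limits):
    a* is proportional to Q mu, Q = Sigma^{-1} - Sigma^{-1}11'Sigma^{-1}/(1'Sigma^{-1}1),
    scaled so that sqrt(a*' Sigma a*) = te_target."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    Q = inv - np.outer(inv @ ones, ones @ inv) / (ones @ inv @ ones)
    a = Q @ mu
    a *= te_target / np.sqrt(a @ sigma @ a)
    return w_bench + a

rng = np.random.default_rng(2)
N = 10
w_bench = np.full(N, 1.0 / N)                 # hypothetical benchmark weights
mu = rng.normal(0.08, 0.03, N)                # illustrative expected returns
A = rng.normal(0, 0.15, (N, N))
sigma = A @ A.T / N + 0.05**2 * np.eye(N)     # illustrative covariance matrix
w = te_constrained_tilt(mu, sigma, w_bench, te_target=0.03)
print("active weights sum:", round((w - w_bench).sum(), 12))
print("ex-ante TE:", round(tracking_error(w, w_bench, sigma), 4))
```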

    Dynamic modeling of mean-reverting spreads for statistical arbitrage

    Statistical arbitrage strategies, such as pairs trading and its generalizations, rely on the construction of mean-reverting spreads enjoying a certain degree of predictability. Gaussian linear state-space processes have recently been proposed as a model for such spreads under the assumption that the observed process is a noisy realization of some hidden states. Real-time estimation of the unobserved spread process can reveal temporary market inefficiencies which can then be exploited to generate excess returns. Building on previous work, we embrace the state-space framework for modeling spread processes and extend this methodology along three different directions. First, we introduce time-dependency in the model parameters, which allows for quick adaptation to changes in the data generating process. Second, we provide an on-line estimation algorithm that can be run constantly in real time. Being computationally fast, the algorithm is particularly suitable for building aggressive trading strategies based on high-frequency data and may be used as a monitoring device for mean-reversion. Finally, our framework naturally provides informative uncertainty measures of all the estimated parameters. Experimental results based on Monte Carlo simulations and historical equity data are discussed, including a co-integration relationship involving two exchange-traded funds. Comment: 34 pages, 6 figures. Submitted.
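    A minimal fixed-parameter sketch of the underlying idea follows: a scalar Kalman filter recovering a hidden mean-reverting spread from noisy observations in real time. The time-varying parameters and on-line parameter estimation developed in the paper are not reproduced; the model parameters here are assumed known for illustration.

```python
import numpy as np

def kalman_spread_filter(y, mu, phi, q, r, s0=None, p0=1.0):
    """On-line Kalman filter for a scalar mean-reverting hidden spread:
        state:       s_t = mu + phi * (s_{t-1} - mu) + w_t,  w_t ~ N(0, q)
        observation: y_t = s_t + v_t,                        v_t ~ N(0, r)
    Returns filtered means and variances of s_t given y_1..y_t."""
    s = mu if s0 is None else s0
    p = p0
    means, variances = [], []
    for obs in y:
        # predict step
        s_pred = mu + phi * (s - mu)
        p_pred = phi**2 * p + q
        # update step
        k = p_pred / (p_pred + r)          # Kalman gain
        s = s_pred + k * (obs - s_pred)
        p = (1.0 - k) * p_pred
        means.append(s)
        variances.append(p)
    return np.array(means), np.array(variances)

# Simulated noisy observations of a mean-reverting spread (illustrative parameters).
rng = np.random.default_rng(3)
T, mu, phi, q, r = 500, 0.0, 0.95, 0.01, 0.05
s = np.zeros(T)
for t in range(1, T):
    s[t] = mu + phi * (s[t - 1] - mu) + rng.normal(0, np.sqrt(q))
y = s + rng.normal(0, np.sqrt(r), T)

m, v = kalman_spread_filter(y, mu, phi, q, r)
# A simple monitoring rule: flag times when the filtered spread is far from its mean.
signal = np.abs(m - mu) > 2 * np.sqrt(v)
print("fraction of flagged observations:", signal.mean())
```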

    In Defense of Portfolio Optimization: What If We Can Forecast?

    We challenge the academic consensus that estimation error makes mean-variance portfolio strategies inferior to passive equal-weighted approaches. We demonstrate analytically, via simulation, and empirically that investors endowed with modest forecasting ability benefit substantially from a mean-variance (MV) approach. An investor with some forecasting ability improves expected utility by increasing the number of assets considered. We frame our study realistically using budget constraints, transaction costs, and out-of-sample testing for a wide range of investments. We derive practical decision rules for choosing between passive and mean-variance optimisation, and we generate results consistent with much financial market practice and the original Markowitz formulation.
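    The following Monte Carlo sketch (Python/numpy, illustrative parameters) conveys the basic point: when forecasts carry even modest information about expected returns, unconstrained mean-variance weights can outperform the 1/N portfolio in realized Sharpe ratio. It is a stylized simulation, not the paper's empirical design, and ignores the budget constraints and transaction costs used there.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_sharpe(n_assets=50, n_periods=240, skill=0.2, gamma=5.0, n_trials=200):
    """Illustrative Monte Carlo: an investor forms forecasts correlated with true mean
    returns (cross-sectional correlation roughly equal to `skill`) and compares
    unconstrained mean-variance weights against the equal-weight (1/N) portfolio
    in terms of realized per-period Sharpe ratio."""
    sr_mv, sr_ew = [], []
    for _ in range(n_trials):
        true_mu = rng.normal(0.006, 0.004, n_assets)        # monthly expected returns
        vols = rng.uniform(0.03, 0.10, n_assets)
        sigma = np.outer(vols, vols) * 0.3 + np.diag(vols**2) * 0.7
        # forecast = scaled true signal + noise, so corr(forecast, true_mu) is about `skill`
        noise = rng.normal(0, np.std(true_mu), n_assets)
        forecast = skill * true_mu + np.sqrt(1 - skill**2) * noise
        # true covariance assumed known for simplicity; only the mean is forecast with error
        w_mv = np.linalg.solve(sigma, forecast) / gamma
        w_ew = np.full(n_assets, 1.0 / n_assets)
        rets = rng.multivariate_normal(true_mu, sigma, size=n_periods)
        for w, out in ((w_mv, sr_mv), (w_ew, sr_ew)):
            pr = rets @ w
            out.append(pr.mean() / pr.std())
    return np.mean(sr_mv), np.mean(sr_ew)

print(simulate_sharpe(skill=0.0))   # no forecasting ability
print(simulate_sharpe(skill=0.3))   # modest forecasting ability
```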

    Statistical Estimation for Covariance Structures with Tail Estimates using Nodewise Quantile Predictive Regression Models

    This paper considers the specification of covariance structures with tail estimates. We focus on two aspects: (i) the estimation of the VaR-CoVaR risk matrix in the case of a larger number of time series observations than assets in a portfolio, using quantile predictive regression models without assuming the presence of nonstationary regressors; and (ii) the construction of a novel variable selection algorithm, the so-called Feature Ordering by Centrality Exclusion (FOCE), which is based on an assumption-lean regression framework, has no tuning parameters, and is proved to be consistent under general sparsity assumptions. We illustrate the usefulness of our proposed methodology with numerical studies of real and simulated datasets when modelling systemic risk in a network.
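    A simplified sketch of a pairwise VaR-CoVaR matrix estimated by quantile regression (Adrian-Brunnermeier style, using statsmodels) is given below. It uses contemporaneous rather than predictive regressions and does not implement the FOCE selection algorithm, so it only illustrates the kind of risk matrix the paper estimates.

```python
import numpy as np
import statsmodels.api as sm

def var_covar_matrix(returns, q=0.05):
    """Simplified pairwise VaR-CoVaR matrix via quantile regression: diagonal entries
    are each asset's q-quantile (VaR); entry (i, j) for i != j is the q-quantile of
    asset j conditional on asset i sitting at its own VaR, obtained from a quantile
    regression of asset j's returns on asset i's returns."""
    n, p = returns.shape
    var_q = np.quantile(returns, q, axis=0)
    M = np.diag(var_q).astype(float)
    for i in range(p):
        X = sm.add_constant(returns[:, i])
        for j in range(p):
            if i == j:
                continue
            res = sm.QuantReg(returns[:, j], X).fit(q=q)
            alpha, beta = res.params
            M[i, j] = alpha + beta * var_q[i]   # CoVaR of j given i is at its VaR
    return M

# Illustrative data: 4 correlated return series.
rng = np.random.default_rng(5)
corr = np.full((4, 4), 0.4) + 0.6 * np.eye(4)
returns = rng.multivariate_normal(np.zeros(4), 0.02**2 * corr, size=750)
print(np.round(var_covar_matrix(returns), 4))
```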