2,762 research outputs found

    Estimating High Dimensional Covariance Matrices and its Applications

    Estimating covariance matrices is an important part of portfolio selection, risk management, and asset pricing. This paper reviews recent developments in estimating high-dimensional covariance matrices, where the number of variables can be greater than the number of observations. The limitations of the sample covariance matrix are discussed. Several new approaches are presented, including the shrinkage method, the observable and latent factor method, the Bayesian approach, and the random matrix theory approach. For each method, the construction of covariance matrices is given. The relationships among these methods are discussed.
    Keywords: Factor analysis, Principal components, Singular value decomposition, Random matrix theory, Empirical Bayes, Shrinkage method, Optimal portfolios, CAPM, APT, GMM
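    The shrinkage idea the survey describes can be sketched as below, assuming a fixed, user-chosen shrinkage intensity `delta` and a scaled-identity target (estimators such as Ledoit-Wolf derive both from the data; the function name is illustrative):

```python
import numpy as np

def shrinkage_covariance(returns, delta=0.5):
    """Shrink the sample covariance toward a scaled identity target.

    `delta` in [0, 1] is the shrinkage intensity, fixed here for
    illustration rather than estimated from the data.
    """
    n, p = returns.shape
    X = returns - returns.mean(axis=0)
    sample = X.T @ X / n               # sample covariance (singular if p > n)
    mu = np.trace(sample) / p          # average variance across assets
    target = mu * np.eye(p)            # well-conditioned shrinkage target
    return (1 - delta) * sample + delta * target

rng = np.random.default_rng(0)
R = rng.standard_normal((50, 200))     # p > n: sample covariance is rank-deficient
S = shrinkage_covariance(R, delta=0.3) # shrunk estimate is full rank and invertible
```

    Even a small `delta` lifts every eigenvalue away from zero, which is what makes the estimate usable in portfolio problems that require inverting the covariance matrix.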

    Sparse and stable Markowitz portfolios

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e. portfolios with only a few active positions), and makes it possible to account for transaction costs. Our approach recovers the no-short-positions portfolio as a special case, while still allowing a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naive evenly-weighted portfolio, which, as shown in recent literature, constitutes a very tough benchmark.
    Comment: Better emphasis of main result, new abstract, new examples and figures. New appendix with full details of algorithm. 17 pages, 6 figures
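    The l1-penalized regression core of this formulation can be sketched with a plain proximal-gradient (ISTA) loop; this omits the paper's budget constraint (weights summing to one) and uses illustrative names and data, so it is a sketch of the penalty's sparsifying effect, not the authors' algorithm:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_weights(R, y, tau, steps=500):
    """Proximal gradient (ISTA) for  min_w ||y - R w||^2 + tau ||w||_1.

    R: (T, p) asset returns; y: (T,) target portfolio return series.
    Sketches only the penalized least-squares step; the sum-to-one
    budget constraint from the paper is deliberately left out.
    """
    T, p = R.shape
    L = 2.0 * np.linalg.norm(R, 2) ** 2    # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(steps):
        grad = -2.0 * R.T @ (y - R @ w)    # gradient of the squared loss
        w = soft_threshold(w - grad / L, tau / L)
    return w

# Hypothetical data: only two of eight assets actually drive the target.
rng = np.random.default_rng(2)
R = rng.standard_normal((100, 8))
w_true = np.zeros(8)
w_true[0], w_true[1] = 0.6, 0.4
y = R @ w_true
w = lasso_weights(R, y, tau=0.1)           # recovers the sparse weights
```

    The soft-threshold step is what zeroes out inactive positions, which is the mechanism behind the sparsity the abstract refers to.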

    Sectoral portfolio optimization by judicious selection of financial ratios via PCA

    Embedding value investment in portfolio optimization models has always been a challenge. In this paper, we attempt to incorporate it by first employing principal component analysis (PCA) sector-wise to filter out dominant financial ratios from each sector and thereafter using a portfolio optimization model incorporating second-order stochastic dominance (SSD) criteria to derive the final optimal investment. We consider a total of 11 well-known financial ratios for each sector, representing four categories of ratios, namely liquidity, solvency, profitability, and valuation. PCA is then applied sector-wise over a period of 10 years from April 2004 to March 2014 to extract dominant ratios from each sector in two ways, one from the component solution and the other from each category on the basis of their communalities. The two-step Sectoral Portfolio Optimization (SPO) model integrating the SSD criteria in its constraints is then utilized to build an optimal portfolio. The strategy formed using the former extracted ratios is termed PCA-SPO(A) and the latter PCA-SPO(B). The results obtained from the proposed strategies are compared with the SPO model and two nominal SSD models, with and without financial ratios, for computational study. Empirical performance of the proposed strategies is assessed over the period of 6 years from April 2014 to March 2020 using a rolling window scheme with varying out-of-sample timelines of 3, 6, 9, 12 and 24 months for the S&P BSE 500 market. We observe that the proposed strategy PCA-SPO(B) outperforms all other models in terms of downside deviation, CVaR, VaR, and the Sortino, Rachev, and STARR ratios over almost all out-of-sample periods. This highlights the importance of value investment, where ratios are carefully selected and embedded quantitatively in the portfolio selection process.
    Comment: 26 pages, 12 tables
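    The communality-based filtering step can be sketched as follows; the data, ratio names, and the number of retained components are hypothetical, and this is one plausible reading of "ranking by communality", not the paper's actual pipeline:

```python
import numpy as np

def rank_by_communality(X, names, k=2):
    """Rank columns (financial ratios) by communality: the share of each
    ratio's variance explained by the first k principal components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardise each ratio
    corr = np.corrcoef(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)           # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]            # indices of the k largest
    loadings = vecs[:, top] * np.sqrt(vals[top])
    communality = (loadings ** 2).sum(axis=1)   # per-ratio explained variance
    return [names[i] for i in np.argsort(communality)[::-1]]

# Hypothetical data: two strongly related ratios and one pure-noise ratio.
rng = np.random.default_rng(0)
base = rng.standard_normal(500)
X = np.column_stack([base + 0.05 * rng.standard_normal(500),
                     base + 0.05 * rng.standard_normal(500),
                     rng.standard_normal(500)])
ranked = rank_by_communality(X, ["current ratio", "quick ratio", "P/E"], k=1)
```

    Ratios with low communality carry little of the common sectoral variation and are the ones such a filter would drop before the optimization stage.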

    Hedge fund return predictability: To combine forecasts or combine information?

    While the majority of the predictability literature has been devoted to traditional asset classes, the literature on the predictability of hedge fund returns is quite scant. We focus on assessing the out-of-sample predictability of hedge fund strategies by employing an extensive list of predictors. Aiming to reduce the uncertainty risk associated with a single-predictor model, we first engage in combining the individual forecasts. We consider various combining methods, ranging from simple averaging schemes to more sophisticated ones, such as discounting forecast errors, cluster combining and principal components combining. Our second approach combines the information in the predictors and applies kitchen-sink, bootstrap aggregating (bagging), lasso, ridge and elastic net specifications. Our statistical and economic evaluation findings point to the superiority of simple combination methods. We also provide evidence on the use of hedge fund return forecasts for hedge fund risk measurement and portfolio allocation. Dynamically constructing portfolios based on the combination forecasts of hedge fund returns leads to considerably improved portfolio performance.
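    The contrast between simple averaging and a discounted-forecast-error weighting scheme can be sketched as below; the discount factor and the synthetic error histories are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def combine_forecasts(forecasts, errors=None, delta=0.9):
    """Combine one-step-ahead forecasts from several models.

    forecasts: (models,) vector of current forecasts.
    errors: optional (models, past) matrix of past forecast errors; when
    given, weights are inversely proportional to discounted squared errors
    (a common 'discounted MSFE' scheme), otherwise a simple average.
    """
    if errors is None:
        return forecasts.mean()
    m, T = errors.shape
    discounts = delta ** np.arange(T - 1, -1, -1)   # older errors count less
    dmsfe = (discounts * errors ** 2).sum(axis=1)
    inv = 1.0 / dmsfe
    return (inv / inv.sum()) @ forecasts            # accuracy-weighted average

f = np.array([1.0, 2.0, 3.0])
avg = combine_forecasts(f)                          # simple average: 2.0
past_err = np.array([[0.1] * 5, [1.0] * 5, [1.0] * 5])
weighted = combine_forecasts(f, past_err)           # pulled toward model 0
```

    With one historically accurate model, the weighted combination sits close to that model's forecast, while the simple average treats all models equally, which is the trade-off the paper's evaluation compares.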

    Sparse and stable Markowitz portfolios

    We consider the problem of portfolio selection within the classical Markowitz mean-variance optimizing framework, which has served as the basis for modern portfolio theory for more than 50 years. Efforts to translate this theoretical foundation into a viable portfolio construction algorithm have been plagued by technical difficulties stemming from the instability of the original optimization problem with respect to the available data. Often, instabilities of this type disappear when a regularizing constraint or penalty term is incorporated in the optimization procedure. This approach seems not to have been used in portfolio design until very recently. To provide such a stabilization, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights. This penalty stabilizes the optimization problem, automatically encourages sparse portfolios, and facilitates an effective treatment of transaction costs. We implement our methodology using as our securities two sets of portfolios constructed by Fama and French: the 48 industry portfolios and 100 portfolios formed on size and book-to-market. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve portfolio comprising equal investments in each available asset. In addition to their excellent performance, these portfolios have only a small number of active positions, a desirable feature for small investors, for whom the fixed overhead portion of the transaction cost is not negligible.
    JEL Classification: G11, C00
    Keywords: Penalized Regression, Portfolio Choice, Sparse Portfolio

    Subspace methods for portfolio design

    Financial signal processing (FSP) is one of the emerging areas in the field of signal processing, combining mathematical finance and signal processing. Signal processing engineers treat speech, images, video, and the price of a stock as signals of interest for a given application; the information inferred from the raw data differs for each application. Financial engineers develop new solutions to financial problems using their signal processing knowledge base, with the goal of processing the harvested financial signal to extract meaningful information. Designing investment portfolios has always been at the center of finance. An investment portfolio comprises financial instruments such as stocks, bonds, futures, options, and others. It is designed based on the risk limits and return expectations of investors and managed by portfolio managers. Modern Portfolio Theory (MPT) offers a mathematical method for portfolio optimization. It defines risk as the standard deviation of the portfolio return and provides a closed-form solution to the risk optimization problem from which asset allocations are derived. The risk and the return of an investment are two inseparable performance metrics; therefore, risk-normalized return, called the Sharpe ratio, is the most widely used performance metric for financial investments. Subspace methods have been one of the pillars of functional analysis and signal processing. They are used for portfolio design, regression analysis and noise filtering in finance applications. Each subspace has unique characteristics that may serve the requirements of a specific application. For still image and video compression applications, the Discrete Cosine Transform (DCT) has been successfully employed in transform coding, where the Karhunen-Loeve Transform (KLT) is the optimum block transform. In this dissertation, a signal processing framework to design investment portfolios is proposed.
    Portfolio theory and subspace methods are investigated and jointly treated. First, the KLT, also known as eigenanalysis or principal component analysis (PCA) of the empirical correlation matrix for a random vector process that statistically represents asset returns in a basket of instruments, is investigated. An auto-regressive, order-one, AR(1) discrete process is employed to approximate such an empirical correlation matrix. The eigenvector and eigenvalue kernels of the AR(1) process are utilized for closed-form expressions of the Sharpe ratios and market exposures of the resulting eigenportfolios, whose performances are evaluated and compared for various statistical scenarios. Then, a novel methodology to design subband/filterbank portfolios for a given empirical correlation matrix using the theory of optimal filter banks is proposed, as a natural extension of the celebrated eigenportfolios. Closed-form expressions for the Sharpe ratios and market exposures of subband/filterbank portfolios are derived and compared with those of eigenportfolios. Finally, a simple and powerful new method using rate-distortion theory to sparsify eigen-subspaces, called the Sparse KLT (SKLT), is developed. The method utilizes varying-size mid-tread (zero-zone) pdf-optimized (Lloyd-Max) quantizers created for each eigenvector (or for the entire eigenmatrix) of a given eigen-subspace to achieve the desired cardinality reduction. Sparsity performance comparisons demonstrate the superiority of the proposed SKLT method over popular sparse representation algorithms reported in the literature.
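    The AR(1)-based eigenportfolio construction can be sketched numerically as follows; the dissertation derives closed-form eigenvector kernels, whereas this sketch simply calls a numerical eigensolver on the model correlation matrix, and the basket size and correlation parameter are hypothetical:

```python
import numpy as np

def ar1_correlation(p, rho):
    """Correlation matrix of an AR(1) process: R[i, j] = rho ** |i - j|."""
    idx = np.arange(p)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def market_eigenportfolio(corr):
    """Dominant eigenvector of the correlation matrix, rescaled so the
    weights sum to one. Only the first ('market') eigenportfolio is
    formed, since its entries are all of one sign and sum away from zero."""
    vals, vecs = np.linalg.eigh(corr)
    v = vecs[:, np.argmax(vals)]        # dominant eigenvector
    v = v if v.sum() > 0 else -v        # eigenvector sign is arbitrary
    return v / v.sum()                  # normalise to portfolio weights

# Hypothetical basket: 5 assets with AR(1) cross-correlation rho = 0.8.
w = market_eigenportfolio(ar1_correlation(5, 0.8))
```

    For a symmetric Toeplitz correlation matrix the dominant eigenvector is symmetric about the center of the basket, so the resulting weights are long-only and mirror-symmetric, consistent with its interpretation as a market portfolio.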

    Accurate portfolio risk-return structure modelling

    Markowitz's modern portfolio theory has played a vital role in investment portfolio management and has constantly pushed the development of volatility models, in particular the stochastic volatility model, which reveals the dynamics of conditional volatility. Financial time series and volatility models have become one of the hot spots in operations research. In this thesis, one of the areas we explore is the theoretical formulation of the optimal portfolio selection problem in the Ito calculus framework; in particular, a stochastic variational calculus problem, i.e., seeking the optimal stochastic volatility diffusion family that facilitates the best portfolio selection, identified under continuous-time stochastic optimal control theoretical settings. One of the properties this study examines is the left-shifting role of the GARCH(1,1) (Generalized Autoregressive Conditional Heteroskedastic) model's efficient frontier. This study considers many instances where the left-shifting, superior behaviour of GARCH(1,1) is observed. One such instance is when GARCH(1,1) is compared with other volatility modelling extensions of the GARCH family in a single-index framework. This study demonstrates the persistence of the superiority of the GARCH(1,1) frontier within multiple- and single-index contexts of modern portfolio theory. Many portfolio optimization models are investigated, particularly the Markowitz model and the Sharpe multiple- and single-index models. Includes bibliographical references (p. 313-323)
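    The GARCH(1,1) conditional variance recursion at the center of this comparison can be sketched in a few lines; the parameter values are illustrative, and in practice they would be fitted by maximum likelihood:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:

        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]

    Initialised at the unconditional variance omega / (1 - alpha - beta),
    which requires alpha + beta < 1 (covariance stationarity).
    """
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Hypothetical parameters; with zero returns the variance decays from its
# unconditional level toward omega / (1 - beta).
path = garch11_variance([0.0] * 10, omega=0.1, alpha=0.1, beta=0.8)
```

    Feeding the fitted conditional variances into the mean-variance problem is what moves the efficient frontier relative to a constant-variance model, which is the "left-shifting" effect the thesis studies.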
