Sparse and stable Markowitz portfolios
We consider the problem of portfolio selection within the classical Markowitz
mean-variance framework, reformulated as a constrained least-squares regression
problem. We propose to add to the objective function a penalty proportional to
the sum of the absolute values of the portfolio weights. This penalty
regularizes (stabilizes) the optimization problem, encourages sparse portfolios
(i.e. portfolios with only a few active positions), and makes it possible to
account for transaction costs. Our approach recovers no-short-positions
portfolios as a special case, while still allowing a limited number of short
positions. We implement this methodology on two benchmark data sets constructed by
Fama and French. Using only a modest amount of training data, we construct
portfolios whose out-of-sample performance, as measured by Sharpe ratio, is
consistently and significantly better than that of the naive evenly-weighted
portfolio, which, as shown in recent literature, constitutes a very tough
benchmark.
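The penalized least-squares formulation described in this abstract can be sketched numerically. The following is a hedged illustration on synthetic returns, not the authors' implementation: the weights are split into positive and negative parts so that the ℓ1 penalty becomes smooth and a generic constrained solver applies; the data, target return, and penalty weight are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, N = 120, 10                                 # months of returns, number of assets
R = 0.01 + 0.03 * rng.standard_normal((T, N))  # synthetic asset returns (assumption)
rho = R.mean()                                 # target portfolio return (assumption)
tau = 0.05                                     # l1 penalty weight (assumption)

# Objective: ||rho*1 - R w||^2 + tau * ||w||_1, subject to sum(w) = 1.
# Split w = p - m with p, m >= 0 so the l1 term becomes the smooth sum(p + m).
def obj(x):
    p, m = x[:N], x[N:]
    resid = rho - R @ (p - m)
    return resid @ resid + tau * (p.sum() + m.sum())

cons = [{"type": "eq", "fun": lambda x: x[:N].sum() - x[N:].sum() - 1.0}]
bounds = [(0, None)] * (2 * N)
x0 = np.concatenate([np.full(N, 1.0 / N), np.zeros(N)])  # equal-weight start
res = minimize(obj, x0, bounds=bounds, constraints=cons, method="SLSQP")
w = res.x[:N] - res.x[N:]
w[np.abs(w) < 1e-6] = 0.0                      # prune numerically tiny positions
print("active positions:", np.count_nonzero(w))
```

Larger values of tau shrink more weights exactly to zero, which is how the penalty trades sparsity against tracking of the target return.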
Sparse Index Tracking: Simultaneous Asset Selection and Capital Allocation via ℓ0-Constrained Portfolio
Sparse index tracking is one of the prominent passive portfolio management
strategies that construct a sparse portfolio to track a financial index. A
sparse portfolio is preferable to a full portfolio because it reduces
transaction costs and avoids illiquid assets. To enforce the sparsity of the
portfolio, conventional studies have proposed formulations based on
ℓ1-norm regularizations as a continuous surrogate of the ℓ0-norm
regularization. Although such formulations can be used to construct sparse
portfolios, they are not easy to use in actual investments because parameter
tuning to specify the exact upper bound on the number of assets in the
portfolio is delicate and time-consuming. In this paper, we propose a new
problem formulation of sparse index tracking using an ℓ0-norm constraint
that enables easy control of the upper bound on the number of assets in the
portfolio. In addition, our formulation allows the choice between portfolio
sparsity and turnover sparsity constraints, which also reduces transaction
costs by limiting the number of assets that are updated at each rebalancing.
Furthermore, we develop an efficient algorithm for solving this problem based
on a primal-dual splitting method. Finally, we illustrate the effectiveness of
the proposed method through experiments on the S&P 500 and NASDAQ 100 index
datasets. Comment: Submitted to IEEE Open Journal of Signal Processing.
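A minimal sketch of the idea behind an ℓ0 constraint follows. It uses projected gradient descent with a greedy top-k projection onto nonnegative, fully invested, k-sparse weights; this greedy projection and the synthetic data are assumptions for illustration, and the paper itself develops a primal-dual splitting method instead:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, k = 250, 30, 5                              # days, assets, sparsity budget
R = 0.001 + 0.02 * rng.standard_normal((T, N))    # synthetic asset returns
idx = R @ np.full(N, 1.0 / N)                     # synthetic index to track

def project_k_simplex(w, k):
    """Greedy approximation: keep the k largest entries, clip at 0, renormalize."""
    v = np.zeros_like(w)
    top = np.argpartition(w, -k)[-k:]
    v[top] = np.maximum(w[top], 0.0)
    s = v.sum()
    if s == 0.0:                                  # degenerate case: best single asset
        v[np.argmax(w)] = 1.0
        return v
    return v / s

w = np.full(N, 1.0 / N)
step = 1.0 / np.linalg.norm(R, 2) ** 2            # conservative step size
for _ in range(500):
    grad = -2.0 * R.T @ (idx - R @ w)             # gradient of ||idx - R w||^2
    w = project_k_simplex(w - step * grad, k)
print("assets held:", np.count_nonzero(w))
```

The appeal of the ℓ0 formulation described in the abstract is visible here: k directly caps the number of assets held, with no penalty parameter to tune.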
Sparse and stable Markowitz portfolios
We consider the problem of portfolio selection within the classical Markowitz mean-variance optimizing framework, which has served as the basis for modern portfolio theory for more than 50 years. Efforts to translate this theoretical foundation into a viable portfolio construction algorithm have been plagued by technical difficulties stemming from the instability of the original optimization problem with respect to the available data. Often, instabilities of this type disappear when a regularizing constraint or penalty term is incorporated in the optimization procedure. This approach seems not to have been used in portfolio design until very recently. To provide such a stabilization, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights. This penalty stabilizes the optimization problem, automatically encourages sparse portfolios, and facilitates an effective treatment of transaction costs. We implement our methodology using as our securities two sets of portfolios constructed by Fama and French: the 48 industry portfolios and 100 portfolios formed on size and book-to-market. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve portfolio comprising equal investments in each available asset. In addition to their excellent performance, these portfolios have only a small number of active positions, a desirable feature for small investors, for whom the fixed overhead portion of the transaction cost is not negligible. JEL Classification: G11, C00. Keywords: Penalized Regression, Portfolio Choice, Sparse Portfolio
Diversity and Sparsity: A New Perspective on Index Tracking
We address the problem of partial index tracking, replicating a benchmark
index using a small number of assets. Accurate tracking with a sparse portfolio
is extensively studied as a classic finance problem. However, in practice, a
tracking portfolio must also be diverse in order to minimise risk -- a
requirement which has previously been addressed only by ad-hoc methods. We
introduce the first index tracking method that explicitly optimises both
diversity and sparsity in a single joint framework. Diversity is realised by a
regulariser based on pairwise similarity of assets, and we demonstrate that
learning similarity from data can outperform some existing heuristics. Finally,
we show that the way we model diversity leads to an easy solution for sparsity,
allowing both constraints to be optimised easily and efficiently. We run
out-of-sample backtesting for a long interval of 15 years (2003 -- 2018), and
the results demonstrate the superiority of the proposed algorithm. Comment: Accepted to ICASSP 2020. 5 pages. This is the conference version of the
work; for the full version, please refer to arXiv:1809.01989v
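The joint objective, tracking error plus a pairwise-similarity diversity regulariser, can be sketched as follows. This is a simplified illustration on synthetic returns, not the authors' algorithm: the similarity matrix is taken as the sample correlation matrix (one of the heuristics the paper compares against learned similarity), and sparsity is not enforced explicitly.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, N = 250, 12                                    # days, candidate assets
R = 0.001 + 0.02 * rng.standard_normal((T, N))    # synthetic asset returns
idx = R.mean(axis=1)                              # synthetic benchmark index
S = np.corrcoef(R.T)                              # pairwise similarity (assumption)
lam = 0.5                                         # diversity weight (assumption)

# Minimizing w' S w spreads weight away from pairs of highly similar assets,
# which is the diversity idea; the first term is the usual tracking error.
def obj(w):
    te = idx - R @ w
    return te @ te / T + lam * (w @ S @ w)

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
res = minimize(obj, np.full(N, 1.0 / N), bounds=[(0, None)] * N,
               constraints=cons, method="SLSQP")
w = res.x
print("tracking + diversity objective:", round(obj(w), 6))
```

With lam = 0 this reduces to plain index tracking; increasing lam pushes the weights toward mutually dissimilar assets at some cost in tracking error.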
Beating the index with deep learning: a method for passive investing and systematic active investing
In index tracking, while full replication requires holding all the asset constituents of the index in the tracking portfolio, the sampling approach attempts to construct a tracking portfolio from a subset of assets. Sampling is therefore the approach of choice when flexibility and transaction costs are considered. Two problems must be solved to implement the sampling approach: asset selection and asset weighting. This study proposes a framework implemented in two stages: first selecting the assets and then determining the components' weights. The study uses a deep autoencoder model for stock selection, and then applies the L2 regularization technique to set up a quadratic programming problem that determines the investment weights of the stock components.
Since the tracking portfolio tends to underperform the market index after management costs are taken into account, a portfolio that can generate excess returns over the index (index beating) brings more competitive advantages to passive fund managers. Thus, the proposed framework attempts to construct a portfolio with a small number of stocks that can both follow market trends and generate excess returns over the market index.
The framework successfully constructed a portfolio of ten stocks that beat the S&P 500 index in any given one-year period at a justifiable risk level.
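The two-stage framework (select, then weight) can be sketched as follows. This is a hedged illustration: a linear PCA reconstruction stands in for the paper's deep autoencoder, the index and returns are synthetic, and the L2-regularized weighting is written as a closed-form ridge regression rather than a full quadratic program.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, k = 250, 20, 10                              # days, universe size, stocks to pick
R = 0.001 + 0.02 * rng.standard_normal((T, N))     # synthetic stock returns
idx = R.mean(axis=1)                               # synthetic market index

# Stage 1: selection. PCA reconstruction error as a linear stand-in for the
# deep autoencoder: stocks reconstructed well by the leading factors are the
# most "communal", i.e. most representative of the market.
Rc = R - R.mean(axis=0)
U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
recon = U[:, :3] * s[:3] @ Vt[:3]                  # rank-3 reconstruction
err = ((Rc - recon) ** 2).sum(axis=0)
picked = np.argsort(err)[:k]                       # k lowest reconstruction errors

# Stage 2: weighting. Ridge (L2-regularized) least squares of the index on the
# selected stocks, then normalize to fully invested weights.
X = R[:, picked]
lam = 1e-3                                         # ridge strength (assumption)
w = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ idx)
w /= w.sum()
print("selected stocks:", picked)
```

The L2 term keeps the weighting problem well-conditioned even when the selected stocks are highly correlated, which is the role it plays in the study's quadratic program.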
Rigorous optimization recipes for sparse and low rank inverse problems with applications in data sciences
Many natural and man-made signals can be described as having a few degrees of freedom relative to their size due to natural parameterizations or constraints; examples include bandlimited signals, collections of signals observed from multiple viewpoints in a network-of-sensors, and per-flow traffic measurements of the Internet. Low-dimensional models (LDMs) mathematically capture the inherent structure of such signals via combinatorial and geometric data models, such as sparsity, unions-of-subspaces, low-rankness, manifolds, and mixtures of factor analyzers, and are emerging to revolutionize the way we treat inverse problems (e.g., signal recovery, parameter estimation, or structure learning) from dimensionality-reduced or incomplete data. Assuming our problem resides in an LDM space, in this thesis we investigate how to integrate such models in convex and non-convex optimization algorithms for significant gains in computational complexity. We mostly focus on two LDMs: sparsity and low-rankness. We study trade-offs and their implications to develop efficient and provable optimization algorithms, and--more importantly--to exploit convex and combinatorial optimization that can enable cross-pollination of decades of research in both fields.
Regularization in econometrics and finance
This dissertation develops regularization methods for use in finance and econometrics problems. The key methodology introduced is utility-based selection (UBS) -- a procedure for inducing sparsity in statistical models and in practical problems that call for simple and parsimonious decisions.
The introduction section describes statistical model selection in light of the "big data hype" and desire to fit rich and complex models. Key emphasis is placed on the fundamental bias-variance tradeoff in statistics. The remaining portions of the introduction tie these notions into the components and procedure of UBS. This latter half frames model selection as a decision and develops the procedure using decision-theoretic principles.
The second chapter applies UBS to portfolio optimization. A dynamic portfolio construction framework is presented, and the asset returns are modeled using a Bayesian dynamic linear model. The focus here is constructing simple, or sparse, portfolios of passive funds. We consider a set of the most liquid exchange traded funds for our empirical analysis.
The third chapter discusses variable selection in seemingly unrelated regression models (SURs). UBS is applied in this context, where an analyst wants to find which subset of the p available predictors is most relevant for describing variation in q different responses. The selection procedure takes into account uncertainty in both the responses and predictors. It is applied to a popular problem in asset pricing -- discovering which factors (predictors) are relevant for pricing the cross section of asset returns (responses). We also discuss future work in monotonic function estimation and how UBS is applied in this context.
The fourth chapter considers regularization in treatment effect estimation using linear regression. It introduces "regularization-induced confounding" (RIC), a pitfall of employing naive regularization techniques for estimating a treatment effect from observational data. A new model parameterization is presented that mitigates RIC. Additionally, we discuss recent work that considers uncertainty characterization when model errors may vary by clusters of data. These developments employ empirical-Bayes and bootstrapping techniques.