10 research outputs found

    RM-CVaR: Regularized Multiple β-CVaR Portfolio

    Full text link
    The problem of finding the optimal portfolio for investors is called the portfolio optimization problem. Such a problem mainly concerns the expectation and variability of return (i.e., mean and variance). Although the variance is arguably the most fundamental risk measure to be minimized, it has several drawbacks. Conditional Value-at-Risk (CVaR) is a relatively new risk measure that addresses some of the shortcomings of well-known variance-related risk measures, and it has gained popularity because of its computational efficiency. CVaR is defined as the expected value of the loss that occurs beyond a certain probability level (β). However, portfolio optimization problems that use CVaR as a risk measure are formulated with a single β and may output significantly different portfolios depending on how β is selected. We confirm that even small changes in β can result in large changes in the whole portfolio structure. To address this problem, we propose RM-CVaR: Regularized Multiple β-CVaR Portfolio. We perform experiments on well-known benchmarks to evaluate the proposed portfolio. Compared with various portfolios, RM-CVaR demonstrates superior performance, with both higher risk-adjusted returns and lower maximum drawdown. Comment: accepted by the IJCAI-PRICAI 2020 Special Track AI in FinTech.
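    For context, the standard single-β CVaR portfolio problem that this line of work builds on (the Rockafellar–Uryasev scenario formulation; the paper's specific multiple-β objective and regularizer are not reproduced here) can be written as a linear program over return scenarios r_1, …, r_S with portfolio weights w and auxiliary variables α, z_s:

        \[
        \min_{w,\,\alpha,\,z}\;\; \alpha + \frac{1}{(1-\beta)\,S}\sum_{s=1}^{S} z_s
        \quad \text{s.t.} \quad
        z_s \ge -w^\top r_s - \alpha,\;\; z_s \ge 0,\;\; \mathbf{1}^\top w = 1,\;\; w \ge 0.
        \]

    For fixed w, the inner minimum over α and z equals CVaR_β of the portfolio loss −wᵀr; sweeping β traces out the family of portfolios whose instability the paper addresses.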

    Machine Learning and Portfolio Optimization

    Get PDF
    The portfolio optimization model has limited impact in practice due to estimation issues when applied to real data. To address this, we adapt two machine learning methods, regularization and cross-validation, to portfolio optimization. First, we introduce performance-based regularization (PBR), where the idea is to constrain the sample variances of the estimated portfolio risk and return, which steers the solution towards one associated with less estimation error in the performance. We consider PBR for both mean-variance and mean-CVaR problems. For the mean-variance problem, PBR introduces a quartic polynomial constraint, for which we make two convex approximations: one based on a rank-1 approximation and another based on a convex quadratic approximation. The rank-1 approximation PBR adds a bias to the optimal allocation, and the convex quadratic approximation PBR shrinks the sample covariance matrix. For the mean-CVaR problem, the PBR model is a combinatorial optimization problem, but we prove that its convex relaxation, a QCQP, is essentially tight. We show that the PBR models can be cast as robust optimization problems with novel uncertainty sets, and we establish asymptotic optimality of both Sample Average Approximation (SAA) and PBR solutions and of the corresponding efficient frontiers. To calibrate the right-hand sides of the PBR constraints, we develop new, performance-based k-fold cross-validation algorithms. Using these algorithms, we carry out an extensive empirical investigation of PBR against SAA, as well as L1 and L2 regularizations and the equally weighted portfolio. We find that PBR dominates all other benchmarks on two of the three Fama-French data sets.
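    As a minimal sketch of the regularization-plus-cross-validation idea (using a plain L2/ridge shrinkage of the minimum-variance problem rather than the paper's PBR constraints, and a hypothetical held-out-variance criterion rather than its performance-based k-fold scheme):

        import numpy as np

        def ridge_min_variance(returns, lam):
            """Minimum-variance weights with an L2 (ridge) shrinkage of the sample covariance."""
            n = returns.shape[1]
            sigma = np.cov(returns, rowvar=False) + lam * np.eye(n)  # shrunk covariance estimate
            w = np.linalg.solve(sigma, np.ones(n))                   # proportional to Sigma^{-1} 1
            return w / w.sum()                                       # normalize weights to sum to one

        def kfold_calibrate(returns, lambdas, k=5):
            """Pick the penalty whose weights give the lowest held-out variance (illustrative criterion)."""
            folds = np.array_split(np.arange(len(returns)), k)
            scores = []
            for lam in lambdas:
                oos_var = []
                for fold in folds:
                    train = np.delete(np.arange(len(returns)), fold)
                    w = ridge_min_variance(returns[train], lam)
                    oos_var.append(np.var(returns[fold] @ w))
                scores.append(np.mean(oos_var))
            return lambdas[int(np.argmin(scores))]

        # Toy example with simulated returns (T observations, n assets).
        rng = np.random.default_rng(0)
        R = rng.normal(0.0005, 0.01, size=(500, 10))
        best_lam = kfold_calibrate(R, lambdas=[1e-4, 1e-3, 1e-2, 1e-1])
        w_star = ridge_min_variance(R, best_lam)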

    Uncertainty Propagation and Dynamic Robust Risk Measures

    Full text link
    We introduce a framework for quantifying the propagation of uncertainty arising in a dynamic setting. Specifically, we define dynamic uncertainty sets designed explicitly for discrete stochastic processes over a finite time horizon. These dynamic uncertainty sets capture the uncertainty surrounding stochastic processes and models, accounting for factors such as distributional ambiguity. Examples of uncertainty sets include those induced by the Wasserstein distance and f-divergences. We further define dynamic robust risk measures as the supremum of all candidates' risks within the uncertainty set. In an axiomatic way, we discuss conditions on the uncertainty sets that lead to well-known properties of dynamic robust risk measures, such as convexity and coherence. Furthermore, we discuss necessary and sufficient properties of dynamic uncertainty sets that lead to time-consistency of robust dynamic risk measures. We find that uncertainty sets stemming from f-divergences lead to strong time-consistency, while the Wasserstein distance results in a new notion of non-normalised time-consistency. Moreover, we show that a dynamic robust risk measure is strong or non-normalised time-consistent if and only if it admits a recursive representation in terms of one-step conditional robust risk measures arising from static uncertainty sets.
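    Schematically (the symbols below are placeholders rather than the paper's exact notation), the dynamic robust risk measure at time t is the worst-case risk over the time-t uncertainty set of candidate models, e.g. a Wasserstein ball of radius ε around a reference model P:

        \[
        R_t(X) \;=\; \sup_{Q \,\in\, \mathcal{U}_t} \rho_t^{Q}(X),
        \qquad
        \mathcal{U}_t \;=\; \{\, Q \;:\; W(Q, P) \le \varepsilon \,\},
        \]

    where ρ_t^Q is a conditional risk measure evaluated under the candidate model Q; f-divergence uncertainty sets are obtained by replacing W with the chosen divergence D_f.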

    Technical note: a robust perspective on transaction costs in portfolio optimization

    Get PDF
    We prove that the portfolio problem with transaction costs is equivalent to three different problems designed to alleviate the impact of estimation error: a robust portfolio optimization problem, a regularized regression problem, and a Bayesian portfolio problem. Motivated by these results, we propose a data-driven approach to portfolio optimization that tackles transaction costs and estimation error simultaneously by treating the transaction costs as a regularization term to be calibrated. Our empirical results demonstrate that the data-driven portfolios perform favorably because they strike an optimal trade-off between rebalancing the portfolio to capture the information in recent historical return data and avoiding the large transaction costs and impact of estimation error associated with excessive trading.
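    A minimal sketch of treating transaction costs as a calibrated regularizer (a generic single-period mean-variance rebalancing form, not the paper's exact model; the names rebalance, gamma, and lam are illustrative, with lam playing the dual role of proportional transaction cost and regularization strength):

        import cvxpy as cp
        import numpy as np

        def rebalance(mu, Sigma, w_prev, gamma=5.0, lam=0.001):
            """Mean-variance rebalancing with an L1 transaction-cost penalty."""
            n = len(mu)
            w = cp.Variable(n)
            objective = cp.Maximize(
                mu @ w
                - gamma * cp.quad_form(w, Sigma)   # risk-aversion term
                - lam * cp.norm(w - w_prev, 1)     # turnover / transaction-cost penalty
            )
            problem = cp.Problem(objective, [cp.sum(w) == 1])
            problem.solve()
            return w.value

        # Toy example: three assets, starting from an equally weighted portfolio.
        mu = np.array([0.05, 0.06, 0.04])
        Sigma = np.diag([0.04, 0.09, 0.02])
        w_new = rebalance(mu, Sigma, w_prev=np.ones(3) / 3)

    Larger values of lam suppress trading (the portfolio stays near w_prev), while lam close to zero rebalances aggressively toward the sample-optimal weights; calibrating lam from data is what links the transaction-cost and regularization views.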

    Mean-Covariance Robust Risk Measurement

    Full text link
    We introduce a universal framework for mean-covariance robust risk measurement and portfolio optimization. We model uncertainty in terms of the Gelbrich distance on the mean-covariance space, along with prior structural information about the population distribution. Our approach is related to the theory of optimal transport and exhibits superior statistical and computational properties compared with existing models. We find that, for a large class of risk measures, mean-covariance robust portfolio optimization boils down to the Markowitz model subject to a regularization term given in closed form. This class includes the finance-industry standards value-at-risk and conditional value-at-risk, and the resulting problems can be solved highly efficiently.
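    For reference (a standard definition, not quoted from the paper), the Gelbrich distance between two mean-covariance pairs (μ₁, Σ₁) and (μ₂, Σ₂) is

        \[
        \mathrm{G}\big((\mu_1,\Sigma_1),(\mu_2,\Sigma_2)\big)
        = \sqrt{\;\|\mu_1-\mu_2\|_2^2
        + \operatorname{tr}\!\Big(\Sigma_1+\Sigma_2-2\big(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2}\big)^{1/2}\Big)\;},
        \]

    which coincides with the 2-Wasserstein distance between Gaussian distributions with those moments; the paper models ambiguity via this distance on the mean-covariance space.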