
    Robust mean-to-CVaR optimization under ambiguity in distribution means and covariance

    We develop a robust mean-to-CVaR portfolio optimization model under interval ambiguity in the means and covariance of returns. The robust model satisfies second-order stochastic dominance consistency and is formulated as a semidefinite cone program. We use two controlled experiments to document the sensitivity of the optimal allocations to ambiguity as asset correlation varies and as the ambiguity intervals change. We find that ambiguity in the means has a greater impact than ambiguity in the covariance. We apply the model to US equity data and corroborate work showing that ambiguity in mean returns induces a home bias; it can explain the puzzle in a two-country setting but not with three countries. We further establish that covariance ambiguity also induces a bias, but with a lower impact that cannot explain the puzzle. Our results suggest what is needed for the ambiguity channel to provide a full explanation of the puzzle. The findings are robust to alternative model specifications and to outliers.
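The nominal mean-to-CVaR problem underlying such robust models is commonly solved via the Rockafellar–Uryasev linear-programming reformulation. A minimal sketch with synthetic scenario returns (no ambiguity sets, all parameter values illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(500, 4))  # 500 scenario returns, 4 assets
beta = 0.95                                 # CVaR confidence level
m, n = R.shape

# Decision vector x = [w (n weights), t (VaR proxy), u (m tail slacks)]
c = np.concatenate([np.zeros(n), [1.0], np.full(m, 1.0 / ((1 - beta) * m))])

# Tail constraints: u_i >= -R_i w - t  <=>  -R_i w - t - u_i <= 0
A_ub = np.hstack([-R, -np.ones((m, 1)), -np.eye(m)])
b_ub = np.zeros(m)

# Budget constraint: weights sum to one
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(m)]).reshape(1, -1)
b_eq = np.array([1.0])

bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w = res.x[:n]  # minimum-CVaR long-only portfolio
```

Interval ambiguity in the means and covariance would be layered on top of this nominal program, as in the semidefinite formulation the abstract describes.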

    Theory and Applications of Robust Optimization

    In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.

    Distributionally Robust Optimization: A Review

    The concepts of risk aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade. The statistical learning community has also witnessed rapid theoretical and applied growth by relying on these concepts. A modeling framework called distributionally robust optimization (DRO) has recently received significant attention in both the operations research and statistical learning communities. This paper surveys the main concepts of and contributions to DRO, and its relationships with robust optimization, risk aversion, chance-constrained optimization, and function regularization.

    Mean-Covariance Robust Risk Measurement

    We introduce a universal framework for mean-covariance robust risk measurement and portfolio optimization. We model uncertainty in terms of the Gelbrich distance on the mean-covariance space, along with prior structural information about the population distribution. Our approach is related to the theory of optimal transport and exhibits superior statistical and computational properties compared with existing models. We find that, for a large class of risk measures, mean-covariance robust portfolio optimization boils down to the Markowitz model, subject to a regularization term given in closed form. This class includes the financial industry standards value-at-risk and conditional value-at-risk, and the resulting problems can be solved highly efficiently.
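The reduction described here, a nominal Markowitz objective plus a closed-form regularizer, can be illustrated with a toy sketch. The specific penalty eps * ||w||_2 added to the nominal risk term is an assumption chosen for illustration; the paper derives the exact regularizer for each risk measure:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inputs (not from the paper)
mu = np.array([0.05, 0.07, 0.06])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
eps = 0.1    # ambiguity radius (assumed)
gamma = 1.0  # risk aversion

def robust_objective(w):
    # Nominal Markowitz risk plus an illustrative norm regularizer
    nominal_risk = np.sqrt(w @ Sigma @ w)
    return -mu @ w + gamma * (nominal_risk + eps * np.linalg.norm(w))

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
bnds = [(0.0, 1.0)] * 3
res = minimize(robust_objective, np.full(3, 1 / 3), bounds=bnds, constraints=cons)
w_robust = res.x
```

Larger eps shrinks the allocation toward more diversified, lower-norm portfolios, which is the regularizing effect of the ambiguity radius.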

    Automatic Data Processing System of Constructing an Optimal Mean/Value-at-Risk Portfolio

    We propose a computer-based automatic system for processing share prices to construct an optimal mean/Value-at-Risk portfolio with a mixed-integer linear programming algorithm based on the Benati–Rizzi method. Investigating the impact of the Value-at-Risk measure on the size of total capital and on the shares in the optimal risky portfolio is necessary for revising the classical Markowitz approach and adapting it to modern requirements in the banking and financial sectors. With the classical approach it is impossible to construct a portfolio when structural changes occur in the stock market, that is, when a long fall in prices is replaced by steady growth. Our work is devoted to the construction of a risky portfolio using the Value-at-Risk measure. Within this investigation, two portfolios are constructed, one according to the classical Markowitz algorithm and one according to the Benati–Rizzi algorithm. The sample alpha and beta coefficients are estimated, and the riskiness and profitability of passive portfolio investments are calculated. The returns and values of such portfolios of shares included in the Moscow MICEX-10 index are compared.
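The Value-at-Risk measure that drives both portfolio constructions can be estimated empirically from scenario returns. A minimal sketch (the data and the 95% level are illustrative, not from the paper):

```python
import numpy as np

def historical_var(returns, weights, alpha=0.95):
    """Empirical VaR: the loss level exceeded with probability 1 - alpha."""
    port = returns @ weights            # scenario portfolio returns
    return -np.quantile(port, 1 - alpha)

rng = np.random.default_rng(1)
R = rng.normal(0.0005, 0.015, size=(1000, 5))  # synthetic daily returns
equal = np.full(5, 0.2)
concentrated = np.array([1.0, 0.0, 0.0, 0.0, 0.0])

var_equal = historical_var(R, equal)
var_conc = historical_var(R, concentrated)
```

With independent synthetic assets, the diversified portfolio shows the lower empirical VaR; the Benati–Rizzi formulation instead embeds this quantile measure directly in a mixed-integer program via scenario indicator variables.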

    Survey of quantitative investment strategies in the Russian stock market : Special interest in tactical asset allocation

    Russia’s financial markets have been an uncharted area when it comes to exploring the performance of investment strategies based on modern portfolio theory. In this thesis, we focus on the country’s stock market and study whether profitable investments can be made while at the same time taking uncertainties, risks, and dependencies into account. We also pay particular attention to tactical asset allocation. The benefit of this approach is that we can utilize time series forecasting methods to produce trading signals in addition to optimization methods. We use two datasets in our empirical applications. The first consists of nine sectoral indices covering the period from 2008 to 2017, and the other includes 42 stocks listed on the Moscow Exchange covering the years 2011–2017. The strategies considered have been divided into five sections. In the first part, we study classical and robust mean-risk portfolios and the modeling of transaction costs. We find that the expected return should be maximized per unit of expected shortfall while simultaneously requiring that each asset contributes equally to the portfolio’s tail risk. Secondly, we show that using robust covariance estimators can improve the risk-adjusted returns of minimum variance portfolios. Thirdly, we note that robust optimization techniques are best suited for conservative investors due to the low-volatility allocations they produce. In the second part, we employ statistical factor models to estimate higher-order comoments and demonstrate the benefit of the proposed method in constructing risk-optimal and expected utility-maximizing portfolios. In the third part, we utilize the Almgren–Chriss framework and sort the expected returns according to the assumed momentum anomaly. We discover that this method produces stable allocations that perform exceptionally well in market upturns.
In the fourth part, we show that forecasts produced by VECM and GARCH models can be used profitably in optimizations based on the Black–Litterman, copula opinion pooling, and entropy pooling models. In the final part, we develop a wealth protection strategy capable of timing market changes thanks to return predictions based on an ARIMA model. It can therefore be stated that it has been possible to make safe and profitable investments in the Russian stock market even when reasonable transaction costs are taken into account. We also argue that market inefficiencies could have been exploited by structuring statistical arbitrage and other tactical asset allocation strategies.
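The first-part finding (maximize return per unit of expected shortfall while equalizing tail-risk contributions) rests on the Euler decomposition of expected shortfall, under which per-asset contributions sum to the portfolio measure. A minimal empirical sketch with synthetic data and illustrative function names:

```python
import numpy as np

def expected_shortfall(port_returns, alpha=0.95):
    """Mean loss in the worst (1 - alpha) tail of the empirical distribution."""
    losses = -np.asarray(port_returns)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def es_contributions(R, w, alpha=0.95):
    """Euler decomposition: asset i's contribution is w_i times its average
    loss over the portfolio's tail scenarios; contributions sum to the ES."""
    losses = -(R @ w)
    tail = losses >= np.quantile(losses, alpha)
    return w * (-R[tail]).mean(axis=0)

rng = np.random.default_rng(2)
R = rng.normal(0.0, 0.01, size=(2000, 3))  # synthetic scenario returns
w = np.array([0.5, 0.3, 0.2])
es = expected_shortfall(R @ w)
contrib = es_contributions(R, w)
```

An equal-risk-contribution portfolio would be found by adjusting `w` until the entries of `contrib` are equal, which is the condition the thesis imposes alongside return-per-shortfall maximization.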

    Large-scale optimization under uncertainty: applications to logistics and healthcare

    Many decision-making problems in real life are affected by uncertainty. The area of optimization under uncertainty has been studied widely and deeply for over sixty years, and it continues to be an active area of research. The overall aim of this thesis is to contribute to the literature by developing (i) theoretical models that reflect problem settings closer to real life than previously considered in the literature, as well as (ii) solution techniques that are scalable. The thesis focuses on two particular applications to this end: the vehicle routing problem and the problem of patient scheduling in a healthcare system. The first part of this thesis studies the vehicle routing problem, which asks for a cost-optimal delivery of goods to geographically dispersed customers. The probability distribution governing the customer demands is assumed to be unknown throughout this study. This assumption positions the study in the domain of distributionally robust optimization, which has a well-developed literature but had so far not been extensively studied in the context of the capacitated vehicle routing problem. The study develops theoretical frameworks that allow for a tractable solution of such problems in the context of risk-averse optimization. The overall aim is to create a model that practitioners can use to solve problems specific to their requirements with minimal adaptations. The second part of this thesis focuses on the problem of scheduling elective patients within the available resources of a healthcare system so as to minimize overall years of life lost. This problem has been well studied for a long time. The large scale of a healthcare system, coupled with the inherent uncertainty affecting the evolution of a patient, makes this a particularly difficult problem. The aim of this study is to develop a scalable optimization model that allows for an efficient solution while at the same time enabling a flexible modelling of each patient in the system. 
This is achieved through a fluid approximation of the weakly coupled counting dynamic program that arises from modeling each patient in the healthcare system as a dynamic program with states, actions, transition probabilities, and rewards reflecting the condition, treatment options, and evolution of a given patient. A case study for the National Health Service in England highlights the usefulness of the prioritization scheme obtained by applying the methodology developed in this study.
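The per-patient dynamic programs that the fluid approximation couples can each be solved by standard value iteration. A toy sketch with a hypothetical three-state patient model; the states, transition probabilities, and rewards are invented for illustration, not taken from the thesis:

```python
import numpy as np

# Hypothetical patient MDP: 3 condition states (0 = mild ... 2 = severe),
# 2 actions (0 = wait, 1 = treat). Numbers are purely illustrative.
gamma = 0.95  # discount factor

# P[a, s, s']: transition probabilities under each action
P = np.array([
    [[0.90, 0.10, 0.00], [0.00, 0.80, 0.20], [0.00, 0.00, 1.00]],  # wait
    [[0.95, 0.05, 0.00], [0.30, 0.60, 0.10], [0.00, 0.10, 0.90]],  # treat
])
# r[a, s]: reward = negative expected life-years lost in that state
r = np.array([[0.0, -1.0, -5.0], [-0.2, -0.8, -4.0]])

V = np.zeros(3)
for _ in range(1000):  # value iteration
    Q = r + gamma * (P @ V)   # action values, shape (n_actions, n_states)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new
policy = Q.argmax(axis=0)     # greedy action per state
```

In the weakly coupled setting, one such program per patient is tied together by shared capacity constraints; the fluid relaxation replaces the counts of patients in each state with continuous quantities to keep the overall problem scalable.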