    Optimization with multivariate conditional value-at-risk constraints

    For many decision-making problems under uncertainty, it is crucial to develop risk-averse models and specify the decision makers' risk preferences based on multiple stochastic performance measures (or criteria). Incorporating such multivariate preference rules into optimization models is a fairly recent research area. Existing studies focus on extending univariate stochastic dominance rules to the multivariate case. However, enforcing multivariate stochastic dominance constraints can often be overly conservative in practice. As an alternative, we focus on the widely applied risk measure conditional value-at-risk (CVaR), introduce a multivariate CVaR relation, and develop a novel optimization model with multivariate CVaR constraints based on polyhedral scalarization. To solve such problems for finite probability spaces, we develop a cut generation algorithm, where each cut is obtained by solving a mixed-integer problem. We show that a multivariate CVaR constraint reduces to finitely many univariate CVaR constraints, which proves the finite convergence of our algorithm. We also show that our results can be naturally extended to a wider class of coherent risk measures. The proposed approach provides a flexible and computationally tractable way of modeling preferences in stochastic multi-criteria decision making. We conduct a computational study for a budget allocation problem to illustrate the effect of enforcing multivariate CVaR constraints and demonstrate the computational performance of the proposed solution methods.
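    As a concrete aside (not part of the abstract): for a finite probability space, the univariate CVaR that the multivariate relation builds on can be evaluated directly from the Rockafellar-Uryasev representation CVaR_a(L) = min_t { t + E[(L - t)_+]/(1 - a) }, whose minimum is attained at one of the loss atoms when the distribution is discrete. The function below is an illustrative sketch; its name and interface are ours, not the paper's.

```python
import numpy as np

def cvar(losses, probs, alpha):
    """CVaR at level alpha for a finite probability space: the expected
    loss in the worst (1 - alpha) tail.  Evaluates the Rockafellar-Uryasev
    objective t + E[(L - t)_+] / (1 - alpha) at every loss atom, where
    the minimum is known to be attained for discrete distributions."""
    losses = np.asarray(losses, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return min(t + probs @ np.maximum(losses - t, 0.0) / (1.0 - alpha)
               for t in losses)
```

    For the uniform loss distribution on {0, 1, 2, 3}, cvar at level 0.75 returns 3.0, the mean of the worst 25% tail.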

    Portfolio selection models: A review and new directions

    Modern Portfolio Theory (MPT) is based upon the classical Markowitz model, which uses variance as a risk measure. A generalization of this approach leads to mean-risk models, in which a return distribution is characterized by the expected value of return (desired to be large) and a risk value (desired to be kept small). Portfolio choice is made by solving an optimization problem, in which the portfolio risk is minimized and a desired level of expected return is specified as a constraint. The need to penalize different undesirable aspects of the return distribution led to the proposal of alternative risk measures, notably those penalizing only the downside part (adverse) and not the upside (potential). The downside risk considerations constitute the basis of the Post Modern Portfolio Theory (PMPT). Examples of such risk measures are lower partial moments, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). We revisit these risk measures and the resulting mean-risk models. We discuss alternative models for portfolio selection, their choice criteria and the evolution of MPT to PMPT, which incorporates utility maximization and stochastic dominance.
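    To make the contrast between symmetric and downside risk measures concrete, here is a small illustrative computation (ours, not from the review) of variance, a second-order lower partial moment, and historical VaR/CVaR for an equiprobable return sample. The discrete VaR/CVaR conventions chosen here are one common choice among several.

```python
import numpy as np

def downside_measures(returns, target=0.0, alpha=0.95):
    """Risk measures for an equiprobable return sample: variance
    (symmetric, as in Markowitz), a second-order lower partial moment
    about `target` (penalizes downside only), and historical VaR/CVaR
    of the losses (negated returns) at confidence level `alpha`."""
    r = np.asarray(returns, dtype=float)
    variance = r.var()
    lpm2 = np.mean(np.maximum(target - r, 0.0) ** 2)
    losses = np.sort(-r)
    k = int(np.ceil(alpha * len(losses)))   # order statistic used as VaR
    var = losses[k - 1]
    cvar = losses[k - 1:].mean()
    return variance, lpm2, var, cvar
```

    Note that the lower partial moment ignores the favorable half of the distribution entirely, which is precisely the PMPT motivation discussed above.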

    Processing second-order stochastic dominance models using cutting-plane representations

    This is the post-print version of the article. The official published version can be accessed from the links below. Copyright @ 2011 Springer-Verlag. Second-order stochastic dominance (SSD) is widely recognised as an important decision criterion in portfolio selection. Unfortunately, stochastic dominance models are known to be very demanding from a computational point of view. In this paper we consider two classes of models which use SSD as a choice criterion. The first, proposed by Dentcheva and Ruszczyński (J Bank Finance 30:433–451, 2006), uses an SSD constraint, which can be expressed as integrated chance constraints (ICCs). The second, proposed by Roman et al. (Math Program, Ser B 108:541–569, 2006), uses SSD through a multi-objective formulation with CVaR objectives. Cutting plane representations and algorithms were proposed by Klein Haneveld and Van der Vlerk (Comput Manage Sci 3:245–269, 2006) for ICCs, and by Künzi-Bay and Mayer (Comput Manage Sci 3:3–27, 2006) for CVaR minimization. These concepts are taken into consideration to propose representations and solution methods for the above class of SSD-based models. We describe a cutting plane based solution algorithm and outline implementation details. A computational study is presented, which demonstrates the effectiveness and the scale-up properties of the solution algorithm, as applied to the SSD model of Roman et al. (Math Program, Ser B 108:541–569, 2006). This study was funded by OTKA, the Hungarian National Fund for Scientific Research, project 47340; by the Mobile Innovation Centre, Budapest University of Technology, project 2.2; by Optirisk Systems, Uxbridge, UK; and by BRIEF (Brunel University Research Innovation and Enterprise Fund).
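    The SSD constraint underlying the first model can be checked for discrete distributions via the shortfall characterization: R dominates B in second order iff E[(t - R)_+] <= E[(t - B)_+] for all t, and for a discrete benchmark it suffices to test t at the benchmark's own realizations. A minimal sketch (our code; equiprobable samples assumed):

```python
import numpy as np

def ssd_dominates(r, b):
    """True if equiprobable sample `r` dominates benchmark sample `b`
    in second-order stochastic dominance, via the shortfall test
    E[(t - R)_+] <= E[(t - B)_+], checked at the benchmark's atoms
    (sufficient when the benchmark distribution is discrete)."""
    r = np.asarray(r, dtype=float)
    b = np.asarray(b, dtype=float)
    return all(np.maximum(t - r, 0.0).mean() <= np.maximum(t - b, 0.0).mean() + 1e-12
               for t in b)
```

    Each shortfall inequality is exactly one integrated chance constraint, which is what makes the cutting-plane machinery cited above applicable.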

    Polyhedral Coherent Risk Measures, Portfolio Optimization and Investment Allocation Problems

    The class of polyhedral coherent risk measures that can be used in decision-making under uncertainty is studied. Properties of these measures and invariant operations are considered. Portfolio optimization problems on the return-risk ratio using these risk measures are analyzed. The developed mathematical technique allows large-scale portfolio problems to be solved by standard linear programming methods. As an example of applications, investment allocation problems under risk of catastrophic floods are considered.

    Inclusion of 9 mm Firearm Type Using Quantitative Class Characteristics

    The results of a data set of five models of 9 mm Luger caliber handguns suggest an algorithm such as the square of the Mahalanobis Distance as a step towards the creation of an electronic class characteristic database for cartridge cases to supplement the currently existing General Rifling Characteristic (GRC) database for bullets. The algorithm was validated using both hold-one-out cross-validation and an entirely independent set of ground-truth-known cartridge cases, which were blindly classified to mimic case work. A method for determining an objective threshold for inclusion onto a list for an investigator is proposed, and its effects are illustrated using the blind set of hypothetical case work. The algorithm relied upon quantitative measurements of class characteristics. Three firearms per model and ten test fires per firearm were used to inform the algorithm of the mean and variance of the measurements taken. The test fires used to inform the algorithm were a combination of physical test fires and previous IBIS entries where the ground truth model was known. Measurements were taken of images which were retained in a digital cloud filing system, where folders were used to organize test fires by the known donor model and firearm. Hold-one-out cross-validation was performed by withholding the measurements for a given test fire to serve as a questioned cartridge case and computing the Mahalanobis Distance to each model. A threshold cut-off distance for inclusion onto a list which would be provided to an investigator was calculated based upon the results of the hold-one-out cross-validation and upon the known-match Mahalanobis Distances following a central chi-square distribution. This threshold cut-off distance was used to guide decisions during the blind and independent classification of individual, physical cartridge cases.
    Each blind cartridge case was classified one at a time, independent of GRC bullet information, to mimic a crime scene where only one cartridge case is recovered. The blind set also included cartridge cases originating from models outside of the five considered by the algorithm or database, representative of real challenges experienced in case work.
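    As an illustrative sketch of the classification step described above (our code; the interface and the example numbers are not from the thesis): the squared Mahalanobis distance of a measurement vector to each model's mean, under that model's covariance, is compared against a chi-square quantile cutoff, since under a correct match the squared distance follows a central chi-square distribution with as many degrees of freedom as there are measurements.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of measurement vector x from a
    model's class-characteristic mean under its covariance."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(d @ np.linalg.solve(np.asarray(cov, dtype=float), d))

def inclusion_list(x, models, cutoff):
    """Models whose squared distance to x is within the cutoff, sorted
    nearest first.  Under a correct match the squared distance follows
    a central chi-square with len(x) degrees of freedom, so `cutoff`
    can be a chi-square quantile (about 5.99 for 2 d.o.f. at 95%)."""
    hits = [(mahalanobis_sq(x, mu, cov), name)
            for name, (mu, cov) in models.items()]
    return [name for d2, name in sorted(hits) if d2 <= cutoff]
```

    A questioned case whose measurements exceed the cutoff for every model would then be reported as excluded from the database's five models, consistent with the blind set containing out-of-database models.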

    Market Efficiency of Oil Spot and Futures: A Mean-Variance and Stochastic Dominance Approach

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent between investing in spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust in each sub-period before and after different crises, and also to portfolio diversification.
    Keywords: stochastic dominance; risk averter; oil futures market; market efficiency

    Three studies on risk measures : a focus on the comonotonic additivity property

    The theory of risk measures has grown enormously in the last twenty years. In particular, risk measures satisfying the axiom of comonotonic additivity were extensively studied, arguably because of the abundance of results indicating interesting aspects of such risk measures. Recent research, however, has shown that this axiom is incompatible with properties that are central in specific contexts. In this paper we present a literature review of these incompatibilities. As a secondary contribution, we show that the comonotonic additivity axiom conflicts with the property of excess invariance for risk measures and, in a milder form, with the property of surplus invariance for acceptance sets. An elementary fact in the theory of risk measures is that acceptance sets induce risk measures and vice versa. We present simple and yet general conditions on the acceptance sets under which their induced risk measures are comonotonic additive. With this result, we fill a gap in the literature linking the properties of acceptance sets and risk measures: we show that acceptance sets induce comonotonic additive risk measures if the acceptance sets and their complements are stable under convex combinations of comonotonic random variables. As an extension of our results, we obtain a set of axioms on acceptance sets that allows one to induce risk measures that are additive for a priori chosen classes of random variables. Examples of such classes that were previously considered in the literature are independent random variables, uncorrelated random variables, and notably, comonotonic random variables. Taking investment decisions requires managers to consider how the current portfolio would be affected by the inclusion of other assets. In particular, it is of interest to know if adding a given asset would increase or decrease the risk of the current portfolio. However, this addition may reduce or increase the risk, depending on the risk measure being used.
    Arguably, risk underestimation is a major concern to regulatory agencies, and possibly to the financial firms themselves. To provide a more decisive and conservative conclusion about the effect of an additional asset on the risk of the current portfolio, we propose to assess this effect through the family of monetary risk measures that are consistent with second-degree stochastic dominance (SSD-consistent risk measures). This criterion provides a tool to identify financial positions that reduce the risk of the current portfolio according to all monetary SSD-consistent risk measures. Also, this tool measures the smallest amount of money (the cost) necessary to turn the financial positions into risk reducers for the original portfolio. We characterize the cost of robust risk reduction through a monetary risk measure, a monetary acceptance set, the family of average values at risk, and through the infimum of the certainty equivalents of risk-averse agents with random initial wealth.
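    A small sketch of the cost-of-robust-risk-reduction idea, using the average value-at-risk family on an equiprobable scenario space (our code and our simplifications, assuming cash additivity and restricting to the natural levels k/n):

```python
import numpy as np

def avar_curve(position):
    """Average value-at-risk of an equiprobable position sample at the
    natural levels k/n: the mean of its k largest losses, k = 1..n."""
    losses = np.sort(-np.asarray(position, dtype=float))[::-1]
    return np.cumsum(losses) / np.arange(1, len(losses) + 1)

def robust_reduction_cost(x, y):
    """Smallest cash amount c such that the position y + c reduces the
    risk of x under every AVaR in the family, using cash additivity
    AVaR(Z + c) = AVaR(Z) - c.  x and y are scenario-wise samples on
    the same equiprobable space.  A negative result means y already
    reduces risk with room to spare."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.max(avar_curve(x + y) - avar_curve(x)))
```

    For instance, adding a riskless payoff of 0.5 to any position lowers every AVaR by 0.5, so the computed cost is -0.5.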

    New approaches to Risk Management and Scenario Approximation in Financial Optimization

    The first part of the thesis addresses the problem of risk management in financial optimization modeling. Motivation for constructing a new concept of risk measurement is given through the history of development: utility theory, the risk/return tradeoff, and coherent risk measures. The process of describing an investor's preferences is presented through the proposed collection of Rational Level Sets (RLS). Based on RLS, a new concept termed Rational Risk Measures (RRM) for financial optimization models is defined. The advantages of RRM over coherent risk measures are discussed. Approximation of a given set of scenarios using tail information is addressed in the second part of the thesis. Motivation for the scenario approximation problem, as a way of reducing computation time and preserving solution accuracy, is given through examples of financial optimization and asset allocation models. Using the basic ideas of Conditional Value at Risk (CVaR), this thesis develops a new methodology for scenario approximation for stochastic portfolio optimization. First, the concepts termed Scenarios-at-Risk (SaR) and Scenarios-at-Gain (SaG) are proposed for the purpose of partitioning the underlying multivariate domain for a fixed investment portfolio and a fixed probability level of CVaR. Then, under a given set of CVaR values, a two-stage method is developed for determining a smaller, discrete set of scenarios over which CVaR risk control is satisfied for all portfolios of interest. Convergence of the method is shown and numerical results are presented to validate the proposed technique.
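    One plausible reading of the SaR/SaG partition, sketched in code (ours, not from the thesis): for a fixed portfolio and a fixed probability level, the Scenarios-at-Risk are those whose portfolio loss reaches the corresponding historical VaR, and the remaining scenarios are the Scenarios-at-Gain.

```python
import numpy as np

def partition_scenarios(scenario_returns, weights, alpha):
    """For a fixed portfolio `weights`, split equiprobable scenario
    indices into a tail set (losses at or beyond the alpha-level
    historical VaR) and the remaining scenarios."""
    returns = np.asarray(scenario_returns, dtype=float)
    losses = -returns @ np.asarray(weights, dtype=float)
    cutoff = np.quantile(losses, alpha)
    scenarios_at_risk = [i for i, loss in enumerate(losses) if loss >= cutoff]
    scenarios_at_gain = [i for i, loss in enumerate(losses) if loss < cutoff]
    return scenarios_at_risk, scenarios_at_gain
```

    Retaining only the tail set for a collection of candidate portfolios is one way such a partition could shrink the scenario set while keeping the scenarios that determine CVaR.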