    Processing second-order stochastic dominance models using cutting-plane representations

    This is the post-print version of the article; the official published version can be accessed from the links below. Copyright @ 2011 Springer-Verlag. Second-order stochastic dominance (SSD) is widely recognised as an important decision criterion in portfolio selection. Unfortunately, stochastic dominance models are known to be very demanding from a computational point of view. In this paper we consider two classes of models which use SSD as a choice criterion. The first, proposed by Dentcheva and Ruszczyński (J Bank Finance 30:433–451, 2006), uses an SSD constraint, which can be expressed as integrated chance constraints (ICCs). The second, proposed by Roman et al. (Math Program, Ser B 108:541–569, 2006), uses SSD through a multi-objective formulation with CVaR objectives. Cutting-plane representations and algorithms were proposed by Klein Haneveld and Van der Vlerk (Comput Manage Sci 3:245–269, 2006) for ICCs, and by Künzi-Bay and Mayer (Comput Manage Sci 3:3–27, 2006) for CVaR minimization. We build on these concepts to propose representations and solution methods for the above classes of SSD-based models. We describe a cutting-plane based solution algorithm and outline implementation details. A computational study is presented which demonstrates the effectiveness and the scale-up properties of the solution algorithm, as applied to the SSD model of Roman et al. (Math Program, Ser B 108:541–569, 2006). This study was funded by OTKA, Hungarian National Fund for Scientific Research, project 47340; by Mobile Innovation Centre, Budapest University of Technology, project 2.2; by Optirisk Systems, Uxbridge, UK; and by BRIEF (Brunel University Research Innovation and Enterprise Fund).
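    For orientation only (these standard characterizations are not quoted from the paper), the SSD relation enforced by both model classes can be written equivalently in expected-utility, integrated-shortfall and tail form, which is what makes the integrated chance constraint and CVaR cutting-plane machinery applicable:

```latex
% Standard equivalent characterizations of R \succeq_{SSD} R' (R dominates R' to second order):
\begin{align*}
  \mathbb{E}[u(R)] &\ge \mathbb{E}[u(R')]
    && \text{for every nondecreasing concave utility } u, \\
  \mathbb{E}[(t-R)_+] &\le \mathbb{E}[(t-R')_+]
    && \text{for all targets } t \in \mathbb{R} \quad \text{(integrated chance constraint form)}, \\
  \mathrm{Tail}_{\alpha}(R) &\ge \mathrm{Tail}_{\alpha}(R')
    && \text{for all } \alpha \in (0,1],
\end{align*}
% where \mathrm{Tail}_\alpha(R) = \int_0^\alpha F_R^{-1}(p)\,dp is the cumulated quantile function;
% with CVaR of the loss -R taken at tail probability \alpha,
% \mathrm{Tail}_\alpha(R) = -\alpha\,\mathrm{CVaR}_\alpha(-R).
```

    For finitely many equally likely scenarios the tail condition only has to hold at the levels α = k/S, k = 1, …, S, which is the finite structure that cutting-plane formulations exploit.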

    Reduced Complexity Filtering with Stochastic Dominance Bounds: A Convex Optimization Approach

    This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte-Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
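    To illustrate only the complexity claim (the paper's bounding matrices are obtained by nuclear norm minimization with copositive constraints, which is not reproduced here), a minimal sketch of one HMM filter step shows how a rank-R factorization of the transition matrix cuts the per-step cost from O(X^2) to O(XR); the factors U and V below are hypothetical placeholders, not provably bounding matrices:

```python
import numpy as np

def filter_step(pi, apply_transition, likelihood):
    """One HMM filtering recursion: predict with the transition kernel,
    then correct with the observation likelihoods and renormalise."""
    predicted = apply_transition(pi)      # sum_i P[i, j] * pi[i] for every state j
    unnormalised = likelihood * predicted
    return unnormalised / unnormalised.sum()

X, R = 500, 5
rng = np.random.default_rng(0)

# Dense row-stochastic transition matrix: O(X^2) work per filter step.
P = rng.dirichlet(np.ones(X), size=X)
dense_apply = lambda pi: P.T @ pi

# Hypothetical rank-R surrogate P_low ~ U @ V.T: O(X R) work per filter step.
U = rng.random((X, R))
V = rng.random((X, R))
lowrank_apply = lambda pi: V @ (U.T @ pi)

pi0 = np.full(X, 1.0 / X)
b_y = rng.random(X)                        # b_y[j] = p(y_k | x_k = j) for the observed y_k
pi_dense = filter_step(pi0, dense_apply, b_y)
pi_low = filter_step(pi0, lowrank_apply, b_y)
```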

    Comparative Analyses of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk

    We compare expected shortfall and value-at-risk (VaR) in terms of consistency with expected utility maximization and elimination of tail risk. We use the concept of stochastic dominance in studying these two aspects of risk measures. We conclude that expected shortfall is more applicable than VaR in those two aspects. Expected shortfall is consistent with expected utility maximization and is free of tail risk, under more lenient conditions than VaR.
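    For reference (standard definitions, not reproduced from the paper), with L the loss and α ∈ (0,1) the confidence level:

```latex
\begin{align*}
  \mathrm{VaR}_\alpha(L) &= \inf\{\, \ell \in \mathbb{R} : \Pr(L \le \ell) \ge \alpha \,\}, \\
  \mathrm{ES}_\alpha(L)  &= \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_u(L)\, du
    \;=\; \mathbb{E}\!\left[\, L \mid L \ge \mathrm{VaR}_\alpha(L) \,\right]
    \quad \text{(the conditional form holds for continuous } L\text{)}.
\end{align*}
```

    Because ES averages all quantiles beyond the VaR level, it reacts to the magnitude of losses in the tail that VaR ignores, which is the tail-risk property the paper compares across the two measures.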

    Optimization with multivariate conditional value-at-risk constraints

    For many decision-making problems under uncertainty, it is crucial to develop risk-averse models and specify the decision makers' risk preferences based on multiple stochastic performance measures (or criteria). Incorporating such multivariate preference rules into optimization models is a fairly recent research area. Existing studies focus on extending univariate stochastic dominance rules to the multivariate case. However, enforcing multivariate stochastic dominance constraints can often be overly conservative in practice. As an alternative, we focus on the widely applied risk measure conditional value-at-risk (CVaR), introduce a multivariate CVaR relation, and develop a novel optimization model with multivariate CVaR constraints based on polyhedral scalarization. To solve such problems for finite probability spaces, we develop a cut generation algorithm, where each cut is obtained by solving a mixed-integer programming problem. We show that a multivariate CVaR constraint reduces to finitely many univariate CVaR constraints, which proves the finite convergence of our algorithm. We also show that our results can be naturally extended to a wider class of coherent risk measures. The proposed approach provides a flexible and computationally tractable way of modeling preferences in stochastic multi-criteria decision making. We conduct a computational study for a budget allocation problem to illustrate the effect of enforcing multivariate CVaR constraints and demonstrate the computational performance of the proposed solution methods.
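    As a sketch of the type of constraint involved (the conventions below are the usual ones for outcomes where larger values are preferred and may differ in detail from the paper), the benchmark relation compares CVaR of scalarized outcomes over a polyhedral scalarization set C:

```latex
% Hedged sketch: X is the decision-dependent outcome vector, Z the benchmark vector,
% C a polyhedral set of scalarization vectors (e.g. a subset of the unit simplex).
\[
  \mathrm{CVaR}_\alpha\!\left(c^{\top} X\right) \;\ge\; \mathrm{CVaR}_\alpha\!\left(c^{\top} Z\right)
  \qquad \text{for all } c \in C,
\]
% where, for a scalar random variable W with larger values preferred,
\[
  \mathrm{CVaR}_\alpha(W) \;=\; \max_{\eta \in \mathbb{R}}
  \left\{ \eta - \tfrac{1}{\alpha}\, \mathbb{E}\big[(\eta - W)_+\big] \right\}.
\]
```

    In a cut generation scheme, the separation step searches C for a scalarization vector c that violates this inequality at the current candidate solution; the reduction to finitely many univariate CVaR constraints mentioned in the abstract is what guarantees that only finitely many such cuts are ever needed.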

    Portfolio selection models: A review and new directions

    Modern Portfolio Theory (MPT) is based upon the classical Markowitz model, which uses variance as a risk measure. A generalization of this approach leads to mean-risk models, in which a return distribution is characterized by the expected value of return (desired to be large) and a risk value (desired to be kept small). Portfolio choice is made by solving an optimization problem, in which the portfolio risk is minimized and a desired level of expected return is specified as a constraint. The need to penalize different undesirable aspects of the return distribution led to the proposal of alternative risk measures, notably those penalizing only the downside part (adverse) and not the upside (potential). These downside-risk considerations constitute the basis of Post Modern Portfolio Theory (PMPT). Examples of such risk measures are lower partial moments, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). We revisit these risk measures and the resulting mean-risk models. We discuss alternative models for portfolio selection, their choice criteria, and the evolution of MPT into PMPT, which incorporates utility maximization and stochastic dominance.
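    To make the mean-risk template concrete (a minimal sketch with synthetic data, not a model taken from the review), the Rockafellar-Uryasev linearization turns a mean-CVaR portfolio problem over equally likely return scenarios into a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def mean_cvar_portfolio(returns, alpha=0.95, target_return=0.001):
    """Minimal mean-CVaR portfolio LP (Rockafellar-Uryasev linearization).

    returns: (S, n) array of equally likely return scenarios.
    Minimizes CVaR_alpha of the portfolio loss -r'w subject to a minimum
    expected return, full investment and no short selling.
    Decision vector: [w (n weights), eta (VaR auxiliary), u (S shortfalls)].
    """
    S, n = returns.shape
    # Objective: eta + 1/((1 - alpha) * S) * sum(u)
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])

    # Scenario cuts: -r_s'w - eta - u_s <= 0
    A_scen = np.hstack([-returns, -np.ones((S, 1)), -np.eye(S)])
    b_scen = np.zeros(S)
    # Expected-return constraint: -mean(r)'w <= -target_return
    A_ret = np.concatenate([-returns.mean(axis=0), [0.0], np.zeros(S)]).reshape(1, -1)
    b_ret = np.array([-target_return])

    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)  # sum(w) = 1
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

    res = linprog(c, A_ub=np.vstack([A_scen, A_ret]),
                  b_ub=np.concatenate([b_scen, b_ret]),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    return res.x[:n], res.fun  # optimal weights and optimal CVaR of the loss

rng = np.random.default_rng(1)
scenarios = rng.normal(0.0005, 0.01, size=(500, 4))   # synthetic daily return scenarios
weights, cvar = mean_cvar_portfolio(scenarios)
```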

    Resolution of the stochastic strategy spatial prisoner's dilemma by means of particle swarm optimization

    We study the evolution of cooperation among selfish individuals in the stochastic strategy spatial prisoner's dilemma game. We equip players with the particle swarm optimization technique, and find that it may lead to highly cooperative states even if the temptations to defect are strong. The concept of particle swarm optimization was originally introduced within a simple model of social dynamics that can describe the formation of a swarm, analogous to a swarm of bees searching for a food source. Essentially, particle swarm optimization foresees changes in the velocity profile of each player, such that the best locations are targeted and eventually occupied. In our case, each player keeps track of the highest payoff attained within a local topological neighborhood and its individual highest payoff. Thus, players make use of their own memory, which records the most profitable strategy from previous actions, as well as of the knowledge gained by the swarm as a whole, to find the best available strategy for themselves and the society. Following extensive simulations of this setup, we find a significant increase in the level of cooperation for a wide range of parameters, and also a full resolution of the prisoner's dilemma. We also demonstrate the extreme efficiency of the optimization algorithm when dealing with environments that strongly favor the proliferation of defection, which in turn suggests that swarming could be an important phenomenon by means of which cooperation can be sustained even under highly unfavorable conditions. We thus present an alternative way of understanding the evolution of cooperative behavior and its ubiquitous presence in nature, and we hope that this study will be inspirational for future efforts aimed in this direction. Comment: 12 pages, 4 figures; accepted for publication in PLoS ONE.
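    As an illustration of the update rule sketched above (a minimal sketch with hypothetical parameter values, not the authors' exact implementation), each player's cooperation probability can be moved by the standard particle swarm velocity update toward its own best-remembered strategy and the best strategy seen in its local neighborhood:

```python
import numpy as np

def pso_strategy_update(x, v, personal_best, neighborhood_best,
                        inertia=0.7, c_personal=1.5, c_social=1.5, rng=None):
    """Standard PSO velocity/position update applied to mixed strategies.

    x: current cooperation probabilities of all players (values in [0, 1])
    v: current 'velocities' (strategy adjustments)
    personal_best: strategy at which each player earned its highest payoff so far
    neighborhood_best: best-performing strategy observed in each player's neighborhood
    All parameter values are illustrative, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = (inertia * v
         + c_personal * r1 * (personal_best - x)     # pull toward own best memory
         + c_social * r2 * (neighborhood_best - x))  # pull toward the swarm's local best
    x = np.clip(x + v, 0.0, 1.0)                     # strategies stay valid probabilities
    return x, v
```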