
    Optimization with multivariate conditional value-at-risk constraints

    For many decision-making problems under uncertainty, it is crucial to develop risk-averse models and specify the decision makers' risk preferences based on multiple stochastic performance measures (or criteria). Incorporating such multivariate preference rules into optimization models is a fairly recent research area. Existing studies focus on extending univariate stochastic dominance rules to the multivariate case. However, enforcing multivariate stochastic dominance constraints can often be overly conservative in practice. As an alternative, we focus on the widely applied risk measure conditional value-at-risk (CVaR), introduce a multivariate CVaR relation, and develop a novel optimization model with multivariate CVaR constraints based on polyhedral scalarization. To solve such problems for finite probability spaces, we develop a cut generation algorithm, where each cut is obtained by solving a mixed-integer problem. We show that a multivariate CVaR constraint reduces to finitely many univariate CVaR constraints, which proves the finite convergence of our algorithm. We also show that our results can be naturally extended to a wider class of coherent risk measures. The proposed approach provides a flexible and computationally tractable way of modeling preferences in stochastic multi-criteria decision making. We conduct a computational study for a budget allocation problem to illustrate the effect of enforcing multivariate CVaR constraints and to demonstrate the computational performance of the proposed solution methods.
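
    The univariate building block referenced above can be made concrete with a short sketch. The following Python snippet computes VaR and CVaR for a discrete loss distribution via the Rockafellar-Uryasev representation; it is a minimal illustration with made-up numbers, not the paper's cut generation algorithm or its multivariate constraint.

        import numpy as np

        def cvar(losses, probs, alpha):
            """CVaR of a discrete loss distribution at confidence level alpha.

            Uses CVaR_alpha(L) = VaR_alpha(L) + E[(L - VaR_alpha(L))^+] / (1 - alpha),
            which holds for the Rockafellar-Uryasev definition on finite spaces.
            """
            losses = np.asarray(losses, dtype=float)
            probs = np.asarray(probs, dtype=float)
            order = np.argsort(losses)                  # sort outcomes by loss
            losses, probs = losses[order], probs[order]
            cum = np.cumsum(probs)
            var = losses[np.searchsorted(cum, alpha)]   # lower alpha-quantile = VaR
            excess = np.maximum(losses - var, 0.0)
            return var + np.dot(probs, excess) / (1.0 - alpha)

        # Example: four equally likely losses, 75% confidence level -> CVaR = 10
        print(cvar([1.0, 2.0, 5.0, 10.0], [0.25] * 4, 0.75))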

    A polynomial-time algorithm for optimizing over N-fold 4-block decomposable integer programs

    In this paper we generalize N-fold integer programs and two-stage integer programs with N scenarios to N-fold 4-block decomposable integer programs. We show that, for fixed blocks but variable N, these integer programs are polynomial-time solvable for any linear objective. Moreover, we present a polynomial-time computable optimality certificate for the case of fixed blocks, variable N, and any convex separable objective function. We conclude with two sample applications, stochastic integer programs with second-order dominance constraints and stochastic integer multi-commodity flows, which (for fixed blocks) can be solved in polynomial time in the number of scenarios and commodities and in the binary encoding length of the input data. In the proof of our main theorem we combine several non-trivial constructions from the theory of Graver bases. We are confident that our approach paves the way for further extensions.
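
    For readers who have not seen the structure, the constraint matrix of an N-fold 4-block decomposable integer program is built from four fixed blocks A, B, C, D, with A and B repeated N times; the sketch below follows the standard shape and is not taken verbatim from the paper's notation.

        \begin{pmatrix}
            C & D & D & \cdots & D \\
            B & A & 0 & \cdots & 0 \\
            B & 0 & A & \cdots & 0 \\
            \vdots & & & \ddots & \vdots \\
            B & 0 & 0 & \cdots & A
        \end{pmatrix}

    Dropping the B and C blocks recovers an ordinary N-fold integer program, while dropping the C and D blocks recovers a two-stage stochastic integer program with N scenarios, which is why both are special cases of the generalization above.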

    Stochastic Dominance Efficiency Tests under Diversification

    This paper focuses on Stochastic Dominance (SD) efficiency in finite empirical panel data. We analytically characterize the sets of unsorted time series that dominate a given evaluated distribution by First, Second, and Third order SD. Using these insights, we develop simple Linear Programming and 0-1 Mixed Integer Linear Programming tests of SD efficiency. The advantage over earlier efficiency tests is that the proposed approach explicitly accounts for diversification. Allowing for diversification can both improve the power of empirical SD tests and enable SD-based portfolio optimization. A simple numerical example illustrates the SD efficiency tests. A discussion of the application potential and future research directions concludes. Keywords: Stochastic Dominance, Portfolio Choice, Efficiency, Diversification, Mathematical Programming.
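
    A pairwise version of the dominance check underlying such tests is easy to state for equiprobable samples: one return series dominates another by Second order SD exactly when every partial sum of its sorted returns is at least as large. The Python sketch below implements only this pairwise check on illustrative numbers; the paper's LP/MILP tests over diversified portfolios are more general.

        import numpy as np

        def ssd_dominates(x, y):
            """Second-order stochastic dominance of sample x over sample y,
            assuming both consist of the same number of equally likely outcomes."""
            x = np.sort(np.asarray(x, dtype=float))
            y = np.sort(np.asarray(y, dtype=float))
            assert x.shape == y.shape, "samples must have the same length"
            # Partial sums of sorted returns must be pointwise at least as large.
            return bool(np.all(np.cumsum(x) >= np.cumsum(y)))

        # Same mean, less downside risk -> dominance holds
        print(ssd_dominates([0.02, 0.03, 0.05], [-0.01, 0.03, 0.08]))  # True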

    Have Econometric Analyses of Happiness Data Been Futile? A Simple Truth About Happiness Scales

    Econometric analyses in the happiness literature typically use subjective well-being (SWB) data to compare the mean of observed or latent happiness across samples. Recent critiques show that comparing the mean of ordinal data is only valid under strong assumptions that are usually rejected by SWB data. This raises the open question of whether much of the empirical work in the economics of happiness literature has been futile. In order to salvage some of the prior results and avoid future issues, we suggest that regression analyses of SWB (and other ordinal) data should focus on the median rather than the mean. Median comparisons using parametric models such as the ordered probit and logit can readily be carried out with familiar statistical software such as Stata. We also show that estimating a semiparametric median ordered-response model, a task previously assumed to be impractical, is possible using a novel constrained mixed-integer optimization technique. We use GSS data to show that the famous Easterlin Paradox from the happiness literature holds for the US independently of any parametric assumptions.
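
    The parametric median comparison described above can be sketched with statsmodels' ordered-response model. The snippet below fits an ordered probit to synthetic data and reads off the predicted median category as the smallest category whose cumulative probability reaches one half; the data, variable names, and income levels are hypothetical, and this is not the paper's semiparametric estimator.

        import numpy as np
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        # Hypothetical data: a 5-point happiness score driven by log income.
        rng = np.random.default_rng(0)
        n = 2000
        log_income = rng.normal(10.0, 1.0, n)
        latent = 0.5 * log_income + rng.normal(size=n)
        happiness = np.digitize(latent, np.quantile(latent, [0.2, 0.4, 0.6, 0.8]))

        # Ordered probit fit (thresholds are estimated, so no constant in exog).
        result = OrderedModel(happiness, log_income[:, None], distr="probit").fit(
            method="bfgs", disp=False)

        # Median comparison: category probabilities at two income levels, then the
        # smallest category whose cumulative probability is at least one half.
        probs = np.asarray(result.predict(np.array([[9.0], [11.0]])))
        medians = (np.cumsum(probs, axis=1) >= 0.5).argmax(axis=1)
        print("predicted median category at log income 9 vs 11:", medians)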

    Spanning Tests for Markowitz Stochastic Dominance

    We derive properties of the cdf of random variables defined as saddle-type points of real-valued continuous stochastic processes. This facilitates the derivation of the first-order asymptotic properties of tests for stochastic spanning given some stochastic dominance relation. We define the concept of Markowitz stochastic dominance spanning and develop an analytical representation of the spanning property. We construct a non-parametric test for spanning based on subsampling and derive its asymptotic exactness and consistency. The spanning methodology determines whether introducing new securities or relaxing investment constraints improves the investment opportunity set of investors driven by Markowitz stochastic dominance. In an application to standard data sets of historical stock market returns, we reject market portfolio Markowitz efficiency as well as two-fund separation. Hence, we find evidence that equity management through base assets can outperform the market for investors with Markowitz-type preferences.
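
    Subsampling itself is a generic resampling device: recompute the test statistic on blocks of size b smaller than the sample and use their empirical distribution as the reference. The schematic Python function below illustrates only this mechanism with a placeholder statistic; the paper's spanning statistic and the rate scaling needed for formal asymptotic validity are omitted.

        import numpy as np

        def subsample_statistics(data, statistic, b):
            """Recompute `statistic` on every contiguous block of length b.

            `statistic` stands in for the spanning statistic; the scaling by the
            convergence rate required by subsampling theory is left out here.
            """
            data = np.asarray(data)
            n = len(data)
            return np.array([statistic(data[i:i + b]) for i in range(n - b + 1)])

        rng = np.random.default_rng(1)
        returns = rng.normal(0.0, 0.01, 500)
        full_stat = np.mean(returns)                          # placeholder statistic
        ref = subsample_statistics(returns, np.mean, b=50)    # subsample distribution
        # Crude decision rule: compare the full-sample value to a reference quantile.
        print(full_stat > np.quantile(ref, 0.95))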

    Minimizing value-at-risk in the single-machine total weighted tardiness problem

    The vast majority of the machine scheduling literature focuses on deterministic problems, in which all data is known with certainty a priori. This may be a reasonable assumption when the variability in the problem parameters is low. However, as variability in the parameters increases, incorporating this uncertainty explicitly into a scheduling model is essential to mitigate the resulting adverse effects. In this paper, we consider the celebrated single-machine total weighted tardiness (TWT) problem in the presence of uncertain problem parameters. We impose a probabilistic constraint on the random TWT and introduce a risk-averse stochastic programming model. In particular, the objective of the proposed model is to find a non-preemptive static job processing sequence that minimizes the value-at-risk (VaR) measure on the random TWT at a specified confidence level. Furthermore, we develop a lower bound on the optimal VaR that may also benefit alternate solution approaches in the future. In this study, we implement a tabu-search heuristic to obtain reasonably good feasible solutions and present results to demonstrate the effect of the risk parameter and the value of the proposed model with respect to a corresponding risk-neutral approach.
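
    Evaluating the objective for one candidate sequence is straightforward when uncertainty is represented by equally likely scenarios: compute the total weighted tardiness in each scenario and take the alpha-quantile. The sketch below does exactly that with made-up data; the paper's model, lower bound, and tabu-search heuristic are not reproduced.

        import numpy as np

        def twt_var(sequence, proc_scenarios, due_dates, weights, alpha):
            """Value-at-risk of total weighted tardiness for a fixed job sequence.

            proc_scenarios has one row per equally likely scenario and one column
            per job; VaR is the smallest TWT value covering an alpha fraction of
            the scenarios.
            """
            proc = np.asarray(proc_scenarios, dtype=float)[:, sequence]
            completion = np.cumsum(proc, axis=1)                       # completion times
            tardiness = np.maximum(completion - np.asarray(due_dates)[sequence], 0.0)
            twt = tardiness @ np.asarray(weights, dtype=float)[sequence]
            twt_sorted = np.sort(twt)
            k = int(np.ceil(alpha * len(twt_sorted))) - 1              # discrete quantile
            return float(twt_sorted[k])

        # Three jobs, four processing-time scenarios, 90% confidence level
        scenarios = [[3, 5, 2], [4, 6, 2], [3, 5, 3], [5, 7, 2]]
        print(twt_var([2, 0, 1], scenarios, due_dates=[6, 9, 4], weights=[1, 2, 1], alpha=0.9))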

    Constraint handling strategies in Genetic Algorithms application to optimal batch plant design

    Optimal batch plant design is a recurrent issue in Process Engineering, which can be formulated as a Mixed Integer Non-Linear Programming (MINLP) optimisation problem involving specific constraints, typically the respect of a time horizon for the synthesis of various products. Genetic Algorithms constitute a common option for the solution of these problems, but their basic operating mode is not always well suited to every kind of constraint treatment: if the constraints cannot be integrated in the variable encoding or accounted for through adapted genetic operators, their handling turns out to be a thorny issue. The point of this study is thus to test a few constraint handling techniques on a mid-size example in order to determine which one is best suited, in the framework of one particular problem formulation. The investigated methods are the elimination of infeasible individuals, the use of a penalty term added to the minimized criterion, the relaxation of the upper bounds on the discrete variables, dominance-based tournaments and, finally, a multiobjective strategy. The numerical computations, analysed in terms of result quality and of computational time, show the superiority of the elimination technique for the former criterion, but only when the latter does not become a bottleneck. Besides, when the problem complexity makes it too difficult to locate the feasible space randomly, a single tournament technique proves to be the most efficient one.
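
    Of the techniques listed, the penalty-term strategy is the simplest to write down: infeasible individuals keep their raw criterion value plus a term proportional to the constraint violation, so selection gradually pushes the population back into the feasible region. The snippet below is a minimal, generic illustration with a hypothetical weight, not the exact penalty scheme used in the study.

        def penalized_fitness(objective, violations, penalty_weight=1e3):
            """Value minimized by the GA: raw criterion plus weighted violations.

            violations holds the amount by which each constraint is exceeded
            (e.g. production time beyond the allowed horizon); zero when satisfied.
            """
            return objective + penalty_weight * sum(max(v, 0.0) for v in violations)

        # A feasible individual with a slightly worse raw objective now ranks
        # ahead of an infeasible one that exceeds the time horizon by 12 hours.
        print(penalized_fitness(105.0, [0.0]))   # 105.0
        print(penalized_fitness(100.0, [12.0]))  # 12100.0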