
    Safe Approximations of Chance Constraints Using Historical Data

    This paper proposes a new way to construct uncertainty sets for robust optimization. Our approach uses the available historical data for the uncertain parameters and is based on goodness-of-fit statistics. It guarantees that the probability that the uncertain constraint holds is at least the prescribed value. Compared to existing safe approximation methods for chance constraints, our approach directly uses the historical-data information and leads to tighter uncertainty sets and therefore to better objective values. This improvement is especially significant when the number of uncertain parameters is low. Other advantages of our approach are that it can handle joint chance constraints easily, it can deal with uncertain parameters that are dependent, and it can be extended to nonlinear inequalities. Several numerical examples illustrate the validity of our approach.
    Keywords: robust optimization; chance constraint; phi-divergence; goodness-of-fit statistics
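The phi-divergence idea from the keywords can be illustrated with a minimal sketch: build a chi-squared (a phi-divergence) confidence set around the empirical frequencies of a discrete uncertain parameter, using the standard goodness-of-fit radius. All names (`in_uncertainty_set`, `alpha`) and the data are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch only: a chi-squared (phi-divergence) uncertainty set for
# a discrete distribution, built from historical observation counts. The set
# contains all probability vectors q whose Pearson chi-squared divergence from
# the empirical frequencies p_hat is at most rho = chi2.ppf(1-alpha, m-1) / N,
# a standard goodness-of-fit radius.
import numpy as np
from scipy.stats import chi2

def chi_sq_divergence(q, p):
    """Pearson chi-squared divergence sum_i (q_i - p_i)^2 / p_i."""
    return float(np.sum((q - p) ** 2 / p))

def in_uncertainty_set(q, counts, alpha=0.05):
    counts = np.asarray(counts, dtype=float)
    N, m = counts.sum(), len(counts)
    p_hat = counts / N                        # empirical frequencies
    rho = chi2.ppf(1 - alpha, df=m - 1) / N   # divergence radius shrinks with N
    return chi_sq_divergence(np.asarray(q, dtype=float), p_hat) <= rho

counts = [48, 32, 20]                                  # 100 historical observations
print(in_uncertainty_set([0.48, 0.32, 0.20], counts))  # empirical point: True
print(in_uncertainty_set([0.10, 0.10, 0.80], counts))  # distant point: False
```

Note how the radius `rho` decreases as the number of observations `N` grows, which is the mechanism behind the abstract's claim of tighter uncertainty sets when more historical data is available.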

    Robust Counterparts of Inequalities Containing Sums of Maxima of Linear Functions

    This paper addresses the robust counterparts of optimization problems containing sums of maxima of linear functions and proposes several reformulations. These problems include many practical problems, e.g. problems with sums of absolute values, and arise when taking the robust counterpart of a linear inequality that is affine in the decision variables, affine in a parameter with box uncertainty, and affine in a parameter with general uncertainty. In the literature, the reformulation that is exact when there is no uncertainty is often used. However, in robust optimization this reformulation gives an inferior solution and provides a pessimistic view. We observe that in many papers this conservatism is not mentioned. Some papers have recognized this problem, but existing solutions are either too conservative, or their performance for different uncertainty regions is not known, a comparison between them is not available, and they are restricted to specific problems. We provide techniques for general problems and compare them with numerical examples in inventory management, regression and brachytherapy. Based on these examples, we give tractable recommendations for reducing the conservatism.
    Keywords: robust optimization; sum of maxima of linear functions; biaffine uncertainty; robust conic quadratic constraints
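The conservatism discussed in the abstract can be made concrete with a small numeric sketch. For a constraint with a sum of absolute values under a shared box uncertainty, bounding each term by its own worst case lets the adversary choose a different realization per term, which overstates the true worst case, where a single realization must serve all terms simultaneously. The data below is a hypothetical example chosen so the two effects differ.

```python
# Hypothetical illustration of term-wise conservatism: for
# sum_i |a' x - b_i| with a in a box, the term-wise bound
# (sum of per-term worst cases) exceeds the exact worst case
# (one a for all terms at once). Both maxima are attained at
# box vertices because each term is convex in a.
import itertools
import numpy as np

x = np.array([1.0, 2.0])
b = np.array([-1.5, -0.5])
# box uncertainty: each component of a within 0.2 of its nominal value
nominal = np.array([1.0, -1.0])
vertices = [nominal + 0.2 * np.array(s)
            for s in itertools.product([-1, 1], repeat=2)]

terms = lambda a: np.abs(a @ x - b)               # the two absolute values

exact = max(terms(a).sum() for a in vertices)     # one a serves both terms
conservative = sum(max(terms(a)[i] for a in vertices) for i in range(2))
print(exact < conservative)                       # True: term-wise bound is looser
```

Here the first term is maximized at one vertex and the second at the opposite vertex, so decoupling them roughly doubles the bound; this is the "pessimistic view" the abstract refers to.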

    On Markov Chains with Uncertain Data

    In this paper, a general method is described to determine uncertainty intervals for performance measures of Markov chains, given an uncertainty region for the parameters of the Markov chains. We investigate the effects of uncertainties in the transition probabilities on the limiting distributions, on the state probabilities after n steps, on mean sojourn times in transient states, and on absorption probabilities for absorbing states. We show that the uncertainty effects can be calculated by solving linear programming problems in the case of interval uncertainty for the transition probabilities, and by second-order cone optimization in the case of ellipsoidal uncertainty. Many examples are given, especially Markovian queueing examples, to illustrate the theory.
    Keywords: Markov chain; interval uncertainty; ellipsoidal uncertainty; linear programming; second-order cone optimization
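For the simplest case the abstract mentions, interval uncertainty and a one-step state probability, the bound really is a linear program: the entries of the transition matrix are the variables, each row must sum to one, and the objective is linear in those entries. The sketch below (function name and data are illustrative, and the paper's general method covers far more) uses `scipy.optimize.linprog`.

```python
# Minimal sketch: bound the probability of being in state j after one step of
# a Markov chain whose transition probabilities are only known to lie in
# intervals. Since (pi0 P)_j = sum_i pi0_i * P[i, j] is linear in the entries
# of P, the min and max over the interval box (with rows summing to 1) are LPs.
import numpy as np
from scipy.optimize import linprog

def one_step_bounds(pi0, lo, hi, j):
    """Min and max of the one-step probability of state j."""
    pi0, lo, hi = (np.asarray(a, dtype=float) for a in (pi0, lo, hi))
    n = len(pi0)
    c = np.zeros(n * n)                  # P flattened row-major: P[i,k] -> i*n+k
    c[j::n] = pi0                        # coefficient of P[i, j] is pi0[i]
    A_eq = np.zeros((n, n * n))          # each row of P sums to 1
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    bounds = list(zip(lo.ravel(), hi.ravel()))
    low = linprog(c, A_eq=A_eq, b_eq=np.ones(n), bounds=bounds).fun
    high = -linprog(-c, A_eq=A_eq, b_eq=np.ones(n), bounds=bounds).fun
    return low, high

pi0 = [1.0, 0.0]                         # start in state 0
lo = [[0.6, 0.2], [0.3, 0.5]]            # element-wise lower bounds on P
hi = [[0.8, 0.4], [0.5, 0.7]]            # element-wise upper bounds on P
print(one_step_bounds(pi0, lo, hi, j=1))
```

Limiting distributions and absorption probabilities are not linear in the entries of P, which is why the paper's general treatment of those measures is the non-trivial contribution.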

    Immunizing Conic Quadratic Optimization Problems Against Implementation Errors

    We show that the robust counterpart of a convex quadratic constraint with ellipsoidal implementation error is equivalent to a system of conic quadratic constraints. To prove this result, we first derive a sharper version of the S-lemma for the case in which the two matrices involved can be simultaneously diagonalized. This extension of the S-lemma may also be useful for other purposes. We extend the result to the case in which the uncertainty region is the intersection of two convex quadratic inequalities. The robust counterpart for this case is also equivalent to a system of conic quadratic constraints. Results for convex conic quadratic constraints with implementation error are also given. We conclude by showing how the theory developed can be applied in robust linear optimization with jointly uncertain parameters and implementation errors, in sequential robust quadratic programming, in Taguchi’s robust approach, and in the adjustable robust counterpart.
    Keywords: conic quadratic programming; hidden convexity; implementation error; robust optimization; simultaneous diagonalizability; S-lemma

    Response Surface Methodology's Steepest Ascent and Step Size Revisited

    Response Surface Methodology (RSM) searches for the input combination maximizing the output of a real system or its simulation. RSM is a heuristic that locally fits first-order polynomials, and estimates the corresponding steepest ascent (SA) paths. However, SA is scale-dependent, and its step size is selected intuitively. To tackle these two problems, this paper derives novel techniques combining mathematical statistics and mathematical programming. Technique 1, called 'adapted' SA (ASA), accounts for the covariances between the components of the estimated local gradient. ASA is scale-independent. The step-size problem is solved tentatively. Technique 2 does follow the SA direction, but with a step size inspired by ASA. Mathematical properties of the two techniques are derived and interpreted; numerical examples illustrate these properties. The search directions of the two techniques are explored in Monte Carlo experiments. These experiments show that, in general, ASA gives a better search direction than SA.
    Keywords: response surface methodology
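The contrast between SA and a covariance-adjusted direction can be sketched as follows. This is an illustration only, under the assumption that the adjusted direction rescales the estimated gradient by the inverse covariance matrix of its estimator (proportional to the gradient block of the inverse information matrix); the paper's exact ASA formulas may differ.

```python
# Illustrative sketch: fit a first-order polynomial to noisy output by
# ordinary least squares, then compare the classic steepest-ascent direction
# (the estimated gradient itself, which is scale-dependent) with an adjusted
# direction that rescales the gradient by the inverse covariance matrix of
# its estimator. Not the paper's exact ASA technique.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))             # two inputs in coded units
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 20)

D = np.column_stack([np.ones(len(X)), X])        # design matrix [1, x1, x2]
beta, *_ = np.linalg.lstsq(D, y, rcond=None)     # OLS fit of the polynomial
grad = beta[1:]                                  # estimated local gradient

cov = np.linalg.inv(D.T @ D)[1:, 1:]             # gradient block of (D'D)^-1
sa = grad / np.linalg.norm(grad)                 # steepest ascent direction
asa = np.linalg.solve(cov, grad)                 # covariance-adjusted direction
asa /= np.linalg.norm(asa)
print("SA direction:      ", sa)
print("adjusted direction:", asa)
```

On a roughly orthogonal design the two directions nearly coincide; the covariance adjustment matters when the design, or a rescaling of the inputs, makes the gradient components unequally well estimated.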

    The Effect of Transformations on the Approximation of Univariate (Convex) Functions with Applications to Pareto Curves

    In the literature, methods for the construction of piecewise linear upper and lower bounds for the approximation of univariate convex functions have been proposed. We study the effect of the use of increasing convex or increasing concave transformations on the approximation of univariate (convex) functions. In this paper, we show that these transformations can be used to construct upper and lower bounds for nonconvex functions. Moreover, we show that by using such transformations of the input variable or the output variable, we obtain tighter upper and lower bounds for the approximation of convex functions than without these transformations. We show that these transformations can be applied to the approximation of a (convex) Pareto curve that is associated with a (convex) bi-objective optimization problem.
    Keywords: approximation theory; convexity; convex/concave transformation; Pareto curve
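The piecewise linear sandwich that this line of work builds on can be sketched in a few lines: for a convex function, tangent lines at sample points give a valid lower bound and chords between consecutive points a valid upper bound. The function, breakpoints, and helper names below are illustrative assumptions.

```python
# Minimal sketch of the standard piecewise-linear sandwich for a univariate
# convex function: the maximum of tangent lines underestimates f, and linear
# interpolation (chords) between breakpoints overestimates f, by convexity.
import numpy as np

f = np.exp                    # convex test function
df = np.exp                   # its derivative

xs = np.linspace(0.0, 2.0, 5)  # breakpoints on [0, 2]

def lower(x):
    """Max over tangents f(xi) + f'(xi)(x - xi): a valid lower bound."""
    return np.max(f(xs) + df(xs) * (x - xs))

def upper(x):
    """Chord interpolation between consecutive breakpoints: an upper bound."""
    return np.interp(x, xs, f(xs))

x = 0.7
print(lower(x) <= f(x) <= upper(x))   # True for any x in [0, 2]
```

Applying an increasing concave transformation to the output (e.g. bounding log f instead of f when log f is still convex) and mapping the bounds back tightens this sandwich, which is the effect the abstract studies.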