
    A Practical Guide to Robust Optimization

    Robust optimization is a young and active research field that has been developed mainly in the last 15 years. It is highly useful in practice, since it is tailored to the information at hand and leads to computationally tractable formulations. It is therefore remarkable that real-life applications of robust optimization are still lagging behind; there is much more potential for real-life applications than has been exploited hitherto. The aim of this paper is to help practitioners understand robust optimization and apply it successfully in practice. We provide a brief introduction to robust optimization and describe important do's and don'ts for using it in practice. We use many small examples to illustrate our discussions.
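    To make the "computationally tractable formulations" concrete, here is a minimal sketch (our own illustration, not taken from the paper) of the robust counterpart of a single linear constraint under box uncertainty; all data are invented for illustration.

```python
import numpy as np

# Sketch: the robust counterpart of a linear constraint a^T x <= b, where
# each coefficient a_i may lie anywhere in [abar_i - delta_i, abar_i + delta_i],
# is the deterministic constraint  abar^T x + delta^T |x| <= b.

rng = np.random.default_rng(0)
abar = np.array([2.0, -1.0, 0.5])   # nominal coefficients (illustrative)
delta = np.array([0.3, 0.2, 0.1])   # uncertainty half-widths (illustrative)
b = 4.0
x = np.array([1.0, 1.5, -2.0])      # a candidate solution

robust_lhs = abar @ x + delta @ np.abs(x)
print("robust counterpart LHS:", robust_lhs, "<= b:", robust_lhs <= b)

# Sanity check: sample many coefficient vectors from the box; no sampled
# left-hand side ever exceeds the robust-counterpart value.
a_samples = abar + delta * rng.uniform(-1, 1, size=(10_000, 3))
worst_sampled = (a_samples @ x).max()
print("worst sampled LHS:", worst_sampled)
```

The reformulation is exact because the worst case over the box is attained at sign(x_i)-extreme coefficients, which is what the absolute-value term captures.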

    Reducing conservatism in robust optimization


    Nested Maximin Latin Hypercube Designs

    In the field of design of computer experiments (DoCE), Latin hypercube designs are frequently used for the approximation and optimization of black-boxes. In certain situations, we need a special type of designs consisting of two separate designs, one being a subset of the other. These nested designs can be used to deal with training and test sets, models with different levels of accuracy, linking parameters, and sequential evaluations. In this paper, we construct nested maximin Latin hypercube designs for up to ten dimensions. We show that different types of grids should be considered when constructing nested designs and discuss how to determine which grid to use for a specific application. To determine nested maximin designs for dimensions higher than two, four different variants of the ESE-algorithm of Jin et al. (2005) are introduced and compared. In the appendix, maximin distances for different numbers of points are provided; the corresponding nested maximin designs can be found on the website http://www.spacefillingdesigns.nl.
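    As a hedged illustration of the maximin criterion used above, here is a random-restart baseline (our own sketch, not the ESE-algorithm variants of Jin et al. (2005) compared in the paper): generate Latin hypercubes and keep the one with the largest minimum pairwise distance.

```python
import numpy as np
from itertools import combinations

def random_lhd(n, d, rng):
    """n-point Latin hypercube in d dimensions: each column is a
    permutation of the grid {0, ..., n-1}, so every one-dimensional
    projection hits every level exactly once."""
    return np.column_stack([rng.permutation(n) for _ in range(d)])

def min_pairwise_dist(X):
    """The maximin criterion: the smallest distance between any two points."""
    return min(np.linalg.norm(X[i] - X[j])
               for i, j in combinations(range(len(X)), 2))

rng = np.random.default_rng(42)
# Keep the best of 200 random Latin hypercubes (a weak but simple search).
best = max((random_lhd(8, 2, rng) for _ in range(200)), key=min_pairwise_dist)
print("maximin separation over 200 restarts:", round(min_pairwise_dist(best), 3))
```

A serious construction would locally perturb designs rather than restart blindly, which is roughly what the ESE-type algorithms do.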

    Optimization with Constraint Learning: A Framework and Survey

    Many real-life optimization problems contain one or more constraints or objectives for which no explicit formulas exist. If data are available, however, they can be used to learn the constraints. The benefits of this approach are clear, but the process needs to be carried out in a structured manner. This paper therefore provides a framework for Optimization with Constraint Learning (OCL), which we believe will help to formalize and direct the process of learning constraints from data. The framework includes the following steps: (i) setup of the conceptual optimization model, (ii) data gathering and preprocessing, (iii) selection and training of predictive models, (iv) resolution of the optimization model, and (v) verification and improvement of the optimization model. We then review the recent OCL literature in light of this framework and highlight current trends, as well as areas for future research.
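    Steps (ii)-(iv) can be sketched in miniature: fit a surrogate to data for an unknown constraint g(x) <= 0, then embed the learned constraint in a (here trivially small, grid-search) optimization. All names, data, and the linear surrogate are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)

# (ii) data for the unknown constraint; the "true" g is hidden in practice
X = rng.uniform(0, 1, size=(100, 2))
def g_true(x):
    return x[..., 0] + 2 * x[..., 1] - 1.5   # unknown to the modeler
y = g_true(X)

# (iii) learn a linear surrogate g_hat(x) = w0*x0 + w1*x1 + c by least squares
A = np.column_stack([X, np.ones(len(X))])
w0, w1, c = np.linalg.lstsq(A, y, rcond=None)[0]

# (iv) maximize x0 + x1 subject to the *learned* constraint g_hat(x) <= 0,
# solved here by brute-force grid search over [0, 1]^2
grid = np.stack(np.meshgrid(np.linspace(0, 1, 101),
                            np.linspace(0, 1, 101)), axis=-1).reshape(-1, 2)
mask = grid @ np.array([w0, w1]) + c <= 1e-9   # small slack for round-off
feasible = grid[mask]
x_star = feasible[feasible.sum(axis=1).argmax()]
print("learned coefficients:", round(w0, 2), round(w1, 2), round(c, 2))
print("optimizer under learned constraint:", x_star)
```

Step (v) would then check x_star against fresh observations of the true constraint and refine the surrogate if it is violated.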

    The impact of the existence of multiple adjustable robust solutions

    In this note we show that multiple solutions exist for the production-inventory example in the seminal paper on adjustable robust optimization in Ben-Tal et al. (Math Program 99(2):351–376, 2004). All these optimal robust solutions have the same worst-case objective value, but the mean objective values differ by up to 21.9%, and for individual realizations this difference can be up to 59.4%. We show via additional experiments that these differences in performance become negligible when using a folding horizon approach. The aim of this paper is to convince users of adjustable robust optimization to check for the existence of multiple solutions. Using the production-inventory example and an illustrative toy example, we deduce three important implications of the existence of multiple optimal robust solutions. First, if one neglects the existence of multiple solutions, one can wrongly conclude that the adjustable robust solution does not outperform the nonadjustable robust solution. Second, even when it is known a priori that the adjustable and nonadjustable robust solutions are equivalent in worst-case objective value, they may still differ in mean objective value. Third, even if affine decision rules are known to yield (near-)optimal worst-case performance in the adjustable robust optimization setting, nonlinear decision rules can still yield much better mean objective values.
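    The phenomenon is easy to reproduce with a toy problem of our own (not the paper's production-inventory example): maximize the worst case over z in [0, 1] of f(x, z) = x1 - z*x2, with x1 <= 1 and x2 in [-1, 0]. Since -z*x2 >= 0 for such x2, the worst case (at z = 0) equals x1, so every point (1, x2) with x2 in [-1, 0] is worst-case optimal, yet the mean objective over uniform z is 1 - 0.5*x2, which differs across these optima.

```python
import numpy as np

z = np.linspace(0, 1, 1001)   # dense grid over the uncertainty set

def worst_case(x1, x2):
    """Worst-case objective of f(x, z) = x1 - z*x2 over z in [0, 1]."""
    return (x1 - z * x2).min()

def mean_obj(x1, x2):
    """Mean objective over uniformly distributed z."""
    return (x1 - z * x2).mean()

# Two robust-optimal solutions: identical worst case, different means.
for x2 in (0.0, -1.0):
    print(f"x = (1, {x2}): worst case = {worst_case(1.0, x2):.2f}, "
          f"mean = {mean_obj(1.0, x2):.2f}")
```

Among all worst-case optima, (1, -1) dominates (1, 0) on every non-worst realization, which is precisely why checking for multiple robust solutions matters.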

    Better Routing in Developing Regions: Weather and Satellite-Informed Road Speed Prediction

    Inaccurate digital road networks significantly complicate the use of analytics in developing, data-scarce environments. For routing purposes, the most important characteristic of a digital road network is the information about travel times/speeds of roads. In developing regions, these are often unknown and heavily dependent on the weather (e.g., rainfall). This may, for instance, cause vehicles to experience longer travel times than expected. Current methods to predict travel speeds are designed for the short upcoming period (minutes or hours). They make use of data about the position of the vehicle, the average speed on a given road (section), or patterns of traffic flow in certain periods, which are typically not available in developing regions. This paper presents a novel deep learning method that predicts the travel speeds for all roads in a data-scarce environment using GPS trajectory data and open-source satellite imagery. The method is capable of predicting speeds for previously unobserved roads and incorporates specific circumstances, which are characterized by the time of the day and the rainfall during the last hour. In collaboration with the organization PemPem, we perform a case study in which we show that our proposed procedure predicts the average travel speed of roads in the area (including roads that do not appear in the GPS trajectory data) with an average RMSE of 8.5 km/h.
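    The evaluation setup can be illustrated with a far simpler baseline than the paper's deep learning method: predict each road's speed by its historical mean from GPS observations, fall back to the global mean for unobserved roads, and score with RMSE in km/h. Road IDs and speeds below are invented for illustration.

```python
import math

# Historical per-road speed observations (km/h) derived from GPS traces.
train = {"road_a": [32.0, 28.0, 30.0], "road_b": [55.0, 65.0]}
# Held-out (road, observed speed) pairs; "road_c" has no training data.
test = [("road_a", 27.0), ("road_b", 58.0), ("road_c", 40.0)]

global_mean = (sum(s for v in train.values() for s in v)
               / sum(len(v) for v in train.values()))

def predict(road):
    """Per-road mean speed, global mean for unobserved roads."""
    if road in train:
        return sum(train[road]) / len(train[road])
    return global_mean

rmse = math.sqrt(sum((predict(r) - s) ** 2 for r, s in test) / len(test))
print(f"baseline RMSE: {rmse:.2f} km/h")
```

A method like the paper's improves on such a baseline precisely where it matters in data-scarce settings: on roads with no trajectory observations and under changing weather.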

    A universal and structured way to derive dual optimization problem formulations

    The dual problem of a convex optimization problem can be obtained in a relatively simple and structured way by using a well-known result in convex analysis, namely Fenchel's duality theorem. This alternative way of forming a strong dual problem is the subject of this paper. We recall some standard results from convex analysis and then discuss how the dual problem can be written in terms of the conjugates of the objective function and the constraint functions. This is a didactically valuable method to explicitly write the dual problem. We demonstrate the method by deriving dual problems for several classical problems and also for a practical model for radiotherapy treatment planning, for which deriving the dual problem using other methods is a more tedious task. Additional material is presented in the appendices, including useful tables for finding conjugate functions of many functions.
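    As a small worked instance of the conjugate-based approach (written in one common sign convention for Fenchel duality, which may differ from the paper's), consider the LP min { c^T x : Ax <= b }:

```latex
% Fenchel duality, one common convention:
%   \inf_x \{ f(x) + g(Ax) \} = \sup_y \{ -f^*(A^\top y) - g^*(-y) \}.
% For the LP take f(x) = c^\top x and g(u) = \delta_{\{u \le b\}}(u),
% the indicator of the right-hand-side constraint.
\begin{align*}
f^*(w) &= \sup_x \{ w^\top x - c^\top x \}
        = \begin{cases} 0 & w = c, \\ +\infty & \text{otherwise,} \end{cases}\\
g^*(v) &= \sup_{u \le b} v^\top u
        = \begin{cases} b^\top v & v \ge 0, \\ +\infty & \text{otherwise.} \end{cases}
\end{align*}
% Substituting: $-f^*(A^\top y)$ forces $A^\top y = c$, and $-g^*(-y)$ is
% finite only for $y \le 0$, with value $b^\top y$ -- giving the familiar dual
\[
\max \; b^\top y \quad \text{s.t.} \quad A^\top y = c, \;\; y \le 0.
\]
```

The same two-conjugate recipe extends to the nonlinear problems treated in the paper, with the conjugate tables in the appendices supplying f* and g*.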