101 research outputs found

    A decomposition algorithm for robust lot sizing problem with remanufacturing option

    In this paper, we propose a decomposition procedure for constructing robust optimal production plans for reverse inventory systems. Our method is motivated by the need to overcome excessive computational time requirements, as well as the inaccuracies caused by imprecise representations of problem parameters. The method is based on a min-max formulation that avoids the excessive conservatism of the dualization technique employed by Wei et al. (2011). We perform a computational study using our decomposition framework on several classes of computer-generated test instances and report our experience. Bienstock and Özbay (2008) computed optimal base-stock levels for the traditional lot-sizing problem when the production cost is linear; we extend this work by considering return inventories and setup costs for production. We use the approach of Bertsimas and Sim (2004) to model the uncertainties in the input.
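
    The Bertsimas and Sim (2004) model referenced above caps how many uncertain parameters may deviate from their nominal values at once via a budget Gamma. As a minimal sketch of that idea only (the data and function name below are illustrative assumptions, not the paper's code), the worst-case cumulative demand under such a budget is obtained by letting the Gamma largest deviations reach their extremes:

def worst_case_cumulative_demand(d, dhat, gamma):
    """Worst-case total demand when at most `gamma` demands deviate (budget-of-uncertainty idea)."""
    nominal = sum(d)
    devs = sorted(dhat, reverse=True)      # largest deviations are used first
    full = int(gamma)                      # deviations used at full amplitude
    frac = gamma - full                    # fractional remainder of the budget
    extra = sum(devs[:full])
    if full < len(devs):
        extra += frac * devs[full]
    return nominal + extra

if __name__ == "__main__":
    d = [100, 120, 90, 110]                # nominal per-period demands (assumed data)
    dhat = [20, 30, 10, 25]                # maximum deviations (assumed data)
    for gamma in (0, 1, 2.5, 4):
        print(gamma, worst_case_cumulative_demand(d, dhat, gamma))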

    A biologically inspired network design model

    The network design problem is to select a subset of links in a transport network that satisfies passenger or cargo transportation demands while minimizing the overall cost of transportation. We propose a mathematical model of the foraging behaviour of the slime mould P. polycephalum to solve the network design problem and construct optimal transport networks. In our algorithm, the traffic flow between any two cities is estimated using a gravity model, and this flow is then imitated by the slime mould model. The model converges to a steady state, which represents a solution of the problem. We validate our approach on examples of major transport networks in Mexico and China. By comparing the networks produced by our approach with the man-made highways, networks developed by the slime mould, and a cellular automaton model inspired by slime mould, we demonstrate the flexibility and efficiency of our approach.
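
    For readers unfamiliar with the two ingredients mentioned above, the following sketch combines a gravity model for inter-city demand with a generic Physarum-solver update (edge conductivities are reinforced in proportion to the flux they carry). It is an illustration under assumed data and constants, not the authors' implementation:

import numpy as np

rng = np.random.default_rng(0)
n = 6
xy = rng.uniform(0, 100, size=(n, 2))          # city coordinates (assumed data)
pop = rng.uniform(1, 10, size=n)               # city "populations" (assumed data)
length = np.linalg.norm(xy[:, None] - xy[None, :], axis=2) + np.eye(n)

# Gravity model: demand between cities i and j grows with populations, decays with distance.
demand = np.outer(pop, pop) / length**2
np.fill_diagonal(demand, 0.0)

D = np.ones((n, n))                            # edge conductivities on the complete graph
np.fill_diagonal(D, 0.0)

def flux(D, src, snk, amount):
    """Solve Kirchhoff's equations for one source/sink pair and return the edge fluxes."""
    G = D / length
    L = np.diag(G.sum(axis=1)) - G             # weighted graph Laplacian
    b = np.zeros(n); b[src] = amount; b[snk] = -amount
    keep = [i for i in range(n) if i != snk]   # ground the sink at pressure 0
    p = np.zeros(n)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
    return G * (p[:, None] - p[None, :])

dt = 0.1
for _ in range(200):
    Q = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if demand[i, j] > 0:
                Q += np.abs(flux(D, i, j, demand[i, j]))
    D += dt * (Q - D)                          # reinforce used edges, decay unused ones
    np.fill_diagonal(D, 0.0)

print(np.round(D, 2))                          # surviving high-conductivity edges ~ designed network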

    Robust and Pareto Optimality of Insurance Contract

    The optimal insurance problem is a fast-growing topic concerned with the most efficient contract that an insurance player may obtain. The classical problem seeks the ideal contract under the assumption that the underlying risk distribution is known, i.e. ignoring parameter and model risk. Taking these sources of risk into account, the decision-maker aims to identify a robust optimal contract that is not sensitive to the chosen risk distribution. We focus on Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) based decisions, but further extensions to other risk measures are readily possible. The worst-case scenario and worst-case regret robust models, which have already been used in the robust optimisation literature on the investment portfolio problem, are discussed in this paper. Closed-form solutions are obtained for the VaR worst-case scenario model, while Linear Programming (LP) formulations are provided for all other cases. A caveat of robust optimisation is that the optimal solution may not be unique, and therefore it may not be economically acceptable, i.e. Pareto optimal. This issue is addressed numerically, and simple numerical methods are provided for constructing insurance contracts that are both robust and Pareto optimal. Our numerical illustrations show weak evidence in favour of our robust solutions for VaR-based decisions, while our robust methods are clearly preferred for CVaR-based decisions.
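
    As a small illustration of the quantities involved (not the paper's contracts or LP formulations), the sketch below computes VaR and CVaR for a discrete loss distribution and then takes the worst-case CVaR over a few candidate distributions, in the spirit of the worst-case scenario robust model; all scenario data are made up:

import numpy as np

def var_cvar(losses, probs, alpha=0.95):
    """Value-at-Risk and Conditional Value-at-Risk of a discrete loss distribution."""
    order = np.argsort(losses)
    losses = np.asarray(losses, dtype=float)[order]
    probs = np.asarray(probs, dtype=float)[order]
    cdf = np.cumsum(probs)
    k = np.searchsorted(cdf, alpha)            # first scenario with cdf >= alpha
    var = losses[k]
    # Average loss in the worst (1 - alpha) tail, splitting the probability atom at the VaR.
    tail = np.where(cdf > alpha, probs, 0.0)
    tail[k] = cdf[k] - alpha
    cvar = float(np.dot(tail, losses) / (1.0 - alpha))
    return float(var), cvar

if __name__ == "__main__":
    losses = [0.0, 10.0, 50.0, 200.0]          # assumed claim-size scenarios
    candidates = {                             # candidate distributions (model/parameter risk)
        "baseline":   [0.70, 0.20, 0.08, 0.02],
        "heavy_tail": [0.55, 0.25, 0.12, 0.08],
    }
    results = {name: var_cvar(losses, p) for name, p in candidates.items()}
    worst_case_cvar = max(cvar for _, cvar in results.values())
    print(results, worst_case_cvar)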

    Research trends in combinatorial optimization

    Acknowledgments: This work has been partially funded by the Spanish Ministry of Science, Innovation, and Universities through the project COGDRIVE (DPI2017-86915-C3-3-R). In this context, we would also like to thank the Karlsruhe Institute of Technology. Open access funding enabled and organized by Projekt DEAL.

    The decision rule approach to optimization under uncertainty: methodology and applications

    Dynamic decision-making under uncertainty has a long and distinguished history in operations research. Due to the curse of dimensionality, solution schemes that naïvely partition or discretize the support of the random problem parameters are limited to small and medium-sized problems, or they require restrictive modeling assumptions (e.g., absence of recourse actions). In the last few decades, several solution techniques have been proposed that aim to alleviate the curse of dimensionality. Amongst these is the decision rule approach, which faithfully models the random process and instead approximates the feasible region of the decision problem. In this paper, we survey the major theoretical findings relating to this approach, and we investigate its potential in two application areas.
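
    A common instance of the decision rule approach restricts recourse decisions to affine functions of the uncertainty. The sketch below, with illustrative symbols not taken from the survey, shows how an affine rule turns a robust constraint over a box uncertainty set into a single deterministic check via an L1 norm:

# Affine decision rule over a box uncertainty set xi in [-1, 1]^k:
# recourse y(xi) = y0 + Y @ xi, and the robust constraint
#   x + y(xi) >= d0 + d @ xi   for all xi in the box
# holds iff its worst case over the box does, which reduces to an L1 norm.
# All symbols (x, y0, Y, d0, d) are illustrative assumptions.
import numpy as np

def robustly_feasible(x, y0, Y, d0, d):
    """Check x + y0 + Y@xi >= d0 + d@xi for every xi with |xi_j| <= 1."""
    coeff = np.asarray(Y) - np.asarray(d)      # xi-dependent part of the constraint slack
    worst_slack = x + y0 - d0 - np.abs(coeff).sum()
    return worst_slack >= 0

if __name__ == "__main__":
    d0, d = 10.0, np.array([2.0, -1.0])        # uncertain demand d0 + d @ xi
    x = 8.0                                    # here-and-now decision
    y0, Y = 5.0, np.array([2.0, -1.0])         # candidate affine recourse rule
    print(robustly_feasible(x, y0, Y, d0, d))  # True: the rule tracks the uncertain demand exactly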

    Study the problem of performance evaluation of Earth observation satellite mission planning
