
    Adaptive Two-stage Stochastic Programming with an Application to Capacity Expansion Planning

    Multi-stage stochastic programming is a well-established framework for sequential decision making under uncertainty that seeks policies fully adapted to the uncertainty. Often such flexible policies are not desirable, and the decision maker may need to commit to a set of actions for a number of planning periods. Two-stage stochastic programming might be better suited to such settings, where the decisions for all periods are made here-and-now and do not adapt to the uncertainty realized. In this paper, we propose a novel alternative approach in which the stages are not predetermined but are part of the optimization problem. Each component of the decision policy has an associated revision point, a period prior to which the decision is predetermined and after which it is revised to adjust to the uncertainty realized thus far. We motivate this setting using the multi-period newsvendor problem by deriving an optimal adaptive policy. We label the proposed approach adaptive two-stage stochastic programming and provide a generic mixed-integer programming formulation for finite stochastic processes. We show that adaptive two-stage stochastic programming is NP-hard in general. Next, we derive bounds on the value of adaptive two-stage programming in comparison with the two-stage and multi-stage approaches for a specific problem structure inspired by the capacity expansion planning problem. Since directly solving the mixed-integer linear program associated with the adaptive two-stage approach can be very costly for large instances, we propose several heuristic solution algorithms based on the bound analysis and provide approximation guarantees for these heuristics. Finally, we present an extensive computational study on an electricity generation capacity expansion planning problem and demonstrate the computational and practical impact of the proposed approach from various perspectives.
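    To make the revision-point idea concrete, the toy sketch below (a minimal illustration, not the paper's mixed-integer formulation) enumerates candidate revision points for a small multi-period newsvendor on a handful of demand scenarios: orders before the revision point are fixed here-and-now, while orders from the revision point onward are chosen per observed demand history. All costs, scenarios, and the grid-search subroutine are assumptions.

```python
# Minimal sketch of the revision-point idea (assumed data; not the paper's MIP formulation).
HOLD, BACKLOG = 1.0, 4.0          # per-unit overage / underage costs (assumed)
SCENARIOS = [                     # equally likely demand paths over T = 3 periods (assumed)
    (10, 12, 14), (10, 18, 20), (25, 27, 30), (25, 20, 15),
]
PROB = 1.0 / len(SCENARIOS)
T = len(SCENARIOS[0])

def period_cost(q, d):
    return HOLD * max(q - d, 0) + BACKLOG * max(d - q, 0)

def best_cost(demands):
    """Expected cost of the best single order level against equally weighted demands."""
    return min(sum(PROB * period_cost(q, d) for d in demands) for q in set(demands))

def expected_cost(revision):
    """Orders for periods before `revision` are here-and-now (common to all scenarios);
    later orders are revised once, per demand history observed before `revision`."""
    cost = 0.0
    for t in range(T):
        if t < revision:                              # one common order for every scenario
            cost += best_cost([s[t] for s in SCENARIOS])
        else:                                         # one order per observed history
            groups = {}
            for s in SCENARIOS:
                groups.setdefault(s[:revision], []).append(s)
            cost += sum(best_cost([s[t] for s in grp]) for grp in groups.values())
    return cost

for r in range(T + 1):   # r = 0 and r = T both reduce to a plain two-stage policy
    print(f"revision point {r}: expected cost {expected_cost(r):.2f}")
```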

    Exploiting Anonymity in Approximate Linear Programming: Scaling to Large Multiagent MDPs (Extended Version)

    Many exact and approximate solution methods for Markov Decision Processes (MDPs) attempt to exploit structure in the problem and are based on factorization of the value function. Multiagent settings in particular, however, are known to suffer from an exponential increase in value component sizes as interactions become denser, meaning that approximation architectures are restricted in the problem sizes and types they can handle. We present an approach to mitigate this limitation for certain types of multiagent systems, exploiting a property that can be thought of as "anonymous influence" in the factored MDP. Anonymous influence summarizes joint variable effects efficiently whenever the explicit representation of variable identity in the problem can be avoided. We show how representational benefits from anonymity translate into computational efficiencies, both for general variable elimination in a factor graph and, in particular, for the approximate linear programming solution to factored MDPs. The latter allows linear programming to scale to factored MDPs that were previously unsolvable. Our results are shown for the control of a stochastic disease process over a densely connected graph with 50 nodes and 25 agents. (Comment: extended version of an AAAI 2016 paper.)
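    As a rough illustration of the "anonymous influence" idea (not the paper's algorithm), the sketch below marginalises a factor that depends only on how many of a node's neighbours are active: grouping assignments by count needs k+1 terms instead of 2^k, which is where the representational and computational savings come from. The infection-pressure function, the activation probability, and k are assumptions.

```python
from itertools import product
from math import comb

k = 12                        # number of neighbour variables (assumed)
p_active = 0.3                # each neighbour is active independently with this probability (assumed)

def factor_by_count(c):       # e.g. infection pressure grows with the number of active neighbours
    return 1.0 - 0.8 ** c

# explicit representation: enumerate all 2^k joint assignments of the neighbours
explicit = sum(
    factor_by_count(sum(a)) * p_active ** sum(a) * (1 - p_active) ** (k - sum(a))
    for a in product((0, 1), repeat=k)
)

# "anonymous" representation: neighbour identity never matters, so group
# assignments by their count and weight each count by a binomial coefficient
anonymous = sum(
    factor_by_count(c) * comb(k, c) * p_active ** c * (1 - p_active) ** (k - c)
    for c in range(k + 1)
)

print(f"explicit  ({2 ** k:5d} terms): {explicit:.6f}")
print(f"anonymous ({k + 1:5d} terms): {anonymous:.6f}")   # same value, exponentially fewer terms
```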

    Stochastic single machine scheduling problem as a multi-stage dynamic random decision process

    In this work, we study a stochastic single machine scheduling problem in which learning effects on processing times, sequence-dependent setup times, and machine configuration selection are considered simultaneously. More precisely, the machine works under a set of configurations and requires stochastic sequence-dependent setup times to switch from one configuration to another. Also, the stochastic processing time of a job is a function of its position and the machine configuration. The objective is to find the sequence of jobs and choose a configuration for processing each job so as to minimize the makespan. We first show that the proposed problem can be formulated through two-stage and multi-stage stochastic programming models, which are challenging from a computational point of view. Then, by viewing the problem as a multi-stage dynamic random decision process, a new deterministic approximation-based formulation is developed. The method first derives a mixed-integer non-linear model based on the concept of accessibility to all possible and available alternatives at each stage of the decision-making process. Then, to solve the problem efficiently, a new accessibility measure is defined to convert the model into the search for a shortest path through the stages. Extensive computational experiments are carried out on various sets of instances. We discuss and compare the results obtained by solving the plain stochastic models with those obtained by the deterministic approximation approach. Our approximation shows excellent performance in terms of both solution accuracy and computational time.
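    The shortest-path reading of the stage-wise decision process can be sketched as follows (a hedged toy, not the paper's model: the job sequence is taken as fixed, the accessibility measure is replaced by plain expected times, and all data are assumed). Each stage is a job position, each node a machine configuration, and an arc weight is the expected setup plus expected processing time; a layer-by-layer shortest path then yields an approximate expected makespan.

```python
import random

random.seed(0)
N_JOBS, CONFIGS = 5, 3                                  # problem size (assumed)

# expected processing time of the job in position j under configuration c (assumed data)
proc = [[random.uniform(2, 8) for _ in range(CONFIGS)] for _ in range(N_JOBS)]
# expected setup time to switch from configuration a to configuration b (assumed data)
setup = [[0.0 if a == b else random.uniform(1, 3) for b in range(CONFIGS)]
         for a in range(CONFIGS)]

# dist[c] = shortest expected completion time up to the current position, ending in config c
dist = [proc[0][c] for c in range(CONFIGS)]             # first position: no setup yet
for j in range(1, N_JOBS):                              # one stage per job position
    dist = [min(dist[a] + setup[a][b] + proc[j][b] for a in range(CONFIGS))
            for b in range(CONFIGS)]

print(f"approximate expected makespan: {min(dist):.2f}")
```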

    Mitigating Uncertainty via Compromise Decisions in Two-stage Stochastic Linear Programming

    Stochastic Programming (SP) has long been considered a well-justified yet computationally challenging paradigm for practical applications. Computational studies in the literature often involve approximating a large number of scenarios by a small number of scenarios processed via deterministic solvers, or running Sample Average Approximation on some class of high-performance machines so that statistically acceptable bounds can be obtained. In this paper we show that for a class of stochastic linear programming problems, an alternative approach known as Stochastic Decomposition (SD) can provide solutions of similar quality in far less computational time, using today's ordinary desktop or laptop machines. In addition to these compelling computational results, we also provide a stronger convergence result for SD and introduce a new solution concept which we refer to as the compromise decision. This new concept is attractive for sampling-based convex optimization algorithms that call for multiple replications; for such replicated optimization, we show that the difference between an average solution and a compromise decision provides a natural stopping rule. Finally, our computational results cover a variety of instances from the literature, including a detailed study of SSN, a network planning instance known to be more challenging than other test instances in the literature.
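    The compromise-decision idea can be illustrated on a toy newsvendor with sampled replications (a minimal sketch under assumed data and a simplified aggregation; the paper's Stochastic Decomposition machinery is not reproduced): average the per-replication solutions, take as compromise decision the minimizer of the averaged sampled objectives, and read their distance as a stopping signal.

```python
import random

random.seed(1)
HOLD, BACKLOG = 1.0, 3.0                               # newsvendor unit costs (assumed)

def sampled_objective(x, demands):
    return sum(HOLD * max(x - d, 0) + BACKLOG * max(d - x, 0) for d in demands) / len(demands)

def minimize(objective, grid):
    return min(grid, key=objective)

REPLICATIONS, SAMPLE_SIZE = 10, 200
grid = [i * 0.5 for i in range(201)]                   # candidate decisions 0.0 .. 100.0

samples = [[random.gauss(60, 15) for _ in range(SAMPLE_SIZE)] for _ in range(REPLICATIONS)]
solutions = [minimize(lambda x, s=s: sampled_objective(x, s), grid) for s in samples]

average_solution = sum(solutions) / len(solutions)
# compromise decision: minimize the *average* of the replication objectives
compromise = minimize(lambda x: sum(sampled_objective(x, s) for s in samples) / len(samples), grid)

gap = abs(compromise - average_solution)
print(f"average solution    : {average_solution:.2f}")
print(f"compromise decision : {compromise:.2f}")
print(f"difference (stopping signal): {gap:.2f}")      # a small gap suggests the replications agree
```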

    The Decision Rule Approach to Optimisation under Uncertainty: Theory and Applications

    Optimisation under uncertainty has a long and distinguished history in operations research. Decision-makers realised early on that the failure to account for uncertainty in optimisation problems can lead to substantial unexpected losses or even infeasible solutions. In particular, approximating the uncertain parameters by their average or nominal values may result in decisions that perform poorly in scenarios that deviate from the average. For the last sixty years, scenario tree-based stochastic programming has been the method of choice for solving optimisation problems affected by parameter uncertainty. This method approximates the random problem parameters by finite scenarios that can be arranged as a tree. Unfortunately, this approximation suffers from a curse of dimensionality: the tree needs to branch whenever new uncertainties are revealed, and thus its size grows exponentially with the number of decision stages. It has recently been argued that stochastic programs can quite generally be made tractable by restricting the space of recourse decisions to those that exhibit a linear data dependence. An attractive feature of this linear decision rule approximation is that it typically leads to polynomial-time solution schemes. Unfortunately, the simple structure of linear decision rules sacrifices optimality in return for scalability, and their worst-case performance is in fact rather disappointing: when applied to two-stage robust optimisation problems with m linear constraints, linear decision rules have been shown to incur a worst-case approximation ratio of the order O(√m). Therefore, in this thesis we endeavour to construct efficiently computable instance-wise bounds on the loss of optimality incurred by the linear decision rule approximation. The contributions of this thesis are as follows.
    (i) We develop an efficient algorithm for assessing the loss of optimality incurred by the linear decision rule approximation. The key idea is to apply the linear decision rule restriction not only to the primal but also to a dual version of the stochastic program. Since both problems share a similar structure, both can be solved in polynomial time. The gap between their optimal values estimates the loss of optimality incurred by the linear decision rule approximation.
    (ii) We design an improved approximation based on non-linear decision rules, which can be useful if the optimality gap of the linear decision rules is deemed unacceptably high. The idea takes advantage of the fact that one can always map a linearly parameterised non-linear function into a higher-dimensional space, where it can be represented as a linear function. This allows us to utilise the machinery developed for linear decision rules to produce superior-quality approximations that can be obtained in polynomial time.
    (iii) We assess the performance of the approximations developed in two operations management problems: a production planning problem and a supply chain design problem. We show that near-optimal solutions can be found in problem instances with many stages and random parameters. We additionally compare the quality of the decision rule approximation with classical approximation techniques.
    (iv) We develop a systematic approach to reformulate multi-stage stochastic programs with a large (possibly infinite) number of stages as static robust optimisation problems that can be solved with a constraint sampling technique. The method is motivated via an investment planning problem in the electricity industry.
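    A small numerical sketch of the linear decision rule restriction and its optimality gap follows (assumed data; the thesis bounds this gap with a dual decision rule rather than the brute-force comparison used here): on a toy two-stage problem with a few scenarios, the fully adaptive recourse can be computed exactly and compared with the best affine rule y(xi) = y0 + y1*xi found by grid search.

```python
SCENARIOS = [2.0, 5.0, 8.0, 11.0]      # equally likely realisations of the uncertainty xi (assumed)
C_FIRST, C_RECOURSE = 1.0, 3.0         # first-stage and recourse unit costs (assumed)

def adaptive_cost(x):
    # fully adaptive recourse: y(xi) = max(xi - x, 0) is chosen after observing xi
    return C_FIRST * x + C_RECOURSE * sum(max(xi - x, 0) for xi in SCENARIOS) / len(SCENARIOS)

def ldr_cost(x, y0, y1):
    # recourse restricted to the affine rule y(xi) = y0 + y1 * xi,
    # which must satisfy y(xi) >= xi - x and y(xi) >= 0 for every scenario
    ys = [y0 + y1 * xi for xi in SCENARIOS]
    if any(y < xi - x - 1e-9 or y < -1e-9 for y, xi in zip(ys, SCENARIOS)):
        return float("inf")
    return C_FIRST * x + C_RECOURSE * sum(ys) / len(ys)

x_grid = [i * 0.25 for i in range(61)]                  # crude grid search, 0.0 .. 15.0
y1_grid = [j * 0.05 for j in range(21)]                 # slopes 0.0 .. 1.0

best_adaptive = min(adaptive_cost(x) for x in x_grid)
best_ldr = min(ldr_cost(x, y0, y1) for x in x_grid for y0 in x_grid for y1 in y1_grid)

print(f"fully adaptive cost : {best_adaptive:.3f}")
print(f"linear decision rule: {best_ldr:.3f}")          # >= adaptive; the difference is the LDR gap
```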