
    Absolute semi-deviation risk measure for ordering problem with transportation cost in Supply Chain

    We present a decomposition method for stochastic programs with 0-1 variables in the second stage under the absolute semi-deviation (ASD) risk measure. Traditional stochastic programming models are risk-neutral: only the expected second-stage cost is considered. A common approach to addressing risk is to include a dispersion statistic alongside the expected cost, weighted appropriately. Because they lack block-angular structure, stochastic programs with the ASD risk measure pose computational challenges. The proposed decomposition algorithm uses another risk measure, expected excess, to provide tighter bounds for ASD stochastic models. We perform a computational study on a supply chain replenishment problem and on standard knapsack instances. The results on the supply chain instances demonstrate the usefulness of the ASD risk measure in decision making under uncertainty, and the knapsack instances indicate that the proposed methodology outperforms a direct solver.
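
    For reference, the mean-risk objective sketched above is commonly written as follows; the weight \lambda, the first-stage cost c^\top x, and the recourse cost Q(x,\xi) are illustrative notation under standard definitions, not symbols taken from the abstract:

        % Absolute semi-deviation of the random recourse cost Q(x,\xi)
        \mathrm{ASD}[Q(x,\xi)] = \mathbb{E}\big[\big(Q(x,\xi) - \mathbb{E}[Q(x,\xi)]\big)_{+}\big]

        % Mean-risk objective: expected cost plus a weighted dispersion statistic, \lambda \ge 0
        \min_{x \in X} \; c^\top x + \mathbb{E}[Q(x,\xi)] + \lambda \, \mathrm{ASD}[Q(x,\xi)]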

    On conditional cuts for Stochastic Dual Dynamic Programming

    Multistage stochastic programs arise in many engineering applications whenever a set of inventories or stocks has to be valued. Such is the case in seasonal storage valuation of a set of cascaded reservoir chains in hydro management. A popular method is Stochastic Dual Dynamic Programming (SDDP), especially when the dimensionality of the problem is large and dynamic programming is no longer an option. The usual assumption of SDDP is that uncertainty is stage-wise independent, which is highly restrictive from a practical viewpoint. When possible, the usual remedy is to enlarge the state space to account for some degree of dependency. In applications this may not be possible, or it may increase the state space by too much. In this paper we present an alternative based on keeping a functional dependency in the SDDP cuts, related to the conditional expectations in the dynamic programming equations. Our method is based on a popular methodology in mathematical finance, where it has progressively replaced scenario trees due to superior numerical performance. On a set of numerical examples, we likewise show the benefit of this way of handling dependency in the uncertainty when combined with SDDP. Our method is readily available in the open-source software package StOpt.
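
    To make the role of the conditional expectations concrete, the dynamic programming equations whose cuts carry the functional dependency can be written as below; the symbols (states x_t, noises \xi_t, stage costs c_t) are illustrative and not taken from the abstract:

        % With interstage-dependent noise, the cost-to-go depends on the current realization \xi_t
        Q_t(x_{t-1}, \xi_t) = \min_{x_t \in X_t(x_{t-1}, \xi_t)} \Big\{ c_t(x_t, \xi_t) + \mathbb{E}\big[ Q_{t+1}(x_t, \xi_{t+1}) \,\big|\, \xi_t \big] \Big\}

        % The cuts approximate the conditional expectation term, hence depend on the conditioning value \xi_t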

    Risk Aversion to Parameter Uncertainty in Markov Decision Processes with an Application to Slow-Onset Disaster Relief

    In classical Markov Decision Processes (MDPs), action costs and transition probabilities are assumed to be known, although an accurate estimation of these parameters is often not possible in practice. This study addresses MDPs under cost and transition probability uncertainty and aims to provide a mathematical framework for obtaining policies that minimize the risk of high long-term losses due to not knowing the true system parameters. To this end, we apply the value-at-risk measure to the expected performance of an MDP model with respect to parameter uncertainty. We provide mixed-integer linear and nonlinear programming formulations and heuristic algorithms for such risk-averse models of MDPs under a finite distribution of the uncertain parameters. Our proposed models and solution methods are illustrated on an inventory management problem for humanitarian relief operations during a slow-onset disaster. The results demonstrate the potential of our risk-averse modeling approach for reducing the risk of highly undesirable outcomes in uncertain/risky environments.
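
    As a rough illustration of the quantity being controlled, the sketch below (hypothetical names, not the paper's formulation or code) evaluates a fixed policy under each parameter scenario of a finite distribution and then takes the value-at-risk of the resulting expected costs:

        # Illustrative sketch: value-at-risk of an MDP policy's expected cost across a
        # finite set of parameter scenarios, each carrying a probability weight.
        import numpy as np

        def policy_expected_cost(costs, probs, policy, horizon, initial_state):
            """Expected total cost of a deterministic policy over a finite horizon.
            costs[s, a]     : immediate cost of action a in state s
            probs[s, a, s2] : transition probability to s2 (one parameter scenario)"""
            n_states = costs.shape[0]
            dist = np.zeros(n_states)
            dist[initial_state] = 1.0
            total = 0.0
            for _ in range(horizon):
                total += sum(dist[s] * costs[s, policy[s]] for s in range(n_states))
                dist = np.array([sum(dist[s] * probs[s, policy[s], s2] for s in range(n_states))
                                 for s2 in range(n_states)])
            return total

        def value_at_risk(values, scenario_probs, alpha):
            """Smallest threshold v such that P(value <= v) >= alpha."""
            order = np.argsort(values)
            cum = 0.0
            for i in order:
                cum += scenario_probs[i]
                if cum >= alpha:
                    return values[i]
            return values[order[-1]]

        # Usage: evaluate one candidate policy under every parameter scenario, then take VaR.
        # vals = [policy_expected_cost(c, p, policy, T, s0) for (c, p, w) in scenarios]
        # risk = value_at_risk(np.array(vals), np.array([w for (_, _, w) in scenarios]), alpha=0.95)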

    Multicut decomposition methods with cut selection for multistage stochastic programs

    We introduce a variant of Multicut Decomposition Algorithms (MuDA), called CuSMuDA (Cut Selection for Multicut Decomposition Algorithms), for solving multistage stochastic linear programs, which incorporates strategies to select the most relevant cuts of the approximate recourse functions. We prove the convergence of the method in a finite number of iterations and use it to solve six portfolio problems with direct transaction costs under return uncertainty and six inventory management problems under demand uncertainty. On all problem instances CuSMuDA is much quicker than MuDA: between 5.1 and 12.6 times quicker for the portfolio problems considered and between 6.4 and 15.7 times quicker for the inventory problems.

    DASC: a Decomposition Algorithm for multistage stochastic programs with Strongly Convex cost functions

    We introduce DASC, a decomposition method akin to Stochastic Dual Dynamic Programming (SDDP), which solves some multistage stochastic optimization problems having strongly convex cost functions. Similarly to SDDP, DASC approximates cost-to-go functions by a maximum of lower-bounding functions called cuts. However, contrary to SDDP, the cuts computed with DASC are quadratic functions. We also prove the convergence of DASC.
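
    The quadratic cuts mentioned above can be motivated by the standard lower bound available for a \mu-strongly convex cost-to-go function; the notation here (trial point \bar{x}, subgradient g) is illustrative and not taken from the abstract:

        % A \mu-strongly convex function admits a quadratic lower-bounding cut at any trial point \bar{x}
        Q_t(x) \;\geq\; Q_t(\bar{x}) + g^\top (x - \bar{x}) + \tfrac{\mu}{2}\,\lVert x - \bar{x}\rVert^2 \quad \text{for all } x

        % The affine (Benders) cuts of SDDP correspond to dropping the quadratic term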

    Single cut and multicut SDDP with cut selection for multistage stochastic linear programs: convergence proof and numerical experiments

    We introduce a variant of Multicut Decomposition Algorithms (MuDA), called CuSMuDA (Cut Selection for Multicut Decomposition Algorithms), for solving multistage stochastic linear programs, which incorporates a class of cut selection strategies to choose the most relevant cuts of the approximate recourse functions. This class contains the Level 1 and Limited Memory Level 1 cut selection strategies, initially introduced for Stochastic Dual Dynamic Programming (SDDP) and Dual Dynamic Programming (DDP), respectively. We prove the almost sure convergence of the method in a finite number of iterations and obtain as a by-product the almost sure convergence in a finite number of iterations of SDDP combined with our class of cut selection strategies. We compare the performance of MuDA, SDDP, and their variants with cut selection (using Level 1 and Limited Memory Level 1) on several instances of a portfolio problem and of an inventory problem. In these experiments, SDDP generally satisfies the stopping criterion more quickly than MuDA, and cut selection decreases the computational burden, with Limited Memory Level 1 being more efficient (sometimes much more so) than Level 1.
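
    As a rough illustration of what a Level-1-style rule does, the sketch below (one common description of the strategy, not necessarily the exact variant analyzed in the paper, and with hypothetical names) keeps only the affine cuts that attain the maximum of the cut approximation at at least one stored trial point:

        # Illustrative Level-1-style cut selection: a cut is kept if it dominates at some trial point.
        import numpy as np

        def select_cuts_level1(cuts, trial_points):
            """cuts: list of (alpha, beta) pairs, cut value alpha + beta @ x
            trial_points: list of state vectors visited by the forward passes
            Returns indices of cuts that are highest at at least one trial point."""
            keep = set()
            for x in trial_points:
                values = [alpha + np.dot(beta, x) for alpha, beta in cuts]
                keep.add(int(np.argmax(values)))  # the dominating cut at this point is kept
            return sorted(keep)

        # Usage: prune the cut list before re-solving the stage problems.
        # kept = select_cuts_level1(cuts, trial_points)
        # cuts = [cuts[i] for i in kept]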

    A Uniform-grid Discretization Algorithm for Stochastic Control with Risk Constraints

    In this paper, we present a discretization algorithm for the finite-horizon risk-constrained dynamic programming algorithm of [Chow_Pavone_13]. Although, from a theoretical standpoint, Bellman's recursion provides a systematic way to find optimal value functions and generate optimal history-dependent policies, there is a serious computational issue. Even if the state space and action space of this constrained stochastic optimal control problem are finite, the spaces of risk thresholds and feasible risk updates are closed, bounded subsets of the real numbers. This prohibits any direct application of the unconstrained finite-state iterative methods of dynamic programming found in [Bertsekas_05]. In order to approximate the Bellman operator derived in [Chow_Pavone_13], we discretize the continuous action spaces and formulate a finite-space approximation of the exact dynamic programming algorithm. We also prove that the approximation error of the optimal value functions is bounded linearly by the discretization step size. Finally, details of implementation and possible modifications are discussed.
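
    A minimal sketch of the uniform-grid idea, assuming the continuous risk thresholds live in a bounded interval [lo, hi] (function names are hypothetical, not the paper's code); the projection error of any threshold onto the grid is at most half the step size, which is what drives the linear error bound:

        # Uniform grid over a bounded interval and nearest-point projection onto it.
        import numpy as np

        def uniform_grid(lo, hi, step):
            # grid points lo, lo + step, ..., up to hi (inclusive)
            return np.arange(lo, hi + step / 2, step)

        def project_to_grid(r, lo, step, grid):
            # nearest grid point; |r - project_to_grid(r)| <= step / 2
            idx = int(round((r - lo) / step))
            return grid[min(max(idx, 0), len(grid) - 1)]

        # Usage:
        # grid = uniform_grid(0.0, 1.0, step=0.05)
        # r_hat = project_to_grid(0.273, 0.0, 0.05, grid)  # -> 0.25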

    Risk-Averse Approximate Dynamic Programming with Quantile-Based Risk Measures

    In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose data-driven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the "risky region" as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.
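
    For reference, the two quantile-based risk measures named above are commonly defined as follows for a random cost X at level \alpha (standard definitions, not notation taken from the abstract):

        % Value at risk: the \alpha-quantile of the cost distribution
        \mathrm{VaR}_\alpha(X) = \inf\{\, z \in \mathbb{R} : \mathbb{P}(X \le z) \ge \alpha \,\}

        % Conditional value at risk (Rockafellar--Uryasev form): expected cost in the upper (1-\alpha) tail
        \mathrm{CVaR}_\alpha(X) = \min_{z \in \mathbb{R}} \Big\{ z + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(X - z)_+\big] \Big\}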

    Convergence analysis of sampling-based decomposition methods for risk-averse multistage stochastic convex programs

    We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
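
    As a point of reference, the cuts that build the outer linearizations mentioned above take the generic Benders form below; the notation (trial point \hat{x}_{t-1}, intercept \theta_t, subgradient \beta_t) is illustrative, and the paper's formula specializes how these coefficients are computed in the risk-averse convex setting:

        % Affine cut generated at a trial point \hat{x}_{t-1}
        \mathcal{Q}_t(x_{t-1}) \;\geq\; \theta_t + \beta_t^\top \big(x_{t-1} - \hat{x}_{t-1}\big)

        % The cost-to-go approximation is the pointwise maximum of all such cuts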

    Optimal Pump Control for Water Distribution Networks via Data-based Distributional Robustness

    In this paper, we propose a data-based methodology to solve a multi-period stochastic optimal water flow (OWF) problem for water distribution networks (WDNs). The framework explicitly considers the pump schedule and water network head levels with limited information about demand forecast errors for an extended-period simulation. The objective is to determine the optimal feedback decisions of network-connected components, such as nominal pump schedules, tank head levels, and reserve policies, which specify device reactions to forecast errors so as to accommodate fluctuating water demand. Instead of assuming the uncertainties across the water network are generated by a prescribed distribution, we consider ambiguity sets of distributions centered at an empirical distribution built directly from a finite training data set. We use a distance-based ambiguity set with the Wasserstein metric to quantify the distance between the true, unknown data-generating distribution and the empirical distribution. This allows our multi-period OWF framework to trade off system performance against the inherent sampling errors in the training dataset. Case studies on a three-tank water distribution network systematically illustrate the tradeoff between pump operational cost, risk of constraint violation, and out-of-sample performance.
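
    The Wasserstein ambiguity set described above is commonly written in the following standard form (illustrative notation, not taken from the abstract), where \hat{P}_N is the empirical distribution of the N training samples \hat{\xi}^{(1)}, \dots, \hat{\xi}^{(N)} and \epsilon is the radius controlling the performance/robustness trade-off:

        % Ball of distributions within Wasserstein distance \epsilon of the empirical distribution
        \mathcal{P} = \big\{\, P : W\big(P, \hat{P}_N\big) \le \epsilon \,\big\},
        \qquad \hat{P}_N = \tfrac{1}{N} \sum_{i=1}^{N} \delta_{\hat{\xi}^{(i)}}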