DASC: a Decomposition Algorithm for multistage stochastic programs with Strongly Convex cost functions
We introduce DASC, a decomposition method akin to Stochastic Dual Dynamic
Programming (SDDP) which solves some multistage stochastic optimization
problems having strongly convex cost functions. Similarly to SDDP, DASC
approximates cost-to-go functions by a maximum of lower bounding functions
called cuts. However, contrary to SDDP, the cuts computed with DASC are
quadratic functions. We also prove the convergence of DASC.
Comment: arXiv admin note: text overlap with arXiv:1707.0081
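The quadratic-cut idea can be sketched in a few lines. This is an illustrative toy (the cost function, its strong convexity modulus `mu`, and the trial points are all hypothetical, not from the paper): at each trial point, strong convexity yields a quadratic lower-bounding cut, and, as in SDDP, the cost-to-go approximation is the pointwise maximum of the cuts collected so far.

```python
# Toy strongly convex stage cost (hypothetical, for illustration only):
# f(x) = x^2 + 1, with strong convexity modulus mu = 2.
f = lambda x: x ** 2 + 1.0
grad = lambda x: 2.0 * x
mu = 2.0

def quadratic_cut(xk):
    """Quadratic lower-bounding cut at trial point xk:
    q(x) = f(xk) + f'(xk) (x - xk) + (mu / 2) (x - xk)^2."""
    fk, gk = f(xk), grad(xk)
    return lambda x: fk + gk * (x - xk) + 0.5 * mu * (x - xk) ** 2

cuts = [quadratic_cut(xk) for xk in (-1.0, 0.5, 2.0)]

def lower_approx(x):
    # As in SDDP, the approximation is the pointwise maximum of the cuts,
    # but here each cut is quadratic rather than affine.
    return max(c(x) for c in cuts)

# Every cut lower-bounds f; for this toy f (whose curvature equals mu)
# each cut happens to be tight everywhere.
for x in (-2.0, 0.0, 1.0, 3.0):
    assert lower_approx(x) <= f(x) + 1e-9
```

Because a quadratic cut matches the curvature of the true cost, a max of quadratic cuts can hug a strongly convex cost-to-go function much more tightly than the piecewise-affine approximation built by standard SDDP.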
Dual Dynamic Programming with cut selection: convergence proof and numerical experiments
We consider convex optimization problems formulated using dynamic programming
equations. Such problems can be solved using the Dual Dynamic Programming
algorithm combined with the Level 1 cut selection strategy or the Territory
algorithm to select the most relevant Benders cuts. We propose a limited memory
variant of Level 1 and show the convergence of DDP combined with the Territory
algorithm, Level 1 or its variant for nonlinear optimization problems. In the
special case of linear programs, we show convergence in a finite number of
iterations. Numerical simulations illustrate the benefit of our variant and
show that it can be much quicker than a simplex algorithm on some large
instances of portfolio selection and inventory problems.
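The Level 1 selection rule described above admits a compact sketch (a minimal reconstruction, not the paper's implementation; the cut data are hypothetical): a cut is kept if it is the highest cut at at least one trial point visited so far. The limited memory variant additionally keeps only one cut per trial point when several are tied.

```python
# Level 1 cut selection (minimal sketch): keep a cut iff it attains the
# pointwise maximum at at least one previously visited trial point.
def level1_select(cuts, trial_points):
    """cuts: list of callables x -> cut value; returns indices of kept cuts."""
    keep = set()
    for x in trial_points:
        values = [c(x) for c in cuts]
        best = max(values)
        # Keep every cut attaining the maximum at this trial point
        # (the limited memory variant would keep only one of them).
        for i, v in enumerate(values):
            if v >= best - 1e-12:
                keep.add(i)
    return sorted(keep)

# Affine (Benders) cuts for f(x) = x^2 at points -1, 0, 1, plus a dominated cut.
cuts = [
    lambda x: -2 * x - 1,   # cut generated at x = -1
    lambda x: 0 * x + 0,    # cut generated at x = 0
    lambda x: 2 * x - 1,    # cut generated at x = 1
    lambda x: 0 * x - 5,    # dominated everywhere: never selected
]
print(level1_select(cuts, [-1.0, 0.0, 1.0]))  # → [0, 1, 2]
```

The dominated fourth cut is never the highest anywhere, so it is pruned; the three tight cuts each win at their own trial point and survive.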
Foundations of Multistage Stochastic Programming
Multistage stochastic optimization problems are often formulated informally in
a pathwise way. Such formulations are correct in a discrete setting and are
suitable, for example, when addressing computational challenges. But the
pathwise problem statement does not support a mathematically rigorous analysis
and is therefore not appropriate.
This paper addresses the foundations. We provide a novel formulation of
multistage stochastic optimization problems by involving adequate stochastic
processes as control. The fundamental contribution is a proof that there exist
measurable versions of intermediate value functions. Our proof builds on the
Kolmogorov continuity theorem.
A verification theorem is given in addition, and it is demonstrated that all
traditional problem specifications can be stated in the novel setting with
mathematical rigor. Further, we provide dynamic equations for the general
problem, which are developed for various problem classes. The problem classes
covered here include Markov decision processes, reinforcement learning, and
stochastic dual dynamic programming.
Multicut decomposition methods with cut selection for multistage stochastic programs
We introduce a variant of Multicut Decomposition Algorithms (MuDA), called
CuSMuDA (Cut Selection for Multicut Decomposition Algorithms), for solving
multistage stochastic linear programs that incorporates strategies to select
the most relevant cuts of the approximate recourse functions. We prove the
convergence of the method in a finite number of iterations and use it to solve
six portfolio problems with direct transaction costs under return uncertainty
and six inventory management problems under demand uncertainty. On all problem
instances CuSMuDA is much quicker than MuDA: between 5.1 and 12.6 times quicker
for the portfolio problems considered and between 6.4 and 15.7 times quicker
for the inventory problems.
A Composite Risk Measure Framework for Decision Making under Uncertainty
In this paper, we present a unified framework for decision making under
uncertainty. Our framework is based on the composite of two risk measures,
where the inner risk measure accounts for the risk of a decision given the
exact distribution of the uncertain model parameters, and the outer risk
measure quantifies the risk incurred when estimating the parameters of that
distribution.
We show that the model is tractable under mild conditions. The framework is a
generalization of several existing models, including stochastic programming,
robust optimization, distributionally robust optimization, etc. Using this
framework, we study a few new models which imply probabilistic guarantees for
solutions and yield less conservative results compared to traditional models.
Numerical experiments are performed on portfolio selection problems to
demonstrate the strength of our models.
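The composite structure can be sketched numerically (all names, data, and the choice of inner/outer measures here are hypothetical illustrations, not the paper's models): the inner measure is an expectation of the loss under each candidate distribution of the uncertain parameter, and the outer measure is a CVaR over the estimation uncertainty.

```python
import numpy as np

def cvar(losses, probs, alpha):
    """Conditional Value-at-Risk at level alpha of a discrete loss distribution:
    average of the worst alpha-fraction of the losses."""
    order = np.argsort(losses)[::-1]  # largest losses first
    tail, acc = 0.0, 0.0
    for i in order:
        take = min(probs[i], alpha - acc)
        tail += take * losses[i]
        acc += take
        if acc >= alpha - 1e-12:
            break
    return tail / alpha

# Hypothetical setup: the distribution of a model parameter is itself uncertain;
# we hold K candidate scenario sets (e.g. bootstrap resamples of the data).
rng = np.random.default_rng(0)
candidate_scenarios = [rng.normal(loc=m, scale=1.0, size=200) for m in (0.0, 0.5, 1.0)]

def composite_risk(decision_x, alpha=0.2):
    # Inner risk measure: expected loss under each candidate distribution.
    inner = np.array([np.mean((s - decision_x) ** 2) for s in candidate_scenarios])
    # Outer risk measure: CVaR over the estimation uncertainty (uniform weights).
    probs = np.full(len(inner), 1.0 / len(inner))
    return cvar(inner, probs, alpha)
```

Setting the outer measure to an expectation recovers a stochastic-programming-style model, while pushing `alpha` toward zero makes the outer CVaR approach a worst-case (robust) treatment of the estimation uncertainty, which is the sense in which the framework interpolates between the existing model classes listed above.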
Risk Neutral Reformulation Approach to Risk Averse Stochastic Programming
The aim of this paper is to show that in some cases risk averse multistage
stochastic programming problems can be reformulated in a form of risk neutral
setting. This is achieved by a change of the reference probability measure
making "bad" (extreme) scenarios more frequent. As a numerical example we
demonstrate the advantages of such a change-of-measure approach applied to the
Brazilian Interconnected Power System operation planning problem.
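The change-of-measure idea can be illustrated on a discrete toy example (the numbers and the specific risk measure are hypothetical, chosen only to make the identity checkable): a mean-CVaR objective under the nominal probabilities equals a plain expectation under reweighted probabilities that put more mass on the bad scenarios.

```python
import numpy as np

# Discrete scenarios with costs and nominal probabilities (toy data).
costs = np.array([10.0, 20.0, 30.0, 100.0])
p = np.array([0.4, 0.3, 0.2, 0.1])

# Risk-averse objective: (1 - lam) * E[Z] + lam * CVaR_alpha(Z).
lam, alpha = 0.5, 0.1

# In this discrete case the worst scenario carries exactly alpha = 0.1 mass,
# so CVaR_alpha concentrates all tail weight on it; the change of measure
# reweights the scenarios as q_i = (1 - lam) * p_i + lam * w_i.
w = np.array([0.0, 0.0, 0.0, 1.0])  # tail measure (worst scenario only)
q = (1 - lam) * p + lam * w         # new reference probabilities

risk_averse_value = (1 - lam) * p @ costs + lam * costs[3]  # CVaR = worst cost
risk_neutral_value = q @ costs                               # E_q[Z]
assert abs(risk_averse_value - risk_neutral_value) < 1e-9
```

Under `q`, the extreme scenario's probability has risen from 0.1 to 0.55, which is precisely the "bad scenarios made more frequent" effect: a risk-neutral solver run with `q` optimizes the original risk-averse objective.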
Inexact cuts in Deterministic and Stochastic Dual Dynamic Programming applied to linear optimization problems
We introduce an extension of Dual Dynamic Programming (DDP) to solve linear
dynamic programming equations. We call this extension IDDP-LP; it applies to
situations where some or all of the primal and dual subproblems arising along
the iterations of the method are solved with a bounded error (inexactly). We
provide convergence theorems both in the case when errors are bounded and for
asymptotically vanishing errors. We extend the analysis to stochastic linear
dynamic programming equations, introducing Inexact Stochastic Dual Dynamic
Programming for linear programs (ISDDP-LP), an inexact variant of SDDP applied
to linear programs corresponding to the situation where some or all problems to
be solved in the forward and backward passes of SDDP are solved approximately.
We also provide convergence theorems for ISDDP-LP for bounded and
asymptotically vanishing errors. Finally, we present the results of numerical
experiments comparing SDDP and ISDDP-LP on a portfolio problem with direct
transaction costs modelled as a multistage stochastic linear optimization
problem. In these experiments, ISDDP-LP obtains a good policy faster than
SDDP.
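The role of the error bound can be illustrated with a toy sketch (this is not the paper's exact cut construction; the value function, tolerance, and shift are illustrative assumptions): if the subproblem solver returns the stage value only to within a tolerance `eps`, shifting the cut intercept down restores a valid lower bound on the true value function.

```python
# Inexact cut sketch (illustrative): assume the returned value is within eps of
# the true optimum; shifting the intercept down by the error bound keeps the
# affine cut a valid lower bound.
def inexact_cut(approx_value, approx_subgradient, x_trial, shift):
    """Affine cut q(x) = (v_hat - shift) + g_hat * (x - x_trial)."""
    return lambda x: (approx_value - shift) + approx_subgradient * (x - x_trial)

# Toy check against a known convex value function V(x) = |x|.
V = abs
eps = 0.05
# Pretend the solver returned V(1) overestimated by eps, with subgradient 1;
# shifting by 2 * eps covers the worst-case value error in this toy setting.
cut = inexact_cut(V(1.0) + eps, 1.0, 1.0, 2 * eps)
for x in (-2.0, 0.0, 0.5, 3.0):
    assert cut(x) <= V(x) + 1e-12  # still a valid lower bound
```

With vanishing errors the shift vanishes too, which is the intuition behind the two convergence regimes (bounded versus asymptotically vanishing errors) analyzed in the paper.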
Convergence analysis of sampling-based decomposition methods for risk-averse multistage stochastic convex programs
We consider a class of sampling-based decomposition methods to solve
risk-averse multistage stochastic convex programs. We prove a formula for the
computation of the cuts necessary to build the outer linearizations of the
recourse functions. This formula can be used to obtain an efficient
implementation of Stochastic Dual Dynamic Programming applied to convex
nonlinear problems. We prove the almost sure convergence of these decomposition
methods when the relatively complete recourse assumption holds. We also prove
the almost sure convergence of these algorithms when applied to risk-averse
multistage stochastic linear programs that do not satisfy the relatively
complete recourse assumption. The analysis is first done assuming the
underlying stochastic process is interstage independent and discrete, with a
finite set of possible realizations at each stage. We then indicate two ways of
extending the methods and convergence analysis to the case when the process is
interstage dependent.
A Moment and Sum-of-Squares Extension of Dual Dynamic Programming with Application to Nonlinear Energy Storage Problems
We present a finite-horizon optimization algorithm that extends the
established concept of Dual Dynamic Programming (DDP) in two ways. First, in
contrast to the linear costs, dynamics, and constraints of standard DDP, we
consider problems in which all of these can be polynomial functions. Second, we
allow the state trajectory to be described by probability distributions rather
than point values, and return approximate value functions fitted to these. The
algorithm is in part an adaptation of sum-of-squares techniques used in the
approximate dynamic programming literature. It alternates between a forward
simulation through the horizon, in which the moments of the state distribution
are propagated through a succession of single-stage problems, and a backward
recursion, in which a new polynomial function is derived for each stage using
the moments of the state as fixed data. The value function approximation
returned for a given stage is the point-wise maximum of all polynomials derived
for that stage. This contrasts with the piecewise affine functions derived in
conventional DDP. We prove key convergence properties of the new algorithm, and
validate it in simulation on two case studies related to the optimal operation
of energy storage devices with nonlinear characteristics. The first is a small
borehole storage problem, for which multiple value function approximations can
be compared. The second is a larger problem, for which conventional discretized
dynamic programming is intractable.
Comment: 33 pages, 9 figures
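The value-function representation described above can be sketched directly (the polynomial coefficients here are hypothetical, chosen only for illustration): each backward pass adds one polynomial per stage, and the stored approximation is their pointwise maximum rather than a max of affine cuts.

```python
import numpy as np

# Sketch of the representation: the stage approximation is the pointwise
# maximum of the polynomials derived for that stage, in contrast with the
# pointwise maximum of affine cuts used in conventional DDP.
polys = [
    np.poly1d([1.0, -2.0, 0.0]),   # x^2 - 2x    (a polynomial cut)
    np.poly1d([0.5, 0.0, -1.0]),   # 0.5 x^2 - 1 (a polynomial cut)
    np.poly1d([0.0, 1.0, -3.0]),   # x - 3       (affine cuts remain a special case)
]

def value_approx(x):
    # Evaluating the approximation is just a max over the stored polynomials.
    return max(p(x) for p in polys)

print(value_approx(0.0), value_approx(2.0))  # → 0.0 1.0
```

The resulting approximation is piecewise polynomial, so it can capture curvature that a piecewise affine DDP approximation would need many cuts to imitate.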
Single cut and multicut SDDP with cut selection for multistage stochastic linear programs: convergence proof and numerical experiments
We introduce a variant of Multicut Decomposition Algorithms (MuDA), called
CuSMuDA (Cut Selection for Multicut Decomposition Algorithms), for solving
multistage stochastic linear programs that incorporates a class of cut
selection strategies to choose the most relevant cuts of the approximate
recourse functions. This class contains Level 1 and Limited Memory Level 1 cut
selection strategies, initially introduced for respectively Stochastic Dual
Dynamic Programming (SDDP) and Dual Dynamic Programming (DDP). We prove the
almost sure convergence of the method in a finite number of iterations and
obtain as a by-product the almost sure convergence in a finite number of
iterations of SDDP combined with our class of cut selection strategies. We
compare the performance of MuDA, SDDP, and their variants with cut selection
(using Level 1 and Limited Memory Level 1) on several instances of a portfolio
problem and of an inventory problem. In these experiments, SDDP is generally
quicker (i.e., satisfies the stopping criterion sooner) than MuDA, and cut
selection reduces the computational burden, with Limited Memory Level 1 being
more efficient (sometimes much more) than Level 1.
Comment: arXiv admin note: substantial text overlap with arXiv:1705.0897