Approximate Dynamic Programming via Sum of Squares Programming
We describe an approximate dynamic programming method for stochastic control
problems on infinite state and input spaces. The optimal value function is
approximated by a linear combination of basis functions with coefficients as
decision variables. By relaxing the Bellman equation to an inequality, one
obtains a linear program in the basis coefficients with an infinite set of
constraints. We show that a recently introduced method, which obtains convex
quadratic value function approximations, can be extended to higher order
polynomial approximations via sum of squares programming techniques. An
approximate value function can then be computed offline by solving a
semidefinite program, without having to sample the infinite constraint set. The
policy is evaluated online by solving a polynomial optimization problem, which
also turns out to be convex in some cases. We experimentally validate the
method on an autonomous helicopter testbed using a 10-dimensional helicopter
model.
Comment: 7 pages, 5 figures. Submitted to the 2013 European Control Conference, Zurich, Switzerland.
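As a minimal sketch in generic notation (the symbols $\ell$, $f$, $w$, $\gamma$, $\phi_i$ below are illustrative assumptions, not taken from the paper), the construction is: approximate the value function as $\hat V(x) = \sum_i \alpha_i \phi_i(x)$ and relax the Bellman equation to the inequality
\[
  \hat V(x) \;\le\; \ell(x,u) + \gamma\, \mathbf{E}\big[\hat V\big(f(x,u,w)\big)\big] \qquad \text{for all } (x,u),
\]
which is linear in the coefficients $\alpha_i$ but imposes infinitely many constraints. For polynomial data, the inequality can be certified by requiring
\[
  \ell(x,u) + \gamma\, \mathbf{E}\big[\hat V\big(f(x,u,w)\big)\big] - \hat V(x) \;\in\; \Sigma[x,u],
\]
the cone of sum of squares polynomials in $(x,u)$, which yields a semidefinite program in $\alpha$. The online policy then solves $\min_u \big\{\ell(x,u) + \gamma\, \mathbf{E}[\hat V(f(x,u,w))]\big\}$ at the current state $x$.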
Time Blocks Decomposition of Multistage Stochastic Optimization Problems
Multistage stochastic optimization problems are, by essence, complex because
their solutions are indexed both by stages (time) and by uncertainties
(scenarios). Their large scale nature makes decomposition methods appealing. The
most common approaches are time decomposition (and state-based resolution
methods, like stochastic dynamic programming, in stochastic optimal control)
and scenario decomposition (like progressive hedging in stochastic
programming. We present a method to decompose multistage stochastic
optimization problems by time blocks, which covers both stochastic programming
and stochastic dynamic programming. Once a dynamic programming equation with
value functions defined on the history space (a history is a sequence of
uncertainties and controls) has been established, we provide conditions to reduce the
history using a compressed "state" variable. This reduction is done by time
blocks, that is, at stages that are not necessarily all the original unit
stages, and we prove a reduced dynamic programming equation. Then, we apply the
reduction method by time blocks to \emph{two time-scales} stochastic
optimization problems and to a novel class of so-called
\emph{decision-hazard-decision} problems, arising in many practical situations,
like in stock management. The \emph{time blocks decomposition} scheme is as
follows: we use dynamic programming at the slow time scale, where the slow time
scale noises are assumed to be stagewise independent, and we produce slow time
scale Bellman functions; then we use stochastic programming at the short time
scale, between two consecutive slow time steps, with the final short time scale
cost given by the slow time scale Bellman functions, and without assuming
stagewise independence for the short time scale noises.
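As a minimal sketch in generic notation (the reduced state $x_{t_k}$, block controls, block noises, and block cost $L_k$ below are illustrative assumptions, not the paper's), the reduced dynamic programming equation between two consecutive slow stages $t_k$ and $t_{k+1}$ reads
\[
  V_{t_k}(x_{t_k}) \;=\; \min_{u_{t_k},\dots,u_{t_{k+1}-1}} \mathbf{E}\Big[ L_k\big(x_{t_k}, u_{t_k:t_{k+1}-1}, w_{t_k+1:t_{k+1}}\big) + V_{t_{k+1}}\big(x_{t_{k+1}}\big) \Big],
\]
where the slow time scale Bellman function $V_{t_{k+1}}$ serves as the terminal cost of the inner short time scale stochastic program, and stagewise independence is required only across blocks, not within a block.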
