Approximate Dynamic Programming for Constrained Piecewise Affine Systems with Stability and Safety Guarantees
Infinite-horizon optimal control of constrained piecewise affine (PWA)
systems has been approximately addressed by hybrid model predictive control
(MPC), which, however, has computational limitations, both in offline design
and online implementation. In this paper, we consider an alternative approach
based on approximate dynamic programming (ADP), an important class of methods
in reinforcement learning. We accommodate non-convex union-of-polyhedra state
constraints and linear input constraints into ADP by designing PWA penalty
functions. A PWA function approximation is used, which admits a mixed-integer
encoding for implementing ADP. The main advantage of the proposed ADP method is its
online computational efficiency. In particular, we propose two control policies,
which lead to solving a smaller-scale mixed-integer linear program than
conventional hybrid MPC, or a single convex quadratic program, depending on
whether the policy is implicitly determined online or explicitly computed
offline. We characterize the stability and safety properties of the closed-loop
systems, as well as the sub-optimality of the proposed policies, by quantifying
the approximation errors of value functions and policies. We also develop an
offline mixed-integer linear programming-based method to certify the
reliability of the proposed method. Simulation results on an inverted pendulum
with elastic walls and on an adaptive cruise control problem validate the
control performance in terms of constraint satisfaction and CPU time.
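As a rough illustration of the implicitly determined policy, the sketch below evaluates a one-step lookahead with a non-convex PWA value-function approximation (a pointwise minimum of affine pieces), encoded as a small MILP via big-M constraints. This is not the paper's implementation: the dynamics, value pieces, big-M constant, and solver choice are all illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

# Illustrative data (not from the paper): active-mode dynamics at the
# current state and a non-convex PWA value approximation
# V_hat(x) = min_i (F[i] @ x + g[i]).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
F = np.array([[1.0, 0.5], [-1.0, 0.5], [0.2, -1.0]])
g = np.array([0.5, 0.5, 0.8])
M = 1e2   # big-M constant, assumed valid on the region of interest

def adp_policy(x0, u_max=1.0):
    u = cp.Variable(1)
    t = cp.Variable()                      # value of the selected affine piece
    d = cp.Variable(len(g), boolean=True)  # piece-selection binaries
    xp = A @ x0 + B @ u                    # successor state, affine in u
    cons = [cp.sum(d) == 1, cp.abs(u) <= u_max]
    for i in range(len(g)):
        # t must dominate piece i only when d[i] = 1; big-M deactivates the
        # others, so minimizing t selects the smallest piece at xp.
        cons.append(t >= F[i] @ xp + g[i] - M * (1 - d[i]))
    stage = cp.norm1(x0) + cp.norm1(u)     # illustrative 1-norm stage cost
    # Assumes a MILP-capable solver (e.g. GLPK_MI) is installed.
    cp.Problem(cp.Minimize(stage + t), cons).solve(solver=cp.GLPK_MI)
    return u.value

print(adp_policy(np.array([0.8, -0.3])))
```

In the paper's full setting the same style of mixed-integer encoding also covers the PWA dynamics and the union-of-polyhedra state constraints, while the explicitly computed offline policy reduces the online step to a single convex QP instead.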
Semidefinite Relaxations for Stochastic Optimal Control Policies
Recent results in the study of the Hamilton-Jacobi-Bellman (HJB) equation
have led to the discovery of a formulation of the value function as a linear
Partial Differential Equation (PDE) for stochastic nonlinear systems with a
mild constraint on their disturbances. This has yielded promising directions
for research in the planning and control of nonlinear systems. This work
proposes a new method for obtaining approximate solutions to these linear
stochastic optimal control (SOC) problems. A candidate polynomial with variable
coefficients is proposed as the solution to the SOC problem. A Sum of Squares
(SOS) relaxation is then applied to the partial differential constraints, leading
to a hierarchy of semidefinite relaxations with improving sub-optimality gap.
The resulting approximate solutions are shown to be guaranteed over- and
under-approximations of the optimal value function.

Comment: Preprint. Accepted to the American Control Conference (ACC) 2014 in Portland, Oregon. 7 pages.
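The computational core such a hierarchy reduces to is a semidefinite feasibility problem: a polynomial is certified nonnegative by exhibiting a positive semidefinite Gram matrix. A minimal sketch of that building block follows, using an illustrative univariate polynomial rather than an actual HJB residual from the paper.

```python
import cvxpy as cp

# Illustrative polynomial (not from the paper):
# p(x) = x^4 - 2x^3 + 3x^2 - 2x + 1 = (x^2 - x + 1)^2.
# p is SOS iff p(x) = z(x)^T Q z(x) for some PSD Q, with z(x) = [1, x, x^2].
Q = cp.Variable((3, 3), symmetric=True)
cons = [
    Q >> 0,                        # Gram matrix must be positive semidefinite
    Q[0, 0] == 1.0,                # coefficient of x^0
    2 * Q[0, 1] == -2.0,           # x^1
    2 * Q[0, 2] + Q[1, 1] == 3.0,  # x^2
    2 * Q[1, 2] == -2.0,           # x^3
    Q[2, 2] == 1.0,                # x^4
]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)          # any SDP-capable solver
print("SOS certificate found:", prob.status == cp.OPTIMAL)
```

Raising the degree of the candidate polynomial (and of the monomial basis z) enlarges the feasible set, which is what yields the hierarchy of semidefinite relaxations described in the abstract.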
Approximate Dynamic Programming via Sum of Squares Programming
We describe an approximate dynamic programming method for stochastic control
problems on infinite state and input spaces. The optimal value function is
approximated by a linear combination of basis functions with coefficients as
decision variables. By relaxing the Bellman equation to an inequality, one
obtains a linear program in the basis coefficients with an infinite set of
constraints. We show that a recently introduced method, which obtains convex
quadratic value function approximations, can be extended to higher order
polynomial approximations via sum of squares programming techniques. An
approximate value function can then be computed offline by solving a
semidefinite program, without having to sample the infinite constraint. The
policy is evaluated online by solving a polynomial optimization problem, which
also turns out to be convex in some cases. We experimentally validate the
method on an autonomous helicopter testbed using a 10-dimensional helicopter
model.

Comment: 7 pages, 5 figures. Submitted to the 2013 European Control Conference, Zurich, Switzerland.
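For intuition, the relaxed-Bellman-inequality linear program can be written explicitly on a small finite MDP, as in the sketch below with illustrative random data; the paper's point is precisely that, for infinite spaces, the infinite constraint set is handled by SOS/semidefinite programming rather than by the enumeration done here.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative random MDP (assumed data, not from the paper).
gamma = 0.95
n_s, n_a = 4, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_a, n_s))  # P[a, s, :] = transition probs
cost = rng.uniform(size=(n_a, n_s))               # stage cost c(s, a)
Phi = np.vstack([np.ones(n_s), np.arange(n_s), np.arange(n_s) ** 2]).T
mu = np.full(n_s, 1.0 / n_s)                      # state-relevance weights

# Bellman inequality (Phi @ theta)(s) <= c(s, a) + gamma * E[(Phi @ theta)(s')]
# for every (s, a): linear in the coefficients theta. Maximizing
# mu^T Phi theta gives the tightest value under-approximation in the
# span of the basis.
A_ub = np.vstack([Phi - gamma * P[a] @ Phi for a in range(n_a)])
b_ub = np.concatenate([cost[a] for a in range(n_a)])
res = linprog(c=-(mu @ Phi), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * Phi.shape[1])
print("approximate value function:", Phi @ res.x)
```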