Controller Synthesis for Discrete-Time Polynomial Systems via Occupation Measures
In this paper, we design nonlinear state feedback controllers for
discrete-time polynomial dynamical systems via the occupation measure approach.
We propose the discrete-time controlled Liouville equation, and use it to
formulate the controller synthesis problem as an infinite-dimensional linear
programming problem on measures, which is then relaxed to finite-dimensional
semidefinite programming problems on moments of measures and their duals on
sums-of-squares polynomials. Nonlinear controllers can be extracted from the
solutions to the relaxed problems. The advantage of the occupation measure
approach is that we solve convex problems instead of generally non-convex
ones, and that the computational complexity is polynomial in the state and
input dimensions, which makes the approach more scalable. In addition, we show that
the approach can be applied to over-approximating the backward reachable set of
discrete-time autonomous polynomial systems and the controllable set of
discrete-time polynomial systems under known state feedback control laws. We
illustrate our approach on several dynamical systems.
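The moment relaxation at the heart of this approach replaces a measure by its sequence of moments; a necessary condition for a sequence to come from a genuine measure is that its moment matrix be positive semidefinite. As a toy, library-free illustration (not the authors' SDP pipeline), the following sketch builds the moment matrix of a univariate empirical measure and checks this condition:

```python
import numpy as np

# Empirical measure: uniform atoms at chosen support points (illustrative data).
atoms = np.array([-0.5, 0.1, 0.7])

# Moments m_k = (1/N) * sum_i x_i**k for k = 0..4 (a degree-2 relaxation).
moments = np.array([np.mean(atoms**k) for k in range(5)])

# Hankel moment matrix M[i, j] = m_{i+j}, i, j = 0..2.
M = np.array([[moments[i + j] for j in range(3)] for i in range(3)])

# Moments of a genuine (nonnegative) measure always yield a PSD moment matrix:
# M = (1/N) * sum_i v(x_i) v(x_i)^T with v(x) = [1, x, x**2].
eigenvalues = np.linalg.eigvalsh(M)
print(eigenvalues.min() >= -1e-12)  # True
```

The semidefinite relaxations in the paper optimize over such moment sequences directly, with PSD moment (and localizing) matrices as the constraints.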
Convex computation of the region of attraction of polynomial control systems
We address the long-standing problem of computing the region of attraction
(ROA) of a target set (e.g., a neighborhood of an equilibrium point) of a
controlled nonlinear system with polynomial dynamics and semialgebraic state
and input constraints. We show that the ROA can be computed by solving an
infinite-dimensional convex linear programming (LP) problem over the space of
measures. In turn, this problem can be solved approximately via a classical
converging hierarchy of convex finite-dimensional linear matrix inequalities
(LMIs). Our approach is genuinely primal in the sense that convexity of the
problem of computing the ROA is an outcome of optimizing directly over system
trajectories. The dual infinite-dimensional LP on nonnegative continuous
functions (approximated by polynomial sum-of-squares) allows us to generate a
hierarchy of semialgebraic outer approximations of the ROA at the price of
solving a sequence of LMI problems with asymptotically vanishing conservatism.
This sharply contrasts with the existing literature which follows an
exclusively dual Lyapunov approach yielding either nonconvex bilinear matrix
inequalities or conservative LMI conditions. The approach is simple and readily
applicable as the outer approximations are the outcome of a single semidefinite
program with no additional data required besides the problem description.
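An outer approximation of the ROA, by definition, must contain every initial condition whose trajectory reaches the target set. As a rough numerical illustration (not the paper's LMI machinery), one can sanity-check a candidate outer approximation on a toy scalar system by simulating sampled trajectories; the dynamics and the candidate set below are hypothetical choices for the demo:

```python
import numpy as np

# Toy dynamics xdot = x**3 - x: the true ROA of the origin is (-1, 1).
def flow(x0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x = np.clip(x + dt * (x**3 - x), -10.0, 10.0)  # clip to avoid overflow
    return x

samples = np.linspace(-1.5, 1.5, 61)
in_roa = np.array([abs(flow(x0)) < 1e-3 for x0 in samples])

# A valid outer approximation must contain every sampled ROA point.
outer = np.abs(samples) <= 1.2  # hypothetical candidate set {|x| <= 1.2}
print(np.all(outer[in_roa]))  # True
```

The paper's hierarchy produces such semialgebraic outer sets directly from a single semidefinite program, with conservatism vanishing as the relaxation degree grows.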
Measures and LMI for impulsive optimal control with applications to space rendezvous problems
This paper shows how to find lower bounds on, and sometimes solve globally, a
large class of nonlinear optimal control problems with impulsive controls using
semi-definite programming (SDP). This is done by relaxing an optimal control
problem into a measure differential problem. The manipulation of the measures
by their moments reduces the problem to a convergent series of standard linear
matrix inequality (LMI) relaxations. After providing numerous academic
examples, we apply the method to the impulsive rendezvous of two orbiting
spacecraft. As the method provides lower bounds on the global infimum, global
optimality of the solutions can be guaranteed numerically by a posteriori
simulations, and we can simultaneously recover the optimal impulse times and
amplitudes by simple linear algebra.
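The "simple linear algebra" step refers to extracting an atomic measure from its moments: for finitely many impulses, the impulse times are eigenvalues of a Hankel matrix pencil, and the amplitudes follow from a Vandermonde system. A minimal sketch with assumed two-impulse data (not the paper's rendezvous example):

```python
import numpy as np

# Atomic measure: impulses of amplitude w_i at time t_i (hypothetical values).
t = np.array([0.2, 0.8])   # impulse times
w = np.array([1.5, 0.5])   # impulse amplitudes

# Moments m_k = sum_i w_i * t_i**k, as an LMI relaxation would deliver them.
m = np.array([np.sum(w * t**k) for k in range(4)])

# Impulse times = eigenvalues of the Hankel pencil H1 x = lam * H0 x.
H0 = np.array([[m[0], m[1]], [m[1], m[2]]])
H1 = np.array([[m[1], m[2]], [m[2], m[3]]])
times = np.sort(np.linalg.eigvals(np.linalg.solve(H0, H1)).real)

# Amplitudes from the Vandermonde system V^T a = (m_0, m_1).
V = np.vander(times, 2, increasing=True)  # rows [1, t_i]
amps = np.linalg.solve(V.T, m[:2])

print(times, amps)  # recovers approximately [0.2, 0.8] and [1.5, 0.5]
```

When the relaxation is exact, this recovery certifies global optimality of the extracted impulsive control.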
Domain Decomposition for Stochastic Optimal Control
This work proposes a method for solving linear stochastic optimal control
(SOC) problems using sum of squares and semidefinite programming. Previous work
had used polynomial optimization to approximate the value function, requiring a
high polynomial degree to capture local phenomena. To improve the scalability
of the method to problems of interest, a domain decomposition scheme is
presented. By using local approximations, lower degree polynomials become
sufficient, and both local and global properties of the value function are
captured. The domain of the problem is split into a non-overlapping partition,
with added constraints ensuring continuity. The Alternating Direction
Method of Multipliers (ADMM) is used to optimize over each domain in parallel
and ensure convergence on the boundaries of the partitions. This results in
improved conditioning of the problem and allows for much larger and more
complex problems to be addressed with improved performance.
Comment: 8 pages. Accepted to CDC 201
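ADMM alternates between minimizing each subproblem separately and updating a multiplier that enforces agreement across subdomain boundaries. A minimal scalar consensus sketch (a generic illustration, not the paper's SOC formulation) with a hypothetical objective split across two "domains":

```python
# Toy consensus problem: minimize (x - 1)**2 + (z - 3)**2 subject to x == z.
# Each "domain" minimizes its own term; ADMM enforces agreement (continuity).
rho, x, z, u = 1.0, 0.0, 0.0, 0.0
for _ in range(200):
    x = (2 * 1 + rho * (z - u)) / (2 + rho)  # argmin_x (x-1)^2 + (rho/2)(x-z+u)^2
    z = (2 * 3 + rho * (x + u)) / (2 + rho)  # argmin_z (z-3)^2 + (rho/2)(x-z+u)^2
    u = u + x - z                            # scaled dual (multiplier) update

print(round(x, 6), round(z, 6))  # both converge to the consensus value 2.0
```

In the paper's setting the per-domain minimizations are themselves sum-of-squares programs over local polynomial approximations of the value function, solved in parallel.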
Linear Hamilton Jacobi Bellman Equations in High Dimensions
The Hamilton Jacobi Bellman Equation (HJB) provides the globally optimal
solution to large classes of control problems. Unfortunately, this generality
comes at a price: the calculation of such solutions is typically intractable
for systems with more than a moderate state space size due to the curse of
dimensionality. This work combines recent results on the structure of the HJB,
and its reduction to a linear Partial Differential Equation (PDE), with methods
based on low-rank tensor representations, known as separated representations,
to address the curse of dimensionality. The result is an algorithm to solve
optimal control problems which scales linearly with the number of states in a
system, and is applicable to systems that are nonlinear with stochastic forcing
in finite-horizon, average cost, and first-exit settings. The method is
demonstrated on inverted pendulum, VTOL aircraft, and quadcopter models, with
system dimensions two, six, and twelve, respectively.
Comment: 8 pages. Accepted to CDC 201
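A separated representation stores a multivariate function as a sum of products of univariate factors, so storage grows linearly rather than exponentially in dimension. A two-dimensional sketch (a generic illustration with an assumed separable function, not the paper's solver):

```python
import numpy as np

# Sample a separable "value function" V(x, y) = exp(-x**2) * cos(y) on a grid.
x = np.linspace(-1, 1, 50)
y = np.linspace(-1, 1, 60)
V = np.outer(np.exp(-x**2), np.cos(y))  # 50 x 60 grid: 3000 stored values

# A separated (low-rank) representation keeps two factor vectors instead
# (50 + 60 = 110 numbers); here the SVD recovers it exactly.
U, s, Vt = np.linalg.svd(V, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])

print(np.allclose(rank1, V))  # True: a separable function is exactly rank 1
```

In d dimensions the same idea replaces an n**d grid by sums of d factor vectors of length n, which is what lets the algorithm scale linearly with the number of states.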