Robust Stability Analysis of Nonlinear Hybrid Systems
We present a methodology for robust stability analysis of nonlinear hybrid systems, based on the algorithmic construction of polynomial and piecewise polynomial Lyapunov-like functions using convex optimization, in particular the sum of squares decomposition of multivariate polynomials. Several improvements over previous approaches are discussed, such as the unified treatment of polynomial switching surfaces and robust stability analysis for nonlinear hybrid systems.
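As a hedged illustration of the sum of squares decomposition this methodology builds on (not of the full hybrid-systems analysis), the following sketch certifies that a single univariate polynomial is SOS by searching for a positive semidefinite Gram matrix; it assumes Python with cvxpy and a default SDP solver, and the polynomial is chosen purely for illustration.

import cvxpy as cp

# Certify p(x) = x^4 - 2x^3 + 2x^2 - 6x + 9 is a sum of squares by finding
# a PSD Gram matrix Q with p(x) = z(x)^T Q z(x), where z(x) = [1, x, x^2].
# (Here p = (x^2 - x)^2 + (x - 3)^2, so a certificate exists.)
p = [9.0, -6.0, 2.0, -2.0, 1.0]        # coefficients, constant term first

Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                            # Gram matrix positive semidefinite
    Q[0, 0] == p[0],                   # match constant term
    2 * Q[0, 1] == p[1],               # match x term
    2 * Q[0, 2] + Q[1, 1] == p[2],     # match x^2 term
    2 * Q[1, 2] == p[3],               # match x^3 term
    Q[2, 2] == p[4],                   # match x^4 term
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                     # "optimal" means p is certified SOS

Lyapunov-type analyses replace the fixed coefficients above with affine functions of the unknown Lyapunov coefficients, so the same kind of semidefinite program searches for the certificate and the function simultaneously.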
Convex Chance Constrained Model Predictive Control
We consider the Chance Constrained Model Predictive Control problem for polynomial systems subject to disturbances. In this problem, we aim at finding an optimal control input for a given disturbed dynamical system that minimizes a given cost function subject to probabilistic constraints over a finite horizon. The control laws provided have a predefined (low) risk of not reaching the desired target set. Building on the theory of measures and moments, a sequence of finite-dimensional semidefinite programs is provided, whose solutions are shown to converge to the optimal solution of the original problem. Numerical examples are presented to illustrate the computational performance of the proposed approach.
Comment: This work has been submitted to the 55th IEEE Conference on Decision and Control.
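As a hedged sketch of the measures-and-moments machinery behind such hierarchies (not of the chance-constrained MPC formulation itself), the snippet below poses the lowest-order moment relaxation of a univariate polynomial minimization as a single semidefinite program; it assumes Python with cvxpy, and the polynomial is an illustrative choice.

import cvxpy as cp

# Relax "minimize p(x) = x^4 - 3x^2 + x over x" to a linear problem over the
# moments y_k = E[x^k] of a probability measure.  M[i, j] stands for y_{i+j},
# so M must be PSD, M[0, 0] = 1, and the Hankel structure gives M[0, 2] = M[1, 1].
M = cp.Variable((3, 3), symmetric=True)
constraints = [M >> 0, M[0, 0] == 1, M[0, 2] == M[1, 1]]

# Objective: E[p(x)] = y4 - 3*y2 + y1, written in terms of the moment matrix.
objective = cp.Minimize(M[2, 2] - 3 * M[1, 1] + M[0, 1])
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value)   # lower bound on min_x p(x); tight in this univariate case

Higher orders of the hierarchy enlarge the moment matrix and add localizing constraints for the (probabilistic) constraints, which is what produces the convergent sequence of semidefinite programs mentioned above.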
Controller Synthesis for Discrete-Time Polynomial Systems via Occupation Measures
In this paper, we design nonlinear state feedback controllers for
discrete-time polynomial dynamical systems via the occupation measure approach.
We propose the discrete-time controlled Liouville equation, and use it to
formulate the controller synthesis problem as an infinite-dimensional linear
programming problem on measures, which is then relaxed to finite-dimensional
semidefinite programming problems on moments of measures and their duals on
sums-of-squares polynomials. Nonlinear controllers can be extracted from the
solutions to the relaxed problems. The advantage of the occupation measure
approach is that we solve convex problems instead of generally non-convex
problems, and the computational complexity is polynomial in the state and input
dimensions, and hence the approach is more scalable. In addition, we show that
the approach can be applied to over-approximating the backward reachable set of
discrete-time autonomous polynomial systems and the controllable set of
discrete-time polynomial systems under known state feedback control laws. We
illustrate our approach on several dynamical systems.
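As a hedged note on the shape of the underlying linear program (following standard occupation-measure formulations; the paper's exact statement may differ), a discrete-time Liouville-type constraint links an initial distribution $\mu_0$, a terminal distribution $\mu_T$, and an occupation measure $\mu$ on state-input pairs: for every test function $v$,

$$\int_X v \, d\mu_T + \int_{X \times U} v(x) \, d\mu(x,u) = \int_X v \, d\mu_0 + \int_{X \times U} v(f(x,u)) \, d\mu(x,u),$$

which follows by summing the telescoping identity $v(x_{t+1}) - v(x_t)$ along trajectories of $x_{t+1} = f(x_t, u_t)$. Restricting $v$ to polynomials up to a fixed degree turns this linear constraint on measures into linear constraints on their moments, which is where the semidefinite relaxations enter.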
A Sums-of-Squares Extension of Policy Iterations
In order to address the imprecision often introduced by widening operators in static analysis, policy iteration based on min-computations recasts the characterization of the reachable value set of a program as an iterative computation of policies, starting from a post-fixpoint. Computing
each policy and the associated invariant relies on a sequence of numerical
optimizations. While the early research efforts relied on linear programming
(LP) to address linear properties of linear programs, the current state of the
art is still limited to the analysis of linear programs with at most quadratic
invariants, relying on semidefinite programming (SDP) solvers to compute
policies, and LP solvers to refine invariants.
We propose here to extend the class of programs considered through the use of
Sums-of-Squares (SOS) based optimization. Our approach enables the precise
analysis of switched systems with polynomial updates and guards. The analysis
presented has been implemented in Matlab and applied to existing programs from the system control literature, improving both the range of analyzable systems and the precision of previously handled ones.
Comment: 29 pages, 4 figures
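To give a concrete flavor of the numerical optimizations involved, the sketch below computes a quadratic invariant for a linear update, i.e. the degree-2 case that the pre-SOS state of the art already covers; it assumes Python with cvxpy, and the update matrix is an illustrative assumption. The SOS extension described above replaces this LMI with sum-of-squares constraints over polynomial templates, guards, and updates.

import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.2],
              [-0.1, 0.8]])            # hypothetical stable loop body x <- A x

# Search for P > 0 with A^T P A < P: every sublevel set {x : x^T P x <= c}
# is then an inductive invariant of the loop.
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P @ A - P << -eps * np.eye(2)]
cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()
print(P.value)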
Design of First-Order Optimization Algorithms via Sum-of-Squares Programming
In this paper, we propose a framework based on sum-of-squares programming to
design iterative first-order optimization algorithms for smooth and strongly
convex problems. Our starting point is to develop a polynomial matrix
inequality as a sufficient condition for exponential convergence of the
algorithm. The entries of this matrix are polynomial functions of the unknown
parameters (exponential decay rate, stepsize, momentum coefficient, etc.). We
then formulate a polynomial optimization problem in which the objective is to optimize
the exponential decay rate over the parameters of the algorithm. Finally, we
use sum-of-squares programming as a tractable relaxation of the proposed
polynomial optimization problem. We illustrate the utility of the proposed
framework by designing a first-order algorithm that shares the same structure
as Nesterov's accelerated gradient method.
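As a hedged, scaled-down sketch of the certification step, the snippet below fixes all algorithm parameters and checks a candidate decay rate for plain gradient descent on one-dimensional quadratics, where the decay condition becomes polynomial nonnegativity over an interval of curvatures and is certified by an SOS program with an interval multiplier; it assumes Python with cvxpy, and the values of m, L, alpha and rho are illustrative assumptions. In the framework above the parameters themselves remain decision variables, which is what makes the resulting matrix inequality polynomial rather than linear.

import cvxpy as cp

m, L = 1.0, 10.0
alpha = 2.0 / (m + L)      # classical stepsize for this curvature range
rho = 0.82                 # candidate rate, slightly above (L - m) / (L + m)

# For x_{k+1} = (1 - alpha*q) * x_k with curvature q in [m, L], the rate rho is
# valid iff p(q) := rho^2 - (1 - alpha*q)^2 >= 0 on [m, L].  Certificate:
#   p(q) = z(q)^T Q0 z(q) + s1 * (q - m) * (L - q),  z(q) = [1, q],
# with Q0 PSD and s1 >= 0 (an SOS certificate with an interval multiplier).
Q0 = cp.Variable((2, 2), symmetric=True)
s1 = cp.Variable(nonneg=True)

# Match coefficients of p(q) = (rho^2 - 1) + 2*alpha*q - alpha^2*q^2.
constraints = [
    Q0 >> 0,
    Q0[0, 0] - s1 * m * L == rho**2 - 1,        # constant term
    2 * Q0[0, 1] + s1 * (L + m) == 2 * alpha,   # q term
    Q0[1, 1] - s1 == -alpha**2,                 # q^2 term
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # "optimal" -> rho is a certified decay rate for this class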