952 research outputs found
Analysis of switched and hybrid systems - beyond piecewise quadratic methods
This paper presents a method for stability analysis of switched and hybrid systems using polynomial and piecewise polynomial Lyapunov functions. Computation of such functions can be performed using convex optimization, based on the sum of squares decomposition of multivariate polynomials. The analysis yields several improvements over previous methods and opens up new possibilities, including the possibility of treating nonlinear vector fields and/or switching surfaces and parametric robustness analysis in a unified way.
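As a minimal illustration of the machinery this line of work generalizes (and not the paper's SOS method itself): for a linear system x' = Ax, a quadratic Lyapunov function V(x) = x^T P x can be computed by plain linear algebra; the polynomial and piecewise-polynomial cases replace this step with semidefinite programming. The matrix A below is an arbitrary stable example chosen for the sketch.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A Hurwitz (stable) example matrix; any matrix with eigenvalues in the
# open left half-plane would do here.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])

# Solve the Lyapunov equation A^T P + P A = -Q with Q = I.
# scipy's solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so we pass a = A^T and q = -I.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# P must be symmetric positive definite for V(x) = x^T P x to certify
# exponential stability.
print(np.linalg.eigvalsh(P))                 # all strictly positive
print(np.allclose(A.T @ P + P @ A, -Q))      # True
```

For switched systems and polynomial vector fields no such closed-form solve exists, which is what motivates the convex (SOS) formulation in the abstract.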
A Converse Sum of Squares Lyapunov Result with a Degree Bound
Sum of Squares programming has been used extensively over the past decade for
the stability analysis of nonlinear systems but several questions remain
unanswered. In this paper, we show that exponential stability of a polynomial
vector field on a bounded set implies the existence of a Lyapunov function
which is a sum-of-squares of polynomials. In particular, the main result states
that if a system is exponentially stable on a bounded nonempty set, then there
exists an SOS Lyapunov function which is exponentially decreasing on that
bounded set. The proof is constructive and uses the Picard iteration. A bound
on the degree of this converse Lyapunov function is also given. This result
implies that semidefinite programming can be used to answer the question of
stability of a polynomial vector field with a bound on complexity.
Semi-definite programming and functional inequalities for Distributed Parameter Systems
We study one-dimensional integral inequalities, with quadratic integrands, on
bounded domains. Conditions for these inequalities to hold are formulated in
terms of function matrix inequalities which must hold in the domain of
integration. For the case of polynomial function matrices, sufficient
conditions for positivity of the matrix inequality and, therefore, for the
integral inequalities are cast as semi-definite programs. The inequalities are
used to study stability of linear partial differential equations.
Comment: 8 pages, 5 figures
A Convex Approach to Hydrodynamic Analysis
We study stability and input-state analysis of three-dimensional (3D)
incompressible, viscous flows with invariance in one direction. By taking
advantage of this invariance property, we propose a class of Lyapunov and
storage functionals. We then consider exponential stability, induced L2-norms,
and input-to-state stability (ISS). For streamwise constant flows, we formulate
conditions based on matrix inequalities. We show that in the case of polynomial
laminar flow profiles the matrix inequalities can be checked via convex
optimization. The proposed method is illustrated by an example of rotating
Couette flow.
Comment: Preliminary version submitted to the 54th IEEE Conference on Decision
and Control, Dec. 15-18, 2015, Osaka, Japan
Block-Diagonal Solutions to Lyapunov Inequalities and Generalisations of Diagonal Dominance
Diagonally dominant matrices have many applications in systems and control
theory. Linear dynamical systems with scaled diagonally dominant drift
matrices, which include stable positive systems, allow for scalable stability
analysis. For example, it is known that Lyapunov inequalities for this class of
systems admit diagonal solutions. In this paper, we present an extension of
scaled diagonally dominance to block partitioned matrices. We show that our
definition describes matrices admitting block-diagonal solutions to Lyapunov
inequalities and that these solutions can be computed using linear algebraic
tools. We also show how in some cases the Lyapunov inequalities can be
decoupled into a set of lower dimensional linear matrix inequalities, thus
leading to improved scalability. We conclude by illustrating some advantages
and limitations of our results with numerical examples.
Comment: 6 pages, to appear in Proceedings of the Conference on Decision and
Control 201
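Scaled diagonal dominance, which this paper extends to block-partitioned matrices, can be tested by linear programming rather than semidefinite programming. A small sketch of the scalar case (the example matrix is illustrative, not from the paper): we look for a positive scaling d with |a_ii| d_i > sum over j != i of |a_ij| d_j, which makes D A D diagonally dominant for D = diag(d).

```python
import numpy as np
from scipy.optimize import linprog

def is_scaled_diagonally_dominant(A, eps=1e-9):
    """Feasibility LP: find d >= 1 with
    |a_ii| d_i >= sum_{j != i} |a_ij| d_j + eps for every row i."""
    n = A.shape[0]
    # Row i of M encodes -|a_ii| d_i + sum_{j != i} |a_ij| d_j.
    M = np.abs(A) - 2 * np.diag(np.abs(np.diag(A)))
    res = linprog(c=np.zeros(n), A_ub=M, b_ub=-eps * np.ones(n),
                  bounds=[(1, None)] * n, method="highs")
    return res.success

# Not diagonally dominant as given (row 2: |4| < |5|), but scaled
# diagonally dominant, e.g. with d = (1, 3).
A = np.array([[10.0, 1.0],
              [5.0, 4.0]])
print(is_scaled_diagonally_dominant(A))  # True
```

The scalability claim in the abstract comes from exactly this: dominance-type certificates replace one large matrix inequality with cheap, decoupled conditions.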
Delineating Parameter Unidentifiabilities in Complex Models
Scientists use mathematical modelling to understand and predict the
properties of complex physical systems. In highly parameterised models there
often exist relationships between parameters over which model predictions are
identical, or nearly so. These are known as structural or practical
unidentifiabilities, respectively. They are hard to diagnose and make reliable
parameter estimation from data impossible. They furthermore imply the existence
of an underlying model simplification. We describe a scalable method for
detecting unidentifiabilities, and the functional relations defining them, for
generic models. This allows for model simplification, and appreciation of which
parameters (or functions thereof) cannot be estimated from data. Our algorithm
can identify features such as redundant mechanisms and fast timescale
subsystems, as well as the regimes in which such approximations are valid. We
base our algorithm on a novel quantification of regional parametric
sensitivity: multiscale sloppiness. Traditionally, the link between parametric
sensitivity and the conditioning of the parameter estimation problem is made
locally, through the Fisher Information Matrix. This is valid in the regime of
infinitesimal measurement uncertainty. We demonstrate the duality between
multiscale sloppiness and the geometry of confidence regions surrounding
parameter estimates made where measurement uncertainty is non-negligible.
Further theoretical relationships are provided linking multiscale sloppiness to
the Likelihood-ratio test. From this, we show that a local sensitivity analysis
(as typically done) is insufficient for determining the reliability of
parameter estimation, even with simple (non)linear systems. Our algorithm
provides a tractable alternative. We finally apply our methods to a
large-scale, benchmark Systems Biology model of NF-κB, uncovering
previously unknown unidentifiabilities.
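A minimal sketch of the local picture the abstract starts from (the Fisher Information Matrix), not of the paper's multiscale-sloppiness algorithm: in the toy model below the two parameters enter only through their sum, so the FIM is singular and its null eigenvector spans the structurally unidentifiable direction. The model is invented for illustration.

```python
import numpy as np

# Toy model y(t; a, b) = exp(-(a + b) t): the parameters appear only
# through a + b, so (a, b) is structurally unidentifiable.
t = np.linspace(0.1, 2.0, 20)
a, b = 0.5, 1.0

# Sensitivities dy/da and dy/db are identical, so the Jacobian has rank 1.
dyda = -t * np.exp(-(a + b) * t)
dydb = -t * np.exp(-(a + b) * t)
J = np.column_stack([dyda, dydb])

# Fisher Information Matrix under unit measurement noise: FIM = J^T J.
FIM = J.T @ J
eigvals = np.linalg.eigvalsh(FIM)
print(eigvals)  # smallest eigenvalue ~ 0: a flat direction a + b = const
# The corresponding eigenvector, proportional to (1, -1), is the
# unidentifiable direction: raising a and lowering b by the same amount
# leaves every prediction unchanged.
```

As the abstract argues, this local test is only valid for infinitesimal measurement uncertainty; the paper's multiscale sloppiness extends the diagnosis to finite uncertainty regimes.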
Sparse sum-of-squares (SOS) optimization: A bridge between DSOS/SDSOS and SOS optimization for sparse polynomials
Optimization over non-negative polynomials is fundamental for nonlinear
systems analysis and control. We investigate the relation between three
tractable relaxations for optimizing over sparse non-negative polynomials:
sparse sum-of-squares (SSOS) optimization, diagonally dominant sum-of-squares
(DSOS) optimization, and scaled diagonally dominant sum-of-squares (SDSOS)
optimization. We prove that the set of SSOS polynomials, an inner approximation
of the cone of SOS polynomials, strictly contains the spaces of sparse
DSOS/SDSOS polynomials. When applicable, therefore, SSOS optimization is less
conservative than its DSOS/SDSOS counterparts. Numerical results for
large-scale sparse polynomial optimization problems demonstrate this fact, and
also that SSOS optimization can be faster than DSOS/SDSOS methods despite
requiring the solution of semidefinite programs instead of less expensive
linear/second-order cone programs.
Comment: 9 pages, 3 figures
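The cheapest certificate in the DSOS/SDSOS/SSOS/SOS hierarchy can be shown in a few lines (an illustrative example, not from the paper): if a polynomial's Gram matrix is diagonally dominant with nonnegative diagonal, the matrix is PSD by Gershgorin's theorem, so the polynomial is a sum of squares, and the check is linear rather than semidefinite.

```python
import numpy as np

# p(x, y) = 2x^2 + 2xy + 2y^2 written as z^T Q z with z = (x, y).
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])

def is_dd(Q):
    """Diagonal dominance with nonnegative diagonal: a cheap (linear)
    sufficient condition for Q to be PSD, hence for z^T Q z to be SOS."""
    d = np.diag(Q)
    off = np.sum(np.abs(Q), axis=1) - np.abs(d)
    return bool(np.all(d >= off) and np.all(d >= 0))

print(is_dd(Q))                                 # True: p is DSOS, hence SOS
print(bool(np.all(np.linalg.eigvalsh(Q) >= 0)))  # True: Q is indeed PSD
```

SSOS optimization, the subject of the abstract, sits strictly between such dominance-based certificates and full SOS: it still requires semidefinite programs, but only over the small blocks dictated by the polynomial's sparsity pattern.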
Block Factor-width-two Matrices and Their Applications to Semidefinite and Sum-of-squares Optimization
Semidefinite and sum-of-squares (SOS) optimization are fundamental
computational tools in many areas, including linear and nonlinear systems
theory. However, the scale of problems that can be addressed reliably and
efficiently is still limited. In this paper, we introduce a new notion of
\emph{block factor-width-two matrices} and build a new hierarchy of inner and
outer approximations of the cone of positive semidefinite (PSD) matrices. This
notion is a block extension of the standard factor-width-two matrices, and
allows for an improved inner-approximation of the PSD cone. In the context of
SOS optimization, this leads to a block extension of the \emph{scaled
diagonally dominant sum-of-squares (SDSOS)} polynomials. By varying a matrix
partition, the notion of block factor-width-two matrices can balance a
trade-off between the computation scalability and solution quality for solving
semidefinite and SOS optimization. Numerical experiments on large-scale
instances confirm our theoretical findings.
Comment: 26 pages, 5 figures. Added a new section on the approximation quality
analysis using block factor-width-two matrices. Code is available through
https://github.com/zhengy09/SDPf
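A scalar sketch of the object this paper generalizes (the block version replaces the 2x2 scalar pieces with matrix blocks; the numbers below are illustrative): a matrix has factor width two exactly when it is a sum of PSD matrices each supported on one 2x2 principal submatrix.

```python
import numpy as np

# Build A = sum_k E_k^T M_k E_k, where each M_k is a 2x2 PSD block and
# E_k selects one pair of coordinates. Such an A has factor width two.
n = 3
pairs = [(0, 1), (0, 2), (1, 2)]
blocks = [np.array([[2.0, 1.0], [1.0, 2.0]])] * len(pairs)  # each PSD

A = np.zeros((n, n))
for (i, j), M in zip(pairs, blocks):
    E = np.zeros((2, n))
    E[0, i] = E[1, j] = 1.0  # picks out rows/columns i and j
    A += E.T @ M @ E

print(A)
print(bool(np.all(np.linalg.eigvalsh(A) >= 0)))  # True: a sum of PSD terms
```

Optimizing over such sums needs only many small 2x2 (or, in the block case, small-block) constraints instead of one large PSD constraint, which is the source of the scalability trade-off described in the abstract.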
PIETOOLS: A Matlab Toolbox for Manipulation and Optimization of Partial Integral Operators
In this paper, we present PIETOOLS, a MATLAB toolbox for the construction and
handling of Partial Integral (PI) operators. The toolbox introduces a new class
of MATLAB object, opvar, for which standard MATLAB matrix operation syntax
(e.g. +, *, ', etc.) is defined. PI operators are a generalization of bounded
linear operators on infinite-dimensional spaces that form a *-subalgebra with
two binary operations (addition and composition) on the space R x L2. These
operators frequently appear in analysis and control of infinite-dimensional
systems such as Partial Differential Equations (PDEs) and Time-Delay Systems
(TDS). Furthermore, PIETOOLS can: declare opvar decision variables, add
operator positivity constraints, declare an objective function, and solve the
resulting optimization problem using a syntax similar to the sdpvar class in
YALMIP. Use of the resulting Linear Operator Inequalities (LOIs) is
demonstrated on several examples, including stability analysis of a PDE,
bounding operator norms, and verifying integral inequalities. The result is
that PIETOOLS, packaged with SOSTOOLS and MULTIPOLY, offers a scalable,
user-friendly and computationally efficient toolbox for parsing, performing
algebraic operations, setting up and solving convex optimization problems on PI
operators.