Lower Bounds on Complexity of Lyapunov Functions for Switched Linear Systems
We show that for any positive integer d, there are families of switched
linear systems---in fixed dimension and defined by two matrices only---that are
stable under arbitrary switching but do not admit (i) a polynomial Lyapunov
function of degree ≤ d, or (ii) a polytopic Lyapunov function with ≤ d facets, or (iii) a piecewise quadratic Lyapunov function with ≤ d
pieces. This implies that there cannot be an upper bound on the size of the
linear and semidefinite programs that search for such stability certificates.
Several constructive and non-constructive arguments are presented which connect
our problem to known (and rather classical) results in the literature regarding
the finiteness conjecture, undecidability, and non-algebraicity of the joint
spectral radius. In particular, we show that existence of an extremal piecewise
algebraic Lyapunov function implies the finiteness property of the optimal
product, generalizing a result of Lagarias and Wang. As a corollary, we prove
that the finiteness property holds for sets of matrices with an extremal
Lyapunov function belonging to some of the most popular function classes in
controls.
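The size lower bounds concern the linear and semidefinite programs that search for such certificates. As a minimal illustration of what those programs certify, the numpy sketch below (the matrices are illustrative, not from the paper) checks a common quadratic Lyapunov function V(x) = xᵀPx for a pair of Hurwitz matrices:

```python
import numpy as np

# Two example Hurwitz matrices (chosen for illustration) that share
# the common quadratic Lyapunov function V(x) = x' P x with P = I.
A1 = np.array([[-1.0,  0.5],
               [-0.5, -1.0]])
A2 = np.array([[-2.0,  1.0],
               [-1.0, -0.5]])
P = np.eye(2)

def certifies(A, P, tol=1e-9):
    """Check the Lyapunov LMI  A'P + PA < 0  via its eigenvalues."""
    M = A.T @ P + P @ A
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < -tol))

# A single P certifying both matrices proves stability of the switched
# system x' = A_s x under arbitrary switching between A1 and A2.
print(certifies(A1, P) and certifies(A2, P))  # → True
```

The paper's point is that no fixed-size program of this kind suffices for all stable switched systems: the required degree, number of facets, or number of pieces can be forced arbitrarily high.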
Formal Synthesis of Lyapunov Neural Networks
We propose an automatic and formally sound method for synthesising Lyapunov
functions for the asymptotic stability of autonomous non-linear systems.
Traditional methods are either analytical, requiring manual effort, or
numerical, lacking formal soundness. Symbolic computational methods for
Lyapunov functions, which sit in between, give formal guarantees but are
typically semi-automatic because they rely on the user to provide appropriate
function templates. We propose a method that finds Lyapunov functions fully
automatically, using machine learning, while also providing formal
guarantees, using satisfiability modulo theories (SMT). We employ a
counterexample-guided approach where a numerical learner and a symbolic
verifier interact to construct provably correct Lyapunov neural networks
(LNNs). The learner trains a neural network that satisfies the Lyapunov
criteria for asymptotic stability over a set of samples; the verifier proves via
SMT solving that the criteria are satisfied over the whole domain, or else augments
the sample set with counterexamples. Our method supports neural networks with
polynomial activation functions and of multiple depths and widths, which display
wide learning capabilities. We demonstrate our method over several non-trivial
benchmarks and compare it favourably against a numerical optimisation-based
approach, a symbolic template-based approach, and a cognate LNN-based approach.
Our method synthesises Lyapunov functions faster and over wider spatial domains
than the alternatives, while providing equal or stronger guarantees.
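The learner/verifier interaction can be sketched in plain numpy. Note the assumptions: the system, the quadratic template, and the grid-search "verifier" below are illustrative stand-ins — the paper uses a neural network learner and an exact SMT verifier, not grid sampling:

```python
import numpy as np

# System: x' = -x - x^3 (scalar, globally asymptotically stable).
f = lambda x: -x - x**3

# Learner state: a candidate V(x) = c * x^2 with c > 0, plus samples.
samples = [1.0]
c = 1.0
for it in range(10):
    # Verifier stand-in: search a grid for a counterexample where the
    # Lyapunov conditions V > 0 or dV/dt < 0 fail (away from x = 0).
    grid = np.linspace(-5, 5, 2001)
    grid = grid[np.abs(grid) > 1e-6]
    V  = c * grid**2
    dV = 2 * c * grid * f(grid)      # dV/dt along trajectories
    bad = grid[(V <= 0) | (dV >= 0)]
    if bad.size == 0:
        break                         # verified over the grid
    samples.append(bad[0])            # augment the sample set
    c = abs(c) + 1.0                  # crude re-learning step
print("verified with V(x) = %.1f * x^2" % c)
```

Here dV/dt = -2c(x² + x⁴) < 0 for all x ≠ 0, so the first verification pass already succeeds; in general the loop alternates until the verifier finds no counterexample.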
Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
Comment: 18 pages
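The common thread of the relaxation hierarchies discussed in the paper is the conic program in standard form,

```latex
\min_{x \in \mathbb{R}^n} \; c^\top x
\quad \text{subject to} \quad Ax = b, \;\; x \in \mathcal{K},
```

where $\mathcal{K}$ is a closed convex cone: taking $\mathcal{K}$ to be the nonnegative orthant recovers linear programming, a product of second-order cones gives SOCP, and the cone of positive semidefinite matrices gives SDP.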
Linearly Solvable Stochastic Control Lyapunov Functions
This paper presents a new method for synthesizing stochastic control Lyapunov
functions for a class of nonlinear stochastic control systems. The technique
relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman
partial differential equation to a linear partial differential equation for a
class of problems with a particular constraint on the stochastic forcing. This
linear partial differential equation can then be relaxed to a linear
differential inclusion, allowing for relaxed solutions to be generated using
sum of squares programming. The resulting relaxed solutions are in fact
viscosity super/subsolutions, and by the maximum principle are pointwise upper
and lower bounds to the underlying value function, even for coarse polynomial
approximations. Furthermore, the pointwise upper bound is shown to be a
stochastic control Lyapunov function, yielding a method for generating
nonlinear controllers with pointwise bounded distance from the optimal cost
when using the optimal controller. These approximate solutions may be computed
with non-increasing error via a hierarchy of semidefinite optimization
problems. Finally, this paper develops a-priori bounds on trajectory
suboptimality when using these approximate value functions, as well as
demonstrates that these methods, and bounds, can be applied to a more general
class of nonlinear systems not obeying the constraint on stochastic forcing.
Simulated examples illustrate the methodology.
Comment: Published in the SIAM Journal on Control and Optimization
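The linearizing step is the standard exponential change of variables from the linearly-solvable control literature; the sketch below follows Todorov's formulation, and its signs and constants are not necessarily this paper's exact conventions. For dynamics $dx = f(x)\,dt + G(x)(u\,dt + d\omega)$ with cost rate $q(x) + \tfrac{1}{2}u^\top R u$, and with the noise covariance tied to the control cost by the constraint $\Sigma = \lambda\, G R^{-1} G^\top$, substituting $V = -\lambda \log \Psi$ into the nonlinear HJB equation yields a PDE that is linear in the desirability $\Psi$:

```latex
0 \;=\; -\frac{q(x)}{\lambda}\,\Psi
      \;+\; f(x)^\top \nabla\Psi
      \;+\; \tfrac{1}{2}\operatorname{tr}\!\bigl(\Sigma(x)\,\nabla^{2}\Psi\bigr).
```

It is this linear PDE that the paper relaxes to a linear differential inclusion and attacks with sum-of-squares programming.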
Analysis of robust neural networks for control
The prevalence of neural networks in many application areas is expanding at an increasing rate, with the potential to provide huge benefits across numerous sectors. However, one of the greatest shortcomings of a trained neural network is its sensitivity to adversarial attacks. It is becoming clear that providing robust guarantees on systems that use neural networks is very important, especially in safety-critical applications. However, quantifying their safety and robustness properties has proven challenging due to the non-linearities of the activation functions inside the neural network.
This thesis addresses this problem from many different perspectives. Firstly, we investigate the sparsity that arises in a recently proposed semidefinite programming framework to verify a fully connected feed-forward neural network. We reformulate and exploit the sparsity in the optimisation problem, showing a significant speed-up in computation. In addition, we approach the problem using polynomial optimisation and show that by using the Positivstellensatz, bounds on the robustness guarantees can be tightened significantly over other popular methods. We then reformulate this approach to simultaneously exploit the sparsity in the problem, whilst improving the accuracy.
Neural networks have also seen a recent increased use in feedback control systems. This is primarily because they have the potential to improve the performance of these systems compared to traditional controllers, due to their ability to act as general function approximators. However, since feedback systems are usually subject to external perturbations and neural networks are sensitive to small changes, providing robustness guarantees has proven challenging.
In this thesis, we analyse non-linear systems that contain neural network controllers. We first address this problem by computing outer-approximations of the reachable sets using sparse polynomial optimisation. We then use a Sum of Squares programming framework to certify the stability of these systems. Both of these approaches provide better robustness guarantees than existing methods. Finally, we extend these approaches to neural network controllers with rational activation functions. We then propose a method to recover a stabilising controller from a Sum of Squares program and apply it to a modified rational neural network controller.
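The semidefinite verification frameworks referenced above rest on the fact that the ReLU activation satisfies exact quadratic constraints, which the SDP then relaxes. A small numpy check of those constraints (the sample points are arbitrary; this illustrates the encoding, not the thesis's full pipeline):

```python
import numpy as np

# ReLU satisfies, exactly, for y = max(x, 0):
#   y >= 0,   y >= x,   y * (y - x) = 0.
# SDP verification of neural networks encodes these quadratic
# constraints per neuron and relaxes the result to a semidefinite
# program, avoiding enumeration of activation patterns.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = np.maximum(x, 0.0)  # ReLU

ok = (bool(np.all(y >= 0))
      and bool(np.all(y >= x))
      and bool(np.allclose(y * (y - x), 0.0)))
print(ok)  # → True
```

Because the constraints are quadratic equalities and inequalities rather than piecewise definitions, they fit directly into the Positivstellensatz and sparse-SDP machinery the thesis builds on.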
Learning Lyapunov-Stable Polynomial Dynamical Systems Through Imitation
Imitation learning is a paradigm to address complex motion planning problems
by learning a policy to imitate an expert's behavior. However, relying solely
on the expert's data might lead to unsafe actions when the robot deviates from
the demonstrated trajectories. Stability guarantees have previously been
provided utilizing nonlinear dynamical systems, acting as high-level motion
planners, in conjunction with the Lyapunov stability theorem. Yet, these
methods are prone to inaccurate policies, high computational cost, sample
inefficiency, or quasi-stability when replicating complex and highly nonlinear
trajectories. To mitigate this problem, we present an approach for learning a
globally stable nonlinear dynamical system as a motion planning policy. We
model the nonlinear dynamical system as a parametric polynomial and learn the
polynomial's coefficients jointly with a Lyapunov candidate. To showcase its
success, we compare our method against the state of the art in simulation and
conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our
experiments demonstrate the sample efficiency and reproduction accuracy of our
method for various expert trajectories, while remaining stable in the face of
perturbations.
Comment: In the 7th Annual Conference on Robot Learning, 2023
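A minimal sketch of the idea, using a degree-1 polynomial (i.e., linear) system and the fixed candidate V(x) = ‖x‖². Note the assumptions: the paper learns the polynomial coefficients jointly with the Lyapunov candidate, whereas here we fit by least squares and then crudely repair stability, and the "expert" system below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Expert demonstrations sampled from a stable system x' = A_true x.
A_true = np.array([[-0.5,  2.0],
                   [-2.0, -0.5]])
X = rng.standard_normal((2, 200))   # sampled states
Xdot = A_true @ X                   # expert velocities

# Learner: least-squares fit of the linear policy x' = A x.
A = Xdot @ np.linalg.pinv(X)

# Global stability w.r.t. V(x) = ||x||^2 requires A + A' < 0.
# If the fit violates this, shift the symmetric part (a crude
# stand-in for the joint optimization in the paper).
S = (A + A.T) / 2
lam = float(np.max(np.linalg.eigvalsh(S)))
if lam >= 0:
    A = A - (lam + 0.1) * np.eye(2)

print("max eig of symmetric part:",
      np.max(np.linalg.eigvalsh((A + A.T) / 2)))  # negative
```

With noiseless data the fit recovers A_true exactly and already satisfies the decrease condition; the repair step only fires for unstable fits, guaranteeing V̇ = xᵀ(A + Aᵀ)x < 0 for all x ≠ 0.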
Automated and Sound Synthesis of Lyapunov Functions with SMT Solvers
In this paper we employ SMT solvers to soundly synthesise Lyapunov functions
that assert the stability of a given dynamical model. The search for a Lyapunov
function is framed as the satisfiability of a second-order logical formula,
asking whether there exists a function satisfying a desired specification
(stability) for all possible initial conditions of the model. We synthesise
Lyapunov functions for linear, non-linear (polynomial), and for parametric
models. For non-linear models, the algorithm also determines a region of
validity for the Lyapunov function. We exploit an inductive framework to
synthesise Lyapunov functions, starting from parametric templates. The
inductive framework comprises two elements: a learner proposes a Lyapunov
function, and a verifier checks its validity; if the check fails, the failure
is expressed via a counterexample (a point in the state space), for further use by the learner.
Whilst the verifier uses the SMT solver Z3, thus ensuring the overall soundness
of the procedure, we examine two alternatives for the learner: a numerical
approach based on the optimisation tool Gurobi, and a sound approach based
again on Z3. The overall technique is evaluated over a broad set of benchmarks,
which shows that this methodology not only scales to 10-dimensional models
within reasonable computational time, but also offers a novel soundness proof
for the generated Lyapunov functions and their domains of validity.
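Schematically, the second-order satisfiability query has the shape (a simplified rendering; the paper's actual formula also constrains the region of validity):

```latex
\exists V \;\; \forall x \in D:\quad
V(0) = 0 \;\wedge\;
\bigl(x \neq 0 \;\rightarrow\; V(x) > 0 \;\wedge\; \nabla V(x) \cdot f(x) < 0\bigr).
```

With a parametric template $V_\theta$, the inductive loop reduces this $\exists\forall$ problem to alternating quantifier-free queries: the learner instantiates the $\exists$ over the current samples, and the verifier checks the $\forall$ by asking the SMT solver for a model of its negation, which is exactly a counterexample point.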