Backstepping controller synthesis and characterizations of incremental stability
Incremental stability is a property of dynamical and control systems,
requiring the uniform asymptotic stability of every trajectory, rather than
that of an equilibrium point or a particular time-varying trajectory. Similarly
to stability, Lyapunov functions and contraction metrics play important roles
in the study of incremental stability. In this paper, we provide
characterizations and descriptions of incremental stability in terms of
existence of coordinate-invariant notions of incremental Lyapunov functions and
contraction metrics, respectively. Most design techniques providing controllers
rendering control systems incrementally stable have two main drawbacks: they
can only be applied to control systems in either parametric-strict-feedback or
strict-feedback form, and they require these control systems to be smooth. In
this paper, we propose a design technique that is applicable to larger classes
of (not necessarily smooth) control systems. Moreover, we propose a recursive
way of constructing contraction metrics (for smooth control systems) and
incremental Lyapunov functions which have been identified as a key tool
enabling the construction of finite abstractions of nonlinear control systems,
the approximation of stochastic hybrid systems, source-code model checking for
nonlinear dynamical systems and so on. The effectiveness of the proposed
results in this paper is illustrated by synthesizing a controller rendering a
non-smooth control system incrementally stable as well as constructing its
finite abstraction, using the computed incremental Lyapunov function.
Comment: 23 pages, 2 figures
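The core notion above can be sketched numerically. As a minimal, illustrative example (the system and all names below are assumptions, not taken from the paper): for a scalar system x' = -x - x^3 + u(t), the Jacobian -1 - 3x^2 is at most -1 everywhere, so V(x, y) = (x - y)^2 is an incremental Lyapunov function and any two trajectories driven by the same input converge to each other, not merely to an equilibrium.

```python
import math

# Hypothetical contracting system x' = -x - x^3 + u(t); its Jacobian
# -1 - 3x^2 <= -1, so V(x, y) = (x - y)^2 decreases along any pair of
# trajectories sharing the same input u.
def f(x, u):
    return -x - x**3 + u

def simulate(x0, T=5.0, dt=0.001):
    # Forward-Euler integration from x0 under the common input sin(t).
    x, t = x0, 0.0
    while t < T:
        x += dt * f(x, math.sin(t))
        t += dt
    return x

xa, xb = simulate(0.9), simulate(-0.5)
gap0 = abs(0.9 - (-0.5))   # initial distance between the trajectories
gap_T = abs(xa - xb)       # distance after time T
# The gap contracts at least like exp(-T), regardless of the input.
assert gap_T < 0.05 * gap0
```

Note that the contraction rate here comes from a uniform bound on the Jacobian; the abstract's point is that such certificates can be characterized coordinate-invariantly and constructed even for non-smooth systems.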
Stabilizing Randomly Switched Systems
This article is concerned with stability analysis and stabilization of
randomly switched systems under a class of switching signals. The switching
signal is modeled as a jump stochastic (not necessarily Markovian) process
independent of the system state; it selects, at each instant of time, the
active subsystem from a family of systems. Sufficient conditions for stochastic
stability (almost sure, in the mean, and in probability) of the switched system
are established when the subsystems do not possess control inputs, and not
every subsystem is required to be stable. These conditions are employed to
design stabilizing feedback controllers when the subsystems are affine in
control. The analysis is carried out with the aid of multiple Lyapunov-like
functions, and the analysis results together with universal formulae for
feedback stabilization of nonlinear systems constitute our primary tools for
control design.
Comment: 22 pages. Submitted
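The key phenomenon the abstract describes, stability even though not every subsystem is stable, can be sketched for a scalar example (all numbers below are illustrative assumptions, not from the paper): switching between a stable mode x' = -x and an unstable mode x' = 0.5x with i.i.d. exponential holding times yields an average Lyapunov exponent of (-1.0 + 0.5)/2 = -0.25 < 0, hence almost-sure convergence.

```python
import random

# Hypothetical scalar subsystems x' = a_i * x: mode 0 stable (a = -1.0),
# mode 1 unstable (a = +0.5). The switching signal is a jump stochastic
# process independent of the state: a fair coin picks the mode, and the
# holding time in each mode is Exp(1).
rng = random.Random(0)
rates = [-1.0, 0.5]

log_growth, elapsed = 0.0, 0.0
while elapsed < 2000.0:
    mode = rng.randrange(2)
    tau = rng.expovariate(1.0)       # random holding time in this mode
    log_growth += rates[mode] * tau  # log|x| changes linearly within a mode
    elapsed += tau

lyap_exponent = log_growth / elapsed
# Empirical exponent concentrates near -0.25 over a long horizon, so the
# switched system is almost surely stable despite the unstable mode.
assert lyap_exponent < -0.1
```

The sufficient conditions in the article play the role of this averaging argument in the general, non-scalar, non-Markovian setting, via multiple Lyapunov-like functions.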
Kuhn-Tucker-based stability conditions for systems with saturation
This paper presents a new approach to deriving stability conditions for continuous-time linear systems interconnected with a saturation. The method presented can be extended to handle a dead-zone or, in general, nonlinearities in the form of piecewise linear functions. By representing the saturation as a constrained optimization problem, the necessary (Kuhn-Tucker) conditions for optimality are used to derive linear and quadratic constraints that characterize the saturation. After selecting a candidate Lyapunov function, we pose the question of whether the Lyapunov function is decreasing along trajectories of the system as an implication between the necessary conditions derived from the saturation optimization and the time derivative of the Lyapunov function. This leads to stability conditions in terms of linear matrix inequalities, which are obtained by an application of the S-procedure to the implication. An example is provided where the proposed technique is compared and contrasted with previous analysis methods.
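The starting point of the abstract, viewing saturation as the solution of a constrained optimization problem, can be checked directly. As a minimal numeric sketch (the specific values are illustrative, not from the paper): sat(u) solves minimize (1/2)(v - u)^2 subject to -1 <= v <= 1, and the Kuhn-Tucker conditions of that QP yield exactly the kind of linear and quadratic constraints the paper exploits.

```python
# sat(u) as the solution of: minimize (1/2)*(v - u)^2 s.t. -1 <= v <= 1.
def sat(u, lo=-1.0, hi=1.0):
    return min(max(u, lo), hi)

for u in (-2.3, -0.4, 0.0, 0.7, 1.9):
    v = sat(u)
    # KKT multipliers for the upper and lower bound constraints:
    lam_hi = max(0.0, u - 1.0)   # active only when u > 1
    lam_lo = max(0.0, -1.0 - u)  # active only when u < -1
    # Stationarity of the Lagrangian: (v - u) + lam_hi - lam_lo = 0
    assert abs((v - u) + lam_hi - lam_lo) < 1e-12
    # Complementary slackness: each multiplier vanishes off its constraint
    assert abs(lam_hi * (v - 1.0)) < 1e-12
    assert abs(lam_lo * (v + 1.0)) < 1e-12
    # A quadratic constraint of the kind fed to the S-procedure:
    # the saturation lies in the sector [0, 1], i.e. sat(u)*(sat(u) - u) <= 0
    assert v * (v - u) <= 1e-12
```

Constraints like the final sector inequality, multiplied by nonnegative scalars and combined with the time derivative of a quadratic Lyapunov candidate, are what produce the linear matrix inequalities mentioned in the abstract.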
Learning Lyapunov (Potential) Functions from Counterexamples and Demonstrations
We present a technique for learning control Lyapunov (potential) functions,
which are used in turn to synthesize controllers for nonlinear dynamical
systems. The learning framework uses a demonstrator that implements a
black-box, untrusted strategy presumed to solve the problem of interest, a
learner that poses finitely many queries to the demonstrator to infer a
candidate function and a verifier that checks whether the current candidate is
a valid control Lyapunov function. The overall learning framework is iterative,
eliminating a set of candidates on each iteration using the counterexamples
discovered by the verifier and the demonstrations over these counterexamples.
We prove its convergence using ellipsoidal approximation techniques from convex
optimization. We also implement this scheme using nonlinear MPC controllers to
serve as demonstrators for a set of state and trajectory stabilization problems
for nonlinear dynamical systems. Our approach synthesizes relatively simple
polynomial control Lyapunov functions and, in the process, replaces the MPC
with a controller that carries guarantees and is computationally less
expensive.
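The learner/verifier loop described above can be sketched in miniature. In this simplified stand-in (the system, the finite candidate grid, and the sampling "verifier" are all assumptions for illustration; the paper's learner uses ellipsoidal cuts and its verifier is exact): the learner proposes a quadratic candidate V(x) = p1*x1^2 + p2*x2^2 for x' = A x, the verifier searches for a state where V fails to decrease, and each counterexample eliminates every candidate it refutes.

```python
import math
import random

# Hypothetical stable linear system x' = A x (illustrative, not from the paper).
A = [[-1.0, 1.0], [-1.0, -1.0]]

def vdot(p, x):
    # d/dt V = 2*p1*x1*f1(x) + 2*p2*x2*f2(x) along x' = A x
    f1 = A[0][0] * x[0] + A[0][1] * x[1]
    f2 = A[1][0] * x[0] + A[1][1] * x[1]
    return 2 * p[0] * x[0] * f1 + 2 * p[1] * x[1] * f2

# Finite pool of candidate coefficient pairs (p1, p2); some are refutable.
candidates = [(0.25, 4.0), (4.0, 0.25), (0.25, 1.0), (1.0, 1.0), (2.0, 2.0)]

def verifier(p, n=200):
    # Sampling stand-in for a real verifier: look for a point on the unit
    # circle where the candidate fails to decrease.
    rng = random.Random(1)
    for _ in range(n):
        th = rng.uniform(0.0, 2 * math.pi)
        x = (math.cos(th), math.sin(th))
        if vdot(p, x) >= 0.0:
            return x          # counterexample refuting candidate p
    return None

while candidates:
    p = candidates[0]
    cex = verifier(p)
    if cex is None:
        break                 # current candidate passed the (sampled) check
    # Eliminate every candidate refuted by this counterexample.
    candidates = [q for q in candidates if vdot(q, cex) < 0.0]

assert candidates and verifier(candidates[0]) is None
```

Each iteration shrinks the candidate set, which mirrors the convergence argument in the abstract; the demonstrator (the MPC in the paper) would additionally supply control inputs at the counterexample states, a role this toy autonomous example does not need.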
