1,911 research outputs found

    Contracting Nonlinear Observers: Convex Optimization and Learning from Data

    A new approach to the design of nonlinear observers (state estimators) is proposed. The main idea is to (i) construct a convex set of dynamical systems which are contracting observers for a particular system, and (ii) optimize over this set for one which minimizes a bound on the state-estimation error on a simulated noisy data set. We construct convex sets of continuous-time and discrete-time observers, as well as contracting sampled-data observers for continuous-time systems. Convex bounds for learning are constructed using Lagrangian relaxation. The utility of the proposed methods is verified using numerical simulation. Comment: conference submission.
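
    As a point of reference (a standard condition from contraction theory, not the paper's specific convex construction), a full-order observer of the form \dot{\hat{x}} = f(\hat{x}) + L(\hat{x})\,(y - h(\hat{x})) is contracting with rate \lambda with respect to a uniformly bounded metric M(\hat{x}) \succ 0 if

        \dot{M} + A^{\top} M + M A \preceq -2\lambda M,
        \qquad
        A(\hat{x}, y) = \frac{\partial}{\partial \hat{x}}\Bigl[ f(\hat{x}) + L(\hat{x})\,\bigl(y - h(\hat{x})\bigr) \Bigr].

    Since the true state trajectory is itself a solution of the observer dynamics when y = h(x), contraction forces the estimate to converge to it exponentially; the paper's contribution is to make the search over such observers convex and to tune the choice against simulated noisy data.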

    Evaluation of stochastic effects on biomolecular networks using the generalised Nyquist stability criterion

    Stochastic differential equations are now commonly used to model biomolecular networks in systems biology, and much recent research has been devoted to the development of methods to analyse their stability properties. Stability analysis of such systems may be performed using the Laplace transform, which requires the matrix exponential to be computed symbolically as a function of time. However, calculating the symbolic matrix exponential is not feasible for problems of even moderate size, as the required computation time increases exponentially with the matrix order. To address this issue, we present a novel method for approximating the Laplace transform which does not require the matrix exponential to be calculated explicitly. The calculation time associated with the proposed method does not increase exponentially with the size of the system, and the approximation error is shown to be of the same order as that of existing methods. Using this approximation method, we show how a straightforward application of the generalised Nyquist stability criterion provides necessary and sufficient conditions for the stability of stochastic biomolecular networks. The usefulness and computational efficiency of the proposed method are illustrated through its application to the problem of analysing a model for limit-cycle oscillations in cAMP during aggregation of Dictyostelium cells.
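
    To make the frequency-domain step concrete, the sketch below (illustrative only, not the paper's approximation scheme) uses the identity that the Laplace transform of e^{At} is the resolvent (sI - A)^{-1}, so the open-loop transfer matrix can be evaluated numerically without any symbolic matrix exponential, and the characteristic loci inspected by the generalised Nyquist criterion can then be traced. The matrices A, B, C, D and the frequency grid are placeholders.

        import numpy as np

        def eigenloci(A, B, C, D, omegas):
            """Eigenvalues of L(jw) = C (jwI - A)^{-1} B + D at each frequency w.

            L(jw) must be square (equal numbers of inputs and outputs); its
            eigenvalue loci are the 'characteristic loci' of the loop gain.
            """
            n = A.shape[0]
            loci = []
            for w in omegas:
                L = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
                loci.append(np.linalg.eigvals(L))
            return np.array(loci)

        # Toy 2x2 example with placeholder data, for illustration only.
        A = np.array([[-1.0, 2.0], [0.0, -3.0]])
        B = np.eye(2)
        C = np.eye(2)
        D = np.zeros((2, 2))
        omegas = np.logspace(-2, 2, 400)
        loci = eigenloci(A, B, C, D, omegas)

        # For an open-loop-stable loop gain, the generalised Nyquist criterion
        # requires that these loci, traced over the full Nyquist contour, do
        # not encircle the critical point -1.

    Each evaluation costs one linear solve rather than a symbolic computation whose cost grows exponentially with the matrix order.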

    Robust Controller Design for Stochastic Nonlinear Systems via Convex Optimization

    This paper presents ConVex optimization-based Stochastic steady-state Tracking Error Minimization (CV-STEM), a new state-feedback control framework for a class of Itô stochastic nonlinear systems and Lagrangian systems. Its strength lies in computing the control input via an optimal contraction metric, which greedily minimizes an upper bound on the steady-state mean squared tracking error of the system trajectories. Although the problem of minimizing the bound is nonlinear, an equivalent convex formulation is proposed utilizing state-dependent coefficient parameterizations of the nonlinear system equation. It is shown using stochastic incremental contraction analysis that CV-STEM guarantees exponential boundedness of the error for all time, with L₂-robustness properties. For the sake of its sampling-based implementation, we present discrete-time stochastic contraction analysis with respect to a state- and time-dependent metric, along with its explicit connection to the continuous-time case. We validate the superiority of CV-STEM over PID, H∞, and a given nonlinear control law on spacecraft attitude control and synchronization problems.
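
    For a flavour of how such metric conditions become convex (a minimal sketch under standard assumptions, not the actual CV-STEM program), the snippet below poses a contraction-type LMI in the variable W = M^{-1} at a single sampled state, using placeholder state-dependent coefficients A(x), B(x) and the cvxpy modelling library; the published method additionally accounts for the stochastic terms and optimizes a steady-state error bound.

        import cvxpy as cp
        import numpy as np

        # Placeholder state-dependent coefficients at one sampled state x,
        # i.e. f(x) = A(x) x and input matrix B(x); values are illustrative.
        n, alpha = 2, 1.0
        A = np.array([[0.0, 1.0],
                      [-1.0, -0.5]])
        B = np.array([[0.0],
                      [1.0]])

        # With the change of variables W = M^{-1} and the feedback gain
        # K = B^T W^{-1}, the closed-loop contraction condition
        #   (A - B K) W + W (A - B K)^T <= -2 alpha W
        # becomes linear in W:  A W + W A^T - 2 B B^T <= -2 alpha W.
        W = cp.Variable((n, n), symmetric=True)
        lmi = A @ W + W @ A.T - 2 * B @ B.T + 2 * alpha * W
        prob = cp.Problem(cp.Minimize(0),
                          [W >> 1e-3 * np.eye(n), lmi << 0])
        prob.solve()
        print("W =", W.value)  # a feasible W certifies a contraction metric M = W^{-1}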

    Contraction Analysis of Discrete-Time Stochastic Systems

    Stability theory for nonlinear stochastic models: bridging machine learning and automation technology (Kyoto University press release, 2023-07-05). In this paper, we develop a novel contraction framework for stability analysis of discrete-time nonlinear systems with parameters following stochastic processes. For general stochastic processes, we first provide a sufficient condition for uniform incremental exponential stability (UIES) in the first moment with respect to a Riemannian metric. Then, focusing on the Euclidean distance, we present a necessary and sufficient condition for UIES in the second moment. By virtue of studying general stochastic processes, we can readily derive UIES conditions for special classes of processes, e.g., independent and identically distributed (i.i.d.) processes and Markov processes, which we demonstrate through selected applications of our results.
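
    For context (a textbook-style statement, not necessarily the exact conditions of the paper), second-moment UIES with respect to the Euclidean distance amounts to an estimate of the form

        \mathbb{E}\bigl[\|x_k - x_k'\|^2\bigr] \le C\, c^{2k}\, \|x_0 - x_0'\|^2, \qquad 0 < c < 1,

    for any two solutions driven by the same parameter sequence, and for i.i.d. parameters \theta_k it follows (with C = 1) from the one-step mean-square contraction condition

        \mathbb{E}_{\theta}\bigl[\|f(x,\theta) - f(x',\theta)\|^2\bigr] \le c^2\, \|x - x'\|^2 \quad \text{for all } x, x',

    by iterated conditioning on the state at each step.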

    Backstepping controller synthesis and characterizations of incremental stability

    Incremental stability is a property of dynamical and control systems requiring the uniform asymptotic stability of every trajectory, rather than that of an equilibrium point or a particular time-varying trajectory. As with stability, Lyapunov functions and contraction metrics play important roles in the study of incremental stability. In this paper, we provide characterizations and descriptions of incremental stability in terms of the existence of coordinate-invariant notions of incremental Lyapunov functions and contraction metrics, respectively. Most design techniques providing controllers that render control systems incrementally stable have two main drawbacks: they can only be applied to control systems in either parametric-strict-feedback or strict-feedback form, and they require these control systems to be smooth. In this paper, we propose a design technique that is applicable to larger classes of (not necessarily smooth) control systems. Moreover, we propose a recursive way of constructing contraction metrics (for smooth control systems) and incremental Lyapunov functions, which have been identified as a key tool enabling the construction of finite abstractions of nonlinear control systems, the approximation of stochastic hybrid systems, source-code model checking for nonlinear dynamical systems, and so on. The effectiveness of the proposed results is illustrated by synthesizing a controller rendering a non-smooth control system incrementally stable, as well as constructing its finite abstraction using the computed incremental Lyapunov function. Comment: 23 pages, 2 figures.
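
    For readers new to the notion (a standard definition paraphrased from the literature, not quoted from the paper), an incremental Lyapunov function V(x, x') for \dot{x} = f(x) is one satisfying

        \underline{\alpha}(\|x - x'\|) \le V(x, x') \le \overline{\alpha}(\|x - x'\|),
        \qquad
        \frac{\partial V}{\partial x}\, f(x) + \frac{\partial V}{\partial x'}\, f(x') \le -\kappa\, V(x, x'),

    for class-\mathcal{K}_\infty functions \underline{\alpha}, \overline{\alpha} and some \kappa > 0, so that V decays exponentially along any pair of trajectories and the distance between them shrinks to zero; the paper's recursive construction builds such functions alongside the stabilizing controller.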

    An LMI Framework for Contraction-based Nonlinear Control Design by Derivatives of Gaussian Process Regression

    Contraction theory formulates the analysis of nonlinear systems in terms of Jacobian matrices. Although this offers the potential to develop a linear matrix inequality (LMI) framework for nonlinear control design, the conditions are imposed not on controllers but on their partial derivatives, which makes control design challenging. In this paper, we illustrate that this so-called integrability problem can be solved by a non-standard use of Gaussian process regression (GPR) for parameterizing controllers, and we then establish an LMI framework for contraction-based control design of nonlinear discrete-time systems as an easy-to-implement tool. We subsequently consider the case where the drift vector fields are unknown and employ GPR in its standard role of function fitting. GPR describes learning errors in terms of probability, and thus we further discuss how to incorporate stochastic learning errors into the proposed LMI framework.
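
    To illustrate the "derivatives of GPR" ingredient (a generic sketch with a placeholder RBF kernel and toy data, not the paper's controller parameterization), the posterior mean of a GP regressor is smooth and its gradient is available in closed form, which is the kind of Jacobian-level object that contraction-type LMI conditions can be imposed on.

        import numpy as np

        # GP posterior mean with RBF kernel k(x, x') = exp(-|x - x'|^2 / (2 l^2)):
        #   m(x) = k(x, X) @ alpha,   alpha = (K + sigma^2 I)^{-1} y,
        # and its gradient  dm/dx = sum_i alpha_i * (-(x - x_i) / l^2) * k(x, x_i).
        l, sigma = 0.5, 0.1
        X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)   # training inputs (toy data)
        y = np.sin(3.0 * X).ravel()                     # training targets (toy data)

        def rbf(A, B):
            d = A[:, None, :] - B[None, :, :]
            return np.exp(-np.sum(d ** 2, axis=-1) / (2.0 * l ** 2))

        alpha = np.linalg.solve(rbf(X, X) + sigma ** 2 * np.eye(len(X)), y)

        def mean(x):
            """Posterior mean m(x) at a single 1-D input x."""
            return (rbf(x.reshape(1, -1), X) @ alpha).item()

        def mean_grad(x):
            """Closed-form gradient of the posterior mean at x."""
            kx = rbf(x.reshape(1, -1), X).ravel()       # k(x, x_i) for each i
            return ((-(x - X) / l ** 2) * (alpha * kx)[:, None]).sum(axis=0)

        x0 = np.array([0.3])
        print(mean(x0), mean_grad(x0))

    A finite-difference check of mean_grad against mean is a quick way to validate such closed-form derivatives before imposing LMI conditions on them.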