
    A Comparison of LPV Gain Scheduling and Control Contraction Metrics for Nonlinear Control

    Gain-scheduled control based on linear parameter-varying (LPV) models derived from local linearizations is a widespread nonlinear technique for tracking time-varying setpoints. Recently, a nonlinear control scheme based on Control Contraction Metrics (CCMs) has been developed to track arbitrary admissible trajectories. This paper presents a comparison study of these two approaches. We show that the CCM-based approach is an extended gain-scheduled control scheme which achieves global reference-independent stability and performance through an exact control realization that integrates a series of local LPV controllers along a particular path between the current and reference states. Comment: IFAC LPVS 201
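    A standard form of the CCM construction alluded to above (notation assumed here, not quoted from the paper): for a control-affine system $\dot{x} = f(x) + B(x)u$, a CCM is a state-dependent metric $M(x) \succ 0$ under which the differential dynamics $\dot{\delta}_x = A(x,u)\,\delta_x + B(x)\,\delta_u$ admit a pointwise stabilizing differential (LPV-like) feedback $\delta_u = K(x)\,\delta_x$. The control signal is then recovered by integrating this local feedback along a geodesic $\gamma(s,t)$ of $M$ joining the reference state $x^*(t)$ at $s=0$ to the current state $x(t)$ at $s=1$:

        $$u(t) = u^*(t) + \int_0^1 K(\gamma(s,t))\,\frac{\partial \gamma(s,t)}{\partial s}\,\mathrm{d}s,$$

    which is precisely an integration of local LPV controllers along a path between the reference and current states.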

    Contraction Analysis of Monotone Systems via Separable Functions

    In this paper, we study incremental stability of monotone nonlinear systems through contraction analysis. We provide sufficient conditions for incremental asymptotic stability in terms of the Lie derivatives of differential one-forms or Lie brackets of vector fields. These conditions can be viewed as sum- or max-separable conditions, respectively. For incremental exponential stability, we show that the existence of such separable functions is both necessary and sufficient under standard assumptions for the converse Lyapunov theorem of exponential stability. As a by-product, we also provide necessary and sufficient conditions for exponential stability of positive linear time-varying systems. The results are illustrated through examples.
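    As a sketch of the separability notions involved (standard forms under assumed notation, not the paper's exact statements): for a system $\dot{x} = f(t,x)$ whose Jacobian $J(t,x) = \partial f/\partial x$ is Metzler (the differential form of monotonicity), a sum-separable certificate is a positive row vector field $v$ defining the one-form $V(x,\delta_x) = \sum_i v_i\,|\delta_{x,i}|$, and incremental exponential stability with rate $\lambda$ follows from the elementwise inequality

        $$\dot{v} + v\,J(t,x) \le -\lambda\,v, \qquad v > 0,$$

    where $\dot{v}$ denotes the derivative along trajectories. A max-separable certificate instead uses $V(x,\delta_x) = \max_i |\delta_{x,i}|/w_i$ with a positive column vector field $w$ satisfying $J(t,x)\,w - \dot{w} \le -\lambda\,w$. Specializing $J(t,x) = A(t)$ recovers the positive linear time-varying case mentioned above.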

    A Behavioral Approach to Robust Machine Learning

    Machine learning is revolutionizing almost all fields of science and technology and has been proposed as a pathway to solving many previously intractable problems such as autonomous driving and other complex robotics tasks. While the field has demonstrated impressive results on certain problems, many of these results have not translated to applications in physical systems, partly due to the cost of system failure and partly due to the difficulty of ensuring reliable and robust model behavior. Deep neural networks, for instance, have simultaneously demonstrated both incredible performance in game playing and image processing, and remarkable fragility. This combination of high average performance and catastrophically bad worst-case performance presents a serious danger as deep neural networks are currently being used in safety-critical tasks such as assisted driving. In this thesis, we propose a new approach to training models that have built-in robustness guarantees. Our approach to ensuring stability and robustness of the trained models is distinct from prior methods: where prior methods learn a model and then attempt to verify robustness/stability, we directly optimize over sets of models in which the necessary properties are known to hold. Specifically, we apply methods from robust and nonlinear control to the analysis and synthesis of recurrent neural networks, equilibrium neural networks, and recurrent equilibrium neural networks. The techniques developed allow us to enforce properties such as incremental stability, incremental passivity, and incremental l2-gain bounds / Lipschitz bounds. A central consideration in the development of our model sets is the difficulty of fitting models. All models can be placed in the image of a convex set, or even R^N, allowing useful properties to be easily imposed during the training procedure via simple interior point methods, penalty methods, or unconstrained optimization. In the final chapter, we study the problem of learning networks of interacting models with guarantees that the resulting networked system is stable and/or monotone, i.e., the order relations between states are preserved. While our approach to learning in this chapter is similar to that of the previous chapters, the model set that we propose has a separable structure that allows for the scalable and distributed identification of large-scale systems via the alternating direction method of multipliers (ADMM).
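    To make the "image of a convex set, or even R^N" idea concrete, here is a minimal PyTorch sketch under stated assumptions (the class name, the bound gamma, and the single-layer setting are illustrative; this is not the thesis's REN parameterization): free parameters are mapped onto a weight matrix whose spectral-norm bound, and hence Lipschitz bound, holds for every parameter value, so plain unconstrained gradient descent can never leave the feasible set.

        # Sketch: an unconstrained parameterization whose image is the set of
        # linear maps with Lipschitz constant (spectral norm) at most gamma.
        # Any value of the free parameter V yields a feasible weight matrix,
        # so no projection or post-hoc verification is needed during training.
        import torch

        class LipschitzLinear(torch.nn.Module):  # hypothetical name, for illustration
            def __init__(self, n_in, n_out, gamma=1.0):
                super().__init__()
                # Free parameters living in R^{n_out x n_in}, i.e. all of R^N.
                self.V = torch.nn.Parameter(torch.randn(n_out, n_in) / n_in**0.5)
                self.gamma = gamma

            def weight(self):
                # Rescale so that ||W||_2 <= gamma holds by construction:
                # if ||V||_2 <= gamma then W = V, otherwise W = gamma * V / ||V||_2.
                sigma = torch.linalg.matrix_norm(self.V, ord=2)
                return self.gamma * self.V / torch.clamp(sigma, min=self.gamma)

            def forward(self, x):
                return x @ self.weight().T

        layer = LipschitzLinear(4, 3, gamma=2.0)
        assert torch.linalg.matrix_norm(layer.weight(), ord=2) <= 2.0 + 1e-5

    The same pattern, with a more elaborate map from free parameters to model weights, is what allows properties such as incremental stability, passivity, and gain bounds to be imposed on recurrent and equilibrium networks during training rather than verified afterwards.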