Learning Stable Koopman Models for Identification and Control of Dynamical Systems
Learning models of dynamical systems from data is a widely-studied problem in control theory and machine learning. One recent approach for modelling nonlinear systems considers the class of Koopman models, which embeds the nonlinear dynamics in a higher-dimensional linear subspace. Learning a Koopman embedding would allow for the analysis and control of nonlinear systems using tools from linear systems theory. Many recent methods have been proposed for data-driven learning of such Koopman embeddings, but most of these methods do not consider the stability of the Koopman model.
Stability is an important and desirable property for models of dynamical systems. Unstable models tend to be non-robust to input perturbations and can produce unbounded outputs, which are both undesirable when the model is used for prediction and control. In addition, recent work has shown that stability guarantees may act as a regularizer for model fitting. As such, a natural direction would be to construct Koopman models with inherent stability guarantees.
Two new classes of Koopman models are proposed that bridge the gap between Koopman-based methods and learning stable nonlinear models. The first model class is guaranteed to be stable, while the second is guaranteed to be stabilizable with an explicit stabilizing controller that renders the model stable in closed loop. Furthermore, these models are unconstrained in their parameter sets, thereby enabling efficient optimization via gradient-based methods. Theoretical connections between the stability of Koopman models and forms of nonlinear stability such as contraction are established. To demonstrate the effect of the stability guarantees, the stable Koopman model is applied to a system identification problem, while the stabilizable model is applied to an imitation learning problem. Experimental results show empirically that the proposed models achieve better performance than prior methods without stability guarantees.
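The lifting idea behind Koopman models can be sketched with a minimal EDMD-style least-squares fit. Everything below is an illustrative assumption (the toy dynamics `f`, the polynomial dictionary `lift`, and the plain least-squares fit), not the model class proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy nonlinear dynamics (an illustrative assumption, not from the paper)
    return 0.9 * x - 0.1 * x**3

def lift(x):
    # dictionary of observables: a simple polynomial basis
    return np.stack([x, x**2, x**3], axis=-1)

# collect snapshot pairs (x_k, x_{k+1})
X = rng.uniform(-1.0, 1.0, size=500)
Y = f(X)

# EDMD-style fit: solve lift(Y) ≈ lift(X) @ K in least squares,
# so K is a finite-dimensional linear model of the lifted dynamics
K, *_ = np.linalg.lstsq(lift(X), lift(Y), rcond=None)

# spectral radius of K: < 1 would indicate a stable linear model;
# plain least squares gives no such guarantee, which is the gap
# the stable parameterizations in the paper are meant to close
rho = max(abs(np.linalg.eigvals(K)))
```

Note that for this dictionary the first observable (the state itself) is reproduced exactly, since `f` lies in the span of the basis, but nothing constrains `rho`; the paper's contribution is a parameterization for which stability holds by construction.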
Stability Guarantees for Continuous RL Control
Lack of stability guarantees strongly limits the use of reinforcement learning (RL) in safety-critical robotic applications. Here we propose a control system architecture for continuous RL control and derive corresponding stability theorems via contraction analysis, yielding constraints on the network weights that ensure stability. The control architecture can be implemented in general RL algorithms to improve their stability, robustness, and sample efficiency. We demonstrate the importance and benefits of such guarantees for RL on two standard examples: PPO learning of a 2D problem and HIRO learning of maze tasks.
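One simple way to turn weight constraints into a Lipschitz (contraction-style) guarantee is spectral normalization of each layer. This is a generic sufficient condition in the spirit of the abstract, not the specific constraints derived in the paper; the network shapes and `spectral_normalize` helper are assumptions for illustration:

```python
import numpy as np

def spectral_normalize(W, limit=1.0):
    # rescale W so its largest singular value is at most `limit`;
    # with 1-Lipschitz activations (e.g. tanh), a composition of such
    # layers is itself 1-Lipschitz
    s = np.linalg.norm(W, 2)  # spectral norm = largest singular value
    return W if s <= limit else W * (limit / s)

rng = np.random.default_rng(1)
W1 = spectral_normalize(rng.normal(size=(16, 4)))
W2 = spectral_normalize(rng.normal(size=(2, 16)))

def policy(x):
    # toy two-layer policy network with constrained weights
    return W2 @ np.tanh(W1 @ x)

# the Lipschitz bound implies output differences never exceed
# input differences, a property an RL update could preserve by
# re-normalizing weights after each gradient step
x, y = rng.normal(size=4), rng.normal(size=4)
gap = np.linalg.norm(policy(x) - policy(y))
bound = np.linalg.norm(x - y)
```

In a training loop, re-applying `spectral_normalize` after every optimizer step keeps the policy inside the constrained class throughout learning.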
Analysis and design of model predictive control frameworks for dynamic operation -- An overview
This article provides an overview of model predictive control (MPC) frameworks for dynamic operation of nonlinear constrained systems. Dynamic operation is often an integral part of the control objective, ranging from tracking of reference signals to the general economic operation of a plant under time-varying operating conditions that change online. We focus on the particular challenges that arise when dealing with these more general control goals and present methods that have emerged in the literature to address them. The goal of this article is to present an overview of state-of-the-art techniques, providing a diverse toolkit for applying and further developing MPC formulations that can handle the challenges intrinsic to dynamic operation. We also critically assess the applicability of the different research directions, discussing limitations and opportunities for further research.
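As a concrete illustration of the receding-horizon tracking formulations such surveys cover, here is a minimal unconstrained linear tracking MPC for a double integrator, solved in batch form by least squares. The model, horizon, and weights are illustrative assumptions, not a method from the article:

```python
import numpy as np

# double-integrator model with sampling time 0.1 (illustrative assumption)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
H = 10  # prediction horizon

def mpc_step(x, x_ref, q=1.0, r=0.01):
    # batch prediction: x_k = A^k x0 + sum_{j<k} A^{k-1-j} B u_j,
    # stacked as  x_pred = S u + T,  then the unconstrained tracking QP
    #   min_u  q * ||S u + T - ref||^2 + r * ||u||^2
    # is solved by ordinary least squares
    powers = [np.eye(2)]
    for _ in range(H):
        powers.append(A @ powers[-1])
    rows_S, rows_T = [], []
    for k in range(1, H + 1):
        rows_T.append(powers[k])
        rows_S.append(np.hstack([powers[k - 1 - j] @ B if j < k
                                 else np.zeros((2, 1)) for j in range(H)]))
    S = np.vstack(rows_S)        # (2H, H)
    T = np.vstack(rows_T) @ x    # (2H,)
    ref = np.tile(x_ref, H)
    M = np.vstack([np.sqrt(q) * S, np.sqrt(r) * np.eye(H)])
    b = np.concatenate([np.sqrt(q) * (ref - T), np.zeros(H)])
    u = np.linalg.lstsq(M, b, rcond=None)[0]
    return u[0]  # apply only the first input (receding horizon)

# regulate the state to the origin in closed loop
x = np.array([1.0, 0.0])
for _ in range(200):
    u = mpc_step(x, np.zeros(2))
    x = A @ x + B.flatten() * u
```

The survey's subject matter begins where this sketch ends: adding state and input constraints, terminal ingredients, and time-varying or economic objectives while retaining closed-loop guarantees.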