
    Multi-Parametric Extremum Seeking-based Auto-Tuning for Robust Input-Output Linearization Control

    In this paper, we study the problem of iterative feedback gain tuning for a class of nonlinear systems. We consider Input-Output linearizable nonlinear systems with additive uncertainties. We first design a nominal Input-Output linearization-based controller that ensures global uniform boundedness of the output tracking error dynamics. We then complement this robust controller with a model-free multi-parametric extremum seeking (MES) algorithm that iteratively auto-tunes the feedback gains. We analyze the stability of the overall controller, i.e. the robust nonlinear controller together with the model-free learning algorithm. Numerical tests on a mechatronics example demonstrate the performance of the method.
    Comment: To appear at the IEEE CDC 201
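
    As a rough illustration of the gain-tuning loop described above, the following sketch applies a discrete-time multi-parametric extremum seeking update to two feedback gains. The closed-loop cost function, dither amplitudes, frequencies, and step size are illustrative assumptions, not values from the paper; in practice the cost would come from running the input-output linearization controller over one iteration of the task.

        import numpy as np

        def closed_loop_cost(gains):
            # Stand-in for the tracking-error cost of one closed-loop run with
            # the given feedback gains; here a toy quadratic minimized at [4, 2].
            k1, k2 = gains
            return (k1 - 4.0) ** 2 + (k2 - 2.0) ** 2

        def mes_tune(gains0, iters=2000, amp=0.2, step=0.05):
            # Discrete-time multi-parametric extremum seeking: each gain gets its
            # own sinusoidal dither, and demodulating the measured cost with that
            # dither gives a (noisy) estimate of the cost gradient along that gain.
            gains = np.array(gains0, dtype=float)
            omegas = np.array([1.0, 1.7])                # distinct dither frequencies
            for k in range(iters):
                dither = amp * np.sin(omegas * k)
                cost = closed_loop_cost(gains + dither)  # one perturbed experiment
                # Demodulation-based gradient estimate (a washout/high-pass filter
                # on the cost is often added in practice; omitted here for brevity).
                grad_est = cost * np.sin(omegas * k)
                gains -= step * grad_est
            return gains

        print(mes_tune([1.0, 1.0]))   # should drift toward the minimizer [4, 2]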

    Learning an Approximate Model Predictive Controller with Guarantees

    A supervised learning framework is proposed to approximate a model predictive controller (MPC) with reduced computational complexity and guarantees on stability and constraint satisfaction. The framework can be used for a wide class of nonlinear systems. Any standard supervised learning technique (e.g. neural networks) can be employed to approximate the MPC from samples. In order to obtain closed-loop guarantees for the learned MPC, a robust MPC design is combined with statistical learning bounds. The MPC design ensures robustness to inaccurate inputs within given bounds, and Hoeffding's inequality is used to validate that the learned MPC satisfies these bounds with high confidence. The result is a closed-loop statistical guarantee on stability and constraint satisfaction for the learned MPC. The proposed learning-based MPC framework is illustrated on a nonlinear benchmark problem, for which we learn a neural network controller with guarantees.
    Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letters
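
    A hedged sketch of the statistical validation step: sample states, compare the learned controller with the exact (robust) MPC, and use Hoeffding's inequality to certify that the learned inputs stay within the admissible error bound with high confidence. Both controllers below are toy stand-ins, and the bound eta, sample count, and accuracy epsilon are illustrative choices, not the paper's values.

        import numpy as np

        def robust_mpc(x):
            # Placeholder for the exact robust MPC control law.
            return -np.clip(1.5 * x, -1.0, 1.0)

        def learned_mpc(x):
            # Placeholder for the supervised-learning approximation (e.g. a neural net).
            return -np.clip(1.45 * x, -1.0, 1.0)

        def hoeffding_validation(n_samples=2000, eta=0.1, epsilon=0.05, seed=0):
            """Return (empirical violation rate, Hoeffding confidence 1 - delta)."""
            rng = np.random.default_rng(seed)
            xs = rng.uniform(-2.0, 2.0, size=n_samples)         # validation states
            errors = np.abs(learned_mpc(xs) - robust_mpc(xs))   # input mismatch
            violation_rate = np.mean(errors > eta)              # indicator average
            # One-sided Hoeffding bound for [0, 1]-valued samples: with probability
            # at least 1 - delta, the true violation probability does not exceed
            # the empirical rate by more than epsilon, delta = exp(-2 n epsilon^2).
            delta = np.exp(-2 * n_samples * epsilon ** 2)
            return violation_rate, 1.0 - delta

        rate, confidence = hoeffding_validation()
        print(f"violation rate {rate:.3f}, certified with confidence {confidence:.4f}")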

    Learning Stable Koopman Models for Identification and Control of Dynamical Systems

    Learning models of dynamical systems from data is a widely studied problem in control theory and machine learning. One recent approach for modelling nonlinear systems considers the class of Koopman models, which embed the nonlinear dynamics in a higher-dimensional space in which they evolve linearly. Learning a Koopman embedding would allow for the analysis and control of nonlinear systems using tools from linear systems theory. Many recent methods have been proposed for data-driven learning of such Koopman embeddings, but most of these methods do not consider the stability of the Koopman model. Stability is an important and desirable property for models of dynamical systems: unstable models tend to be non-robust to input perturbations and can produce unbounded outputs, both of which are undesirable when the model is used for prediction and control. In addition, recent work has shown that stability guarantees may act as a regularizer for model fitting. A natural direction, then, is to construct Koopman models with inherent stability guarantees. Two new classes of Koopman models are proposed that bridge the gap between Koopman-based methods and learning stable nonlinear models. The first model class is guaranteed to be stable, while the second is guaranteed to be stabilizable, with an explicit stabilizing controller that renders the model stable in closed loop. Furthermore, these models are unconstrained in their parameter sets, enabling efficient optimization via gradient-based methods. Theoretical connections between the stability of Koopman models and forms of nonlinear stability such as contraction are established. To demonstrate the effect of the stability guarantees, the stable Koopman model is applied to a system identification problem, while the stabilizable model is applied to an imitation learning problem. Experimental results show that the proposed models achieve better performance than prior methods without stability guarantees.
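
    The sketch below is not the parameterization proposed in the work above; it only illustrates, on a toy scalar system, why constraining the lifted transition matrix to spectral norm below one yields a Koopman-style model whose multi-step predictions cannot blow up. The observables, data, and the rescaling step are illustrative assumptions.

        import numpy as np

        def lift(x):
            # Hand-crafted observables; the state x itself is the first coordinate,
            # so it can be read back out of the lifted vector directly.
            return np.array([x, x ** 2, np.sin(x)])

        # Trajectory data from a toy nonlinear system x+ = 0.9 x - 0.1 x^3.
        xs = [0.9]
        for _ in range(200):
            xs.append(0.9 * xs[-1] - 0.1 * xs[-1] ** 3)
        Z = np.array([lift(x) for x in xs[:-1]]).T    # lifted states      (3 x 200)
        Zp = np.array([lift(x) for x in xs[1:]]).T    # lifted successors  (3 x 200)

        # Least-squares Koopman fit, then enforce Schur stability of the lifted
        # linear dynamics by keeping the spectral norm of A strictly below one.
        A_ls = Zp @ np.linalg.pinv(Z)
        spec = np.linalg.norm(A_ls, 2)
        A = A_ls if spec < 1.0 else A_ls / (spec + 1e-3)

        # Multi-step prediction with the stable model: iterate z+ = A z, read off x.
        z = lift(xs[0])
        preds = []
        for _ in range(100):
            z = A @ z
            preds.append(z[0])
        print("predictions remain bounded:", np.all(np.isfinite(preds)))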

    Learning over All Stabilizing Nonlinear Controllers for a Partially-Observed Linear System

    This paper proposes a nonlinear policy architecture for control of partially-observed linear dynamical systems that provides built-in closed-loop stability guarantees. The policy is based on a nonlinear version of the Youla parameterization, and augments a known stabilizing linear controller with a nonlinear operator from a recently developed class of dynamic neural network models called the recurrent equilibrium network (REN). We prove that RENs are universal approximators of contracting and Lipschitz nonlinear systems, and subsequently show that the proposed Youla-REN architecture is a universal approximator of stabilizing nonlinear controllers. The REN architecture simplifies learning since unconstrained optimization can be applied, and we consider both a model-based case where exact gradients are available and reinforcement learning using random search with zeroth-order oracles. In simulation examples our method converges faster to better controllers and is more scalable than existing methods, while guaranteeing stability during learning transients.
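
    A minimal sketch of the Youla-style augmentation described above, for a scalar partially-observed linear plant: a stabilizing observer-based linear controller is kept as the base, and a nonlinear operator acting on the output innovation is added to the control signal. The plant, gains, and the small tanh map standing in for the REN are illustrative assumptions; the paper's construction uses a full recurrent equilibrium network.

        import numpy as np

        # Scalar plant x+ = a x + b u, y = c x (open-loop unstable since a > 1).
        a, b, c = 1.2, 1.0, 1.0
        K, L = 0.8, 0.9            # stabilizing state-feedback and observer gains

        # Tiny static nonlinearity standing in for the learnable Youla parameter Q
        # (a REN in the paper); because Q is driven only by the innovation, any
        # stable choice of Q preserves closed-loop stability.
        W = np.array([0.3, -0.2])
        def Q(innovation):
            return float(W @ np.tanh(np.array([innovation, 2.0 * innovation])))

        x, x_hat = 1.0, 0.0
        for t in range(50):
            y = c * x
            innovation = y - c * x_hat              # output prediction error
            u = -K * x_hat + Q(innovation)          # base controller + augmentation
            x_hat = a * x_hat + b * u + L * innovation
            x = a * x + b * u
        print("plant state after 50 steps:", x)     # decays despite the augmentation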

    Recurrent Equilibrium Networks: Flexible Dynamic Models with Guaranteed Stability and Robustness

    This paper introduces recurrent equilibrium networks (RENs), a new class of nonlinear dynamical models for applications in machine learning, system identification and control. The new model class has "built-in" guarantees of stability and robustness: all models in the class are contracting (a strong form of nonlinear stability), and models can satisfy prescribed incremental integral quadratic constraints (IQCs), including Lipschitz bounds and incremental passivity. RENs are otherwise very flexible: they can represent all stable linear systems, all previously-known sets of contracting recurrent neural networks and echo state networks, all deep feedforward neural networks, and all stable Wiener/Hammerstein models. RENs are parameterized directly by a vector in R^N, i.e. stability and robustness are ensured without parameter constraints, which simplifies learning since generic methods for unconstrained optimization can be used. The performance and robustness of the new model set are evaluated on benchmark nonlinear system identification problems, and the paper also presents applications in data-driven nonlinear observer design and control with stability guarantees.
    Comment: Journal submission, extended version of conference paper (v1 of this arXiv preprint)
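
    A much-simplified sketch of the direct-parameterization idea: the recurrent model below is contracting for every value of its unconstrained parameter matrix, because that matrix is rescaled to spectral norm below one and passed through a 1-Lipschitz activation. This is a toy stand-in, not a full REN (no equilibrium layer, no IQC or Lipschitz constraints), and all sizes and values are illustrative.

        import numpy as np

        class ContractingRNN:
            def __init__(self, n_x, n_u, seed=0):
                rng = np.random.default_rng(seed)
                self.M = rng.standard_normal((n_x, n_x))       # unconstrained parameter
                self.B = 0.1 * rng.standard_normal((n_x, n_u))

            def A(self):
                # For any M this has spectral norm < 1, so together with the
                # 1-Lipschitz tanh the state update is a contraction in the 2-norm.
                return self.M / (np.linalg.norm(self.M, 2) + 1.01)

            def step(self, x, u):
                return np.tanh(self.A() @ x + self.B @ u)

        # Contraction in action: two different initial states, driven by the same
        # input sequence, converge to the same trajectory.
        model = ContractingRNN(n_x=4, n_u=1)
        x1, x2 = np.ones(4), -np.ones(4)
        for t in range(100):
            u = np.array([np.sin(0.1 * t)])
            x1, x2 = model.step(x1, u), model.step(x2, u)
        print("state gap after 100 steps:", np.linalg.norm(x1 - x2))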

    Learning Over All Contracting and Lipschitz Closed-Loops for Partially-Observed Nonlinear Systems

    This paper presents a policy parameterization for learning-based control on nonlinear, partially-observed dynamical systems. The parameterization is based on a nonlinear version of the Youla parameterization and the recently proposed Recurrent Equilibrium Network (REN) class of models. We prove that the resulting Youla-REN parameterization automatically satisfies stability (contraction) and user-tunable robustness (Lipschitz) conditions on the closed-loop system. This means it can be used for safe learning-based control with no additional constraints or projections required to enforce stability or robustness. We test the new policy class in simulation on two reinforcement learning tasks: 1) magnetic suspension, and 2) inverting a rotary-arm pendulum. We find that the Youla-REN performs similarly to existing learning-based and optimal control methods while also ensuring stability and exhibiting improved robustness to adversarial disturbances.
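
    Because every parameter value in such a parameterization yields a stabilizing policy, generic unconstrained optimizers can be used, including the zeroth-order random search mentioned in the related entry above. The sketch below runs a two-point random-search update on a toy partially-observed task; the environment, policy form, and hyperparameters are illustrative assumptions, not the paper's benchmarks.

        import numpy as np

        def rollout_cost(theta, rng):
            # Toy partially-observed task: stabilize x+ = 1.1 x + u from a noisy
            # measurement y = x + w with a small static output-feedback policy.
            x, cost = 1.0, 0.0
            for _ in range(30):
                y = x + 0.01 * rng.standard_normal()
                u = float(np.clip(-(theta[0] * y + theta[1] * np.tanh(y)), -2.0, 2.0))
                x = 1.1 * x + u
                cost += x ** 2 + 0.1 * u ** 2
            return cost

        def random_search(theta0, iters=200, sigma=0.1, step=0.02, seed=0):
            # Basic two-point (zeroth-order) random search: probe the cost along a
            # random direction and move against the finite-difference estimate.
            rng = np.random.default_rng(seed)
            theta = np.array(theta0, dtype=float)
            for _ in range(iters):
                d = rng.standard_normal(theta.shape)
                c_plus = rollout_cost(theta + sigma * d, rng)
                c_minus = rollout_cost(theta - sigma * d, rng)
                theta -= step * (c_plus - c_minus) / (2.0 * sigma) * d
            return theta

        print("learned policy parameters:", random_search([0.5, 0.0]))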

    Optimal control of non-stationary differential linear repetitive processes

    Differential repetitive processes are a distinct class of continuous-discrete 2D linear systems of both systems-theoretic and applications interest. The feature that makes them distinct from other classes of such systems is that information propagation in one of the two independent directions only occurs over a finite interval. Application areas include iterative learning control, iterative solution algorithms for classes of dynamic nonlinear optimal control problems based on the maximum principle, and the modelling of numerous industrial processes such as metal rolling and long-wall cutting. The new results in this paper solve a general optimal control problem in the presence of non-stationary dynamics.
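
    To make the structure concrete: in a (here discretized) linear repetitive process, the dynamics evolve along a pass of finite length, and the profile produced on one pass feeds into the next, so information in the pass-to-pass direction only propagates through that finite-length profile. The matrices and the simple ILC-style input update in this sketch are illustrative assumptions and are not the optimal control solution developed in the work above.

        import numpy as np

        P = 50                       # finite pass length
        A, B, B0 = 0.8, 1.0, 0.02    # along-the-pass dynamics and previous-pass coupling
        C, D0 = 1.0, 0.05
        ref = np.ones(P)             # desired pass profile

        y_prev = np.zeros(P)         # profile produced on the previous pass
        u = np.zeros(P)              # pass input, refined from pass to pass

        for k in range(20):          # pass-to-pass direction (finite-memory coupling)
            x, y = 0.0, np.zeros(P)
            for p in range(P):       # along-the-pass direction
                x = A * x + B * u[p] + B0 * y_prev[p]
                y[p] = C * x + D0 * y_prev[p]
            u = u + 0.3 * (ref - y)  # simple ILC-style refinement of the pass input
            y_prev = y

        print("tracking error after 20 passes:", np.linalg.norm(ref - y_prev))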