6,063 research outputs found

    Connections Between Adaptive Control and Optimization in Machine Learning

    Full text link
    This paper demonstrates many immediate connections between adaptive control and optimization methods commonly employed in machine learning. Starting from common output error formulations, similarities in update law modifications are examined. Concepts in stability, performance, and learning that are common to both fields are then discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis are provided. In particular, a specific problem related to higher order learning is solved through insights obtained from these intersections. Comment: 18 pages
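    To make the update-law parallel concrete, the following minimal sketch (an illustrative setup, not the paper's formulation) uses a linear-in-parameters model with regressor phi: a single normalized gradient step on the instantaneous squared output error is simultaneously the classic normalized gradient adaptive law from adaptive control and normalized SGD from machine learning. The names theta_hat, gamma, and phi are assumptions introduced here for illustration.

    ```python
    # Hedged sketch: one update rule read two ways.
    #  - adaptive control: theta_dot = -gamma * phi * e / (1 + phi'phi)
    #  - machine learning: normalized SGD on the loss 0.5 * e^2
    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = np.array([1.5, -0.7, 0.3])   # unknown "plant" parameters
    theta_hat = np.zeros(3)                    # adjustable parameter estimate
    gamma = 0.5                                # adaptation gain / learning rate

    for t in range(2000):
        phi = rng.normal(size=3)               # regressor (features)
        y = theta_true @ phi                   # measured output
        e = theta_hat @ phi - y                # output error
        theta_hat -= gamma * e * phi / (1.0 + phi @ phi)  # normalized gradient step

    print(np.round(theta_hat, 3))              # converges toward theta_true
    ```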

    Direct Learning for Parameter-Varying Feedforward Control: A Neural-Network Approach

    Full text link
    The performance of a feedforward controller is primarily determined by the extent to which it can capture the relevant dynamics of a system. The aim of this paper is to develop an input-output linear parameter-varying (LPV) feedforward parameterization and a corresponding data-driven estimation method in which the dependency of the coefficients on the scheduling signal is learned by a neural network. The use of a neural network enables the parameterization to compensate for a wide class of constant relative degree LPV systems. Efficient optimization of the neural-network-based controller is achieved through a Levenberg-Marquardt approach with analytic gradients and a pseudolinear approach generalizing Sanathanan-Koerner to the LPV case. The performance of the developed feedforward learning method is validated in a simulation study of an LPV system, showing excellent performance. Comment: Final author version, accepted for publication at 62nd IEEE Conference on Decision and Control, Singapore, 2023
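    A minimal sketch of the idea, under assumed structure rather than the paper's exact parameterization: the feedforward signal is formed as u(k) = sum_i theta_i(p(k)) r_i(k), where the coefficient functions theta_i of the scheduling signal p are the outputs of a small neural network and r_i are basis signals built from the reference (here the reference and its finite-difference derivatives). The network weights below are random placeholders; in the paper they would be fitted to data, e.g. with Levenberg-Marquardt and analytic gradients.

    ```python
    # Hedged sketch of a scheduling-dependent feedforward parameterization.
    import numpy as np

    rng = np.random.default_rng(0)

    # tiny one-hidden-layer network: scheduling p (scalar) -> 3 coefficients
    W1, b1 = rng.normal(size=(8, 1)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

    def coefficients(p):
        """Scheduling-dependent feedforward coefficients theta(p)."""
        h = np.tanh(W1 @ np.atleast_1d(p) + b1)
        return W2 @ h + b2

    def feedforward(r, p, dt=0.01):
        """u(k) = theta_0(p) r + theta_1(p) r' + theta_2(p) r'' (finite differences)."""
        r_d = np.gradient(r, dt)
        r_dd = np.gradient(r_d, dt)
        basis = np.stack([r, r_d, r_dd])                      # (3, N) basis signals
        theta = np.array([coefficients(pk) for pk in p]).T    # (3, N) coefficients
        return np.sum(theta * basis, axis=0)

    # example: scheduled feedforward for a sinusoidal reference
    t = np.arange(0, 1, 0.01)
    u = feedforward(np.sin(2 * np.pi * t), p=np.cos(2 * np.pi * t))
    print(u.shape)
    ```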

    Learning Stable Koopman Models for Identification and Control of Dynamical Systems

    Get PDF
    Learning models of dynamical systems from data is a widely studied problem in control theory and machine learning. One recent approach for modelling nonlinear systems considers the class of Koopman models, which embeds the nonlinear dynamics in a higher-dimensional linear subspace. Learning a Koopman embedding would allow for the analysis and control of nonlinear systems using tools from linear systems theory. Many recent methods have been proposed for data-driven learning of such Koopman embeddings, but most of these methods do not consider the stability of the Koopman model. Stability is an important and desirable property for models of dynamical systems. Unstable models tend to be non-robust to input perturbations and can produce unbounded outputs, both of which are undesirable when the model is used for prediction and control. In addition, recent work has shown that stability guarantees may act as a regularizer for model fitting. As such, a natural direction is to construct Koopman models with inherent stability guarantees.

    Two new classes of Koopman models are proposed that bridge the gap between Koopman-based methods and learning stable nonlinear models. The first model class is guaranteed to be stable, while the second is guaranteed to be stabilizable with an explicit stabilizing controller that renders the model stable in closed loop. Furthermore, these models are unconstrained in their parameter sets, thereby enabling efficient optimization via gradient-based methods. Theoretical connections between the stability of Koopman models and forms of nonlinear stability such as contraction are established. To demonstrate the effect of the stability guarantees, the stable Koopman model is applied to a system identification problem, while the stabilizable model is applied to an imitation learning problem. Experimental results show empirically that the proposed models achieve better performance than prior methods without stability guarantees.
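    The following sketch illustrates the general idea of an unconstrained stability-guaranteed parameterization, using a generic construction assumed here for illustration rather than the thesis's own model classes: any unconstrained matrix W and scalar s are mapped to a state matrix A with spectral norm strictly below 1, so the linear Koopman dynamics z_{k+1} = A z_k are Schur stable by construction while W and s can be optimized freely with gradient-based methods.

    ```python
    # Hedged sketch: unconstrained parameters -> guaranteed-stable Koopman state matrix.
    import numpy as np

    def stable_koopman_matrix(W, s):
        """Map unconstrained (W, s) to A with ||A||_2 < 1, hence Schur stable."""
        gain = 1.0 / (1.0 + np.exp(-s))          # sigmoid: gain in (0, 1)
        spec_norm = np.linalg.norm(W, 2)         # largest singular value of W
        return gain * W / max(spec_norm, 1.0)    # scale so ||A||_2 <= gain < 1

    rng = np.random.default_rng(0)
    A = stable_koopman_matrix(rng.normal(size=(5, 5)), s=2.0)
    print(np.max(np.abs(np.linalg.eigvals(A))))  # spectral radius < 1
    ```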