3 research outputs found

    Practical Comparison of Optimization Algorithms for Learning-Based MPC with Linear Models

    Learning-based control methods are an attractive approach for addressing performance and efficiency challenges in robotics and automation systems. One such technique that has found application in these domains is learning-based model predictive control (LBMPC). A key novelty of LBMPC is that its robustness and stability properties are independent of the type of online learning used, which allows advanced statistical or machine learning methods to provide the adaptation for the controller. This paper provides practical comparisons of different optimization algorithms for implementing the LBMPC method, for the special case where the dynamic model of the system is linear and the online learning provides linear updates to the dynamic model. For comparison purposes, we have implemented a primal-dual infeasible-start interior-point method that exploits the sparsity structure of LBMPC. Our open-source implementation (called LBmpcIPM) is available under a BSD license and is provided freely to enable the rapid implementation of LBMPC on other platforms. This solver is compared to the dense active-set solvers LSSOL and qpOASES using a quadrotor helicopter platform. Two scenarios are considered: the first is a simulation comparing hovering control for the quadrotor, and the second is on-board control experiments of dynamic quadrotor flight. Though the LBmpcIPM method has better asymptotic computational complexity than LSSOL and qpOASES, we find that for certain integrated systems (like our quadrotor testbed) these methods can outperform LBmpcIPM. This suggests that actual benchmarks should be used when choosing which algorithm is used to implement LBMPC on practical systems.
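The core LBMPC ingredient described above — a nominal linear model augmented by a learned linear update — can be sketched as follows. The system matrices, the residual "oracle", and the least-squares fit are illustrative assumptions, not taken from the paper or the LBmpcIPM code:

```python
import numpy as np

# Nominal model (assumed known to the controller) and a "true" system
# with some unmodeled dynamics; both are invented for illustration.
A_nom = np.array([[1.0, 0.1], [0.0, 0.9]])
B_nom = np.array([[0.0], [0.1]])
A_true = A_nom + np.array([[0.0, 0.02], [0.01, 0.0]])  # model mismatch

def collect(T=200, seed=0):
    """Run the true system under random excitation and record the
    residuals the LBMPC oracle must learn: x+ - (A_nom x + B_nom u)."""
    rng = np.random.default_rng(seed)
    X, U, R = [], [], []
    x = np.zeros(2)
    for _ in range(T):
        u = rng.uniform(-1.0, 1.0, size=1)
        x_next = A_true @ x + B_nom @ u
        R.append(x_next - (A_nom @ x + B_nom @ u))
        X.append(x)
        U.append(u)
        x = x_next
    return np.array(X), np.array(U), np.array(R)

def fit_oracle(X, U, R):
    """Linear oracle r ~ [dA dB] [x; u] fitted by least squares."""
    Z = np.hstack([X, U])
    theta, *_ = np.linalg.lstsq(Z, R, rcond=None)
    return theta.T  # rows: residual dims; cols: coefficients on [x; u]
```

In LBMPC proper, the fitted correction enters the MPC prediction model while robustness guarantees rest on the nominal model; only the model-learning step is shown here.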

    A Revised Mehrotra Predictor-Corrector algorithm for Model Predictive Control

    Input-constrained model predictive control (MPC) requires solving an optimization problem at each time instant. The well-known drawback of MPC is the computational cost of this optimization, which restricts the application of MPC to systems with slow dynamics, e.g., process control systems, and to small-scale problems. Implementing fast numerical optimization algorithms has therefore been a point of interest. Interior-point methods have proven to be appropriate algorithms, from a computational-cost point of view, for solving input-constrained MPC. In this paper, a modified version of Mehrotra's predictor-corrector algorithm, a well-known interior-point algorithm, is first extended to quadratic programming problems and then applied to constrained model predictive control problems. Results show that, as expected, the new algorithm is faster than MATLAB's solver.
    Comment: 6 pages, 1 figure
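For readers unfamiliar with the algorithm, a minimal Mehrotra predictor-corrector for a non-negatively constrained QP (minimize ½ xᵀQx + cᵀx subject to x ≥ 0) might look like the sketch below. This is a generic textbook version, not the paper's revised variant:

```python
import numpy as np

def step_len(v, dv):
    """Largest alpha in (0, 1] keeping v + alpha*dv >= 0."""
    neg = dv < 0
    return min(1.0, (-v[neg] / dv[neg]).min()) if neg.any() else 1.0

def mehrotra_qp(Q, c, tol=1e-8, max_iter=50):
    """Mehrotra predictor-corrector for min 0.5 x'Qx + c'x s.t. x >= 0.
    KKT system: Qx + c - z = 0, x >= 0, z >= 0, x_i z_i = 0."""
    n = len(c)
    x, z = np.ones(n), np.ones(n)
    for _ in range(max_iter):
        r_d = Q @ x + c - z            # dual residual
        mu = x @ z / n                 # duality measure
        if np.linalg.norm(r_d) < tol and mu < tol:
            break
        # Eliminate dz: (Q + diag(z/x)) dx = -r_d + r_comp / x
        M = Q + np.diag(z / x)
        # Predictor (affine-scaling) direction: r_comp = -x*z
        dx_aff = np.linalg.solve(M, -r_d - z)
        dz_aff = -z - (z / x) * dx_aff
        a_p, a_d = step_len(x, dx_aff), step_len(z, dz_aff)
        mu_aff = (x + a_p * dx_aff) @ (z + a_d * dz_aff) / n
        sigma = (mu_aff / mu) ** 3     # Mehrotra's centering heuristic
        # Corrector: recentre and compensate for the dx_aff*dz_aff term
        r_c = -x * z + sigma * mu - dx_aff * dz_aff
        dx = np.linalg.solve(M, -r_d + r_c / x)
        dz = (r_c - z * dx) / x
        x = x + 0.995 * step_len(x, dx) * dx   # stay strictly interior
        z = z + 0.995 * step_len(z, dz) * dz
    return x, z
```

The elimination of dz before the solve is the same structural trick that fast MPC interior-point solvers exploit, where the reduced system additionally inherits banded sparsity from the horizon structure.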

    Learning Stable Adaptive Explicit Differentiable Predictive Control for Unknown Linear Systems

    We present differentiable predictive control (DPC), a method for learning constrained adaptive neural control policies and dynamical models of unknown linear systems. DPC presents an approximate data-driven solution approach to the explicit Model Predictive Control (MPC) problem as a scalable alternative to computationally expensive multiparametric programming solvers. DPC is formulated as a constrained deep learning problem whose architecture is inspired by the structure of classical MPC. The optimization of the neural control policy is based on automatic differentiation of the MPC-inspired loss function through a differentiable closed-loop system model. This novel solution approach can optimize adaptive neural control policies for time-varying references while obeying state and input constraints, without requiring a pre-existing MPC controller. We show that DPC can learn to stabilize constrained neural control policies for systems with unstable dynamics. Moreover, we provide sufficient conditions for asymptotic stability of generic closed-loop system dynamics with neural feedback policies. In simulation case studies, we assess the performance of the proposed DPC method in terms of reference tracking, robustness, and computational and memory footprints compared against classical model-based and data-driven control approaches. We demonstrate that DPC scales linearly with problem size, compared to the exponential scalability of classical explicit MPC based on multiparametric programming.
    Comment: 11 pages. Code for reproducing our experiments is available at: https://github.com/pnnl/deps_arXiv2020
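The DPC training loop described above can be illustrated with a deliberately simplified sketch: a linear policy and finite-difference gradients stand in for the paper's neural policy and automatic differentiation, and the system matrices are invented for illustration:

```python
import numpy as np

# Illustrative linear system (not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])

def rollout_loss(K, x0, N=30, u_max=1.0, rho=10.0):
    """MPC-inspired loss: state/input cost plus a soft input-constraint penalty,
    accumulated by rolling the closed-loop model forward."""
    x, loss = x0.copy(), 0.0
    for _ in range(N):
        u = -K @ x                                  # linear policy (neural-net stand-in)
        loss += float(x @ x) + 0.1 * float(u @ u)   # regulation + control effort
        loss += rho * float(np.maximum(np.abs(u) - u_max, 0.0).sum()) ** 2
        x = A @ x + B @ u                           # differentiable closed-loop step
    return loss

def train_policy(K, x0, steps=150, lr=1e-3, eps=1e-6):
    """Gradient descent on the rollout loss; finite differences stand in for
    the automatic differentiation used in DPC proper."""
    best_K, best_loss = K.copy(), rollout_loss(K, x0)
    for _ in range(steps):
        g = np.zeros_like(K)
        for idx in np.ndindex(K.shape):
            Kp = K.copy()
            Kp[idx] += eps
            g[idx] = (rollout_loss(Kp, x0) - rollout_loss(K, x0)) / eps
        K = K - lr * g
        cur = rollout_loss(K, x0)
        if cur < best_loss:
            best_K, best_loss = K.copy(), cur
    return best_K
```

The key structural point is that the constraint handling lives in the loss (here a crude quadratic penalty; the paper develops this far more carefully), so no online QP solve is needed at deployment time.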