4 research outputs found

    Adaptive MPC for Autonomous Lane Keeping

    This paper proposes an Adaptive Robust Model Predictive Control strategy for lateral control in lane keeping problems, where we continuously learn an unknown but constant steering angle offset present in the steering system. Longitudinal velocity is assumed constant. The goal is to minimize the outputs, namely the distance from the lane center line and the steady-state heading angle error, while satisfying the respective safety constraints. We do not assume perfect knowledge of the vehicle's lateral dynamics model; instead, we estimate and adapt in real time the maximum possible bound on the steering angle offset from data, using a robust Set Membership Method based approach. Our approach is well suited even for scenarios with sharp curvature at high speed, where obtaining a precise model bias for constrained control is difficult but learning from data can help. We ensure persistent feasibility using a switching strategy during changes of lane curvature. The proposed methodology is general and can be applied to more complex vehicle dynamics problems.
    Comment: 14th International Symposium on Advanced Vehicle Control (AVEC), Beijing, China, July 2018
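
    In its simplest form, the set-membership update described in this abstract reduces to interval intersection. The sketch below is a toy illustration, not the paper's algorithm: the scalar dynamics x_next = a*x + b*(u + theta) + w, all constants, and all names are assumptions made for this example. Each measurement confines the constant offset theta to an interval, and intersecting these intervals over time tightens the believed bound that a robust MPC would use.

        import numpy as np

        def update_offset_interval(lo, hi, x, u, x_next, a, b, w_max):
            # One data point gives residual = b*theta + w with |w| <= w_max,
            # so theta must lie in [(residual - w_max)/b, (residual + w_max)/b]
            # (for b > 0). Intersect that with the current interval [lo, hi].
            residual = x_next - a * x - b * u
            return max(lo, (residual - w_max) / b), min(hi, (residual + w_max) / b)

        # Simulate a system with true offset 0.03 rad and noise bound 0.01.
        rng = np.random.default_rng(0)
        a, b, theta, w_max = 0.9, 0.5, 0.03, 0.01
        lo, hi, x = -0.1, 0.1, 0.0                  # conservative initial bounds
        for _ in range(50):
            u = rng.uniform(-1.0, 1.0)
            x_next = a * x + b * (u + theta) + rng.uniform(-w_max, w_max)
            lo, hi = update_offset_interval(lo, hi, x, u, x_next, a, b, w_max)
            x = x_next
        print(f"theta believed to lie in [{lo:.4f}, {hi:.4f}]")   # shrinks around 0.03

    The actual method works with the vehicle's lateral dynamics and set-valued bounds rather than a scalar interval, but the adapt-by-intersection mechanism is the same.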

    Adaptive MPC for Iterative Tasks

    This paper proposes an Adaptive Learning Model Predictive Control strategy for uncertain constrained linear systems performing iterative tasks. The additive uncertainty is modeled as the sum of a bounded process noise and an unknown constant offset. As new data become available, the proposed algorithm adapts the believed domain of the unknown offset after each iteration. An MPC strategy robust to all feasible offsets is employed in order to guarantee recursive feasibility. We show that adapting the feasible offset domain reduces the conservatism of the proposed strategy compared to classical robust MPC strategies, and the controller performance improves as a result. Performance is measured in terms of following trajectories with lower associated costs at each iteration. Numerical simulations highlight the main advantages of the proposed approach.
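
    A minimal sketch of the per-iteration adaptation follows. Everything here is a hypothetical one-dimensional stand-in for the paper's setup: residuals r_k = d + w_k, with unknown constant offset d and |w_k| <= w_max, are collected over one task iteration and used to intersect the believed offset domain.

        import numpy as np

        def adapt_offset_domain(lo, hi, residuals, w_max):
            # Each residual r = d + w with |w| <= w_max implies
            # d in [r - w_max, r + w_max]; intersect all such intervals
            # with the current domain [lo, hi].
            return (max(lo, float(np.max(residuals)) - w_max),
                    min(hi, float(np.min(residuals)) + w_max))

        rng = np.random.default_rng(1)
        d_true, w_max = 0.2, 0.5
        lo, hi = -1.0, 1.0                          # initial offset domain
        for it in range(5):                         # five task iterations
            residuals = d_true + rng.uniform(-w_max, w_max, size=20)
            lo, hi = adapt_offset_domain(lo, hi, residuals, w_max)
            print(f"iteration {it}: offset domain [{lo:.3f}, {hi:.3f}]")

    A robust MPC that tightens its constraints against the worst case over [lo, hi] becomes less conservative as the domain shrinks, which is the mechanism behind the claimed performance improvement.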

    Safe and Near-Optimal Policy Learning for Model Predictive Control using Primal-Dual Neural Networks

    In this paper, we propose a novel framework for approximating the explicit MPC law for linear parameter-varying systems using supervised learning. In contrast to most existing approaches, we learn not only the control policy but also a "certificate policy" that allows us to estimate the sub-optimality of the learned control policy online, during execution. We learn both policies from data using supervised learning techniques, and we provide a randomized method that guarantees the quality of each learned policy, measured in terms of feasibility and optimality. This in turn allows us to bound the probability of the learned control policy being infeasible or suboptimal, where the check is performed by the certificate policy. Since our algorithm does not require solving an optimization problem at run time, it can be deployed even on resource-constrained systems. We illustrate the efficacy of the proposed framework on a vehicle dynamics control problem, demonstrating a speedup of up to two orders of magnitude compared to online optimization with minimal performance degradation.
    Comment: IEEE American Control Conference (ACC) 2019, July 9-12, Philadelphia, PA, USA
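
    The certificate idea rests on weak duality: for a convex problem, any dual-feasible point lower-bounds the optimal cost, so (primal cost) - (dual value) upper-bounds the learned policy's suboptimality. The sketch below demonstrates this check on a made-up one-constraint QP; the two "policies" are placeholders standing in for the paper's learned networks, and none of the names come from the paper.

        # Toy parametric QP standing in for the MPC problem at parameter x:
        #     minimize 0.5*u**2   subject to   u >= x
        # Exact solutions: u*(x) = max(x, 0) and dual lam*(x) = max(x, 0).
        primal_cost = lambda u: 0.5 * u ** 2
        dual_value = lambda x, lam: lam * x - 0.5 * lam ** 2   # d(lam) <= optimal cost

        # Stand-ins for the learned networks: imperfect copies of the exact maps.
        primal_policy = lambda x: max(x, 0.0) + 0.02           # slightly suboptimal, feasible
        certificate_policy = lambda x: 0.95 * max(x, 0.0)      # slightly weak certificate

        def certified_input(x, tol=0.05):
            u, lam = primal_policy(x), max(certificate_policy(x), 0.0)
            gap = primal_cost(u) - dual_value(x, lam)   # upper bound on suboptimality
            if u >= x and gap <= tol:                   # feasibility and optimality check
                return u
            return None                                 # caller falls back to a solver

        for x in (-1.0, 0.3, 1.5):
            print(x, certified_input(x))

    Because evaluating two learned maps and one gap formula replaces an online QP solve, the check itself is cheap enough for resource-constrained hardware.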

    Near-Optimal Rapid MPC using Neural Networks: A Primal-Dual Policy Learning Framework

    In this paper, we propose a novel framework for approximating the explicit MPC policy for linear parameter-varying systems using supervised learning. Our learning scheme guarantees feasibility and near-optimality of the approximated MPC policy with high probability. Furthermore, in contrast to most existing approaches that learn only the MPC policy, we also learn the "dual policy", which enables us to check the approximated MPC's optimality online during the control process. If the check deems the control input from the approximated MPC policy safe and near-optimal, it is applied to the plant; otherwise a backup controller is invoked, thus filtering out (severely) suboptimal control inputs. The backup controller is invoked only with a bounded (low) probability, where the exact probability level can be chosen by the user. Since our framework does not require solving any optimization problem during the control process, it enables the deployment of MPC on resource-constrained systems. Specifically, we illustrate the utility of the proposed framework on a vehicle dynamics control problem. Compared to online optimization methods, we demonstrate a speedup of up to 62x on a desktop computer and 10x on an automotive-grade electronic control unit, while maintaining high control performance.
    Comment: First two authors contributed equally. arXiv admin note: text overlap with arXiv:1906.0825
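
    The run-time logic described in this abstract amounts to a small filter around the learned policies. In the sketch below, primal_policy, dual_policy, gap_bound, and backup_mpc are all hypothetical placeholders (in the paper these roles are played by the learned networks and an online MPC solver), and eps is the user-chosen tolerance.

        def filtered_control(x, primal_policy, dual_policy, gap_bound, backup_mpc, eps=1e-2):
            # Evaluate the cheap learned policies first.
            u = primal_policy(x)
            lam = dual_policy(x)
            # Apply the learned input only if its certified suboptimality is
            # within eps; otherwise invoke the backup controller. The paper
            # argues this fallback fires with a bounded, user-chosen probability.
            if gap_bound(x, u, lam) <= eps:
                return u
            return backup_mpc(x)

    With the toy QP from the previous sketch, gap_bound(x, u, lam) would be primal_cost(u) - dual_value(x, lam).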