3,831 research outputs found
Learning a feasible and stabilizing explicit model predictive control law by robust optimization
Fast model predictive control on embedded systems has been successfully applied to plants with microsecond sampling times employing a precomputed state-to-input map. However, the complexity of this so-called explicit MPC can be prohibitive even for low-dimensional systems. In this paper, we introduce a new synthesis method for low-complexity suboptimal MPC controllers based on function approximation from randomly chosen point-wise sample values. In addition to standard machine learning algorithms formulated as convex programs, we provide sufficient conditions on the learning algorithm in the form of tractable convex constraints that guarantee input and state constraint satisfaction, recursive feasibility and stability of the closed-loop system. The resulting control law can be fully parallelized, which renders the approach particularly suitable for highly concurrent embedded platforms such as FPGAs. A numerical example shows the effectiveness of the proposed method.
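The abstract above describes learning a low-complexity control law from randomly sampled point-wise evaluations of the exact MPC. A minimal sketch of that idea, assuming a placeholder `mpc_solve` oracle (a saturated linear feedback standing in for any exact MPC solver; all names here are illustrative, not from the paper):

```python
import numpy as np

def mpc_solve(x):
    # Placeholder "exact" MPC law: a saturated linear state feedback.
    # In the paper's setting this would be an online optimization solve.
    return np.clip(-0.5 * x[0] - 1.2 * x[1], -1.0, 1.0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))     # randomly chosen states in the domain
U = np.array([mpc_solve(x) for x in X])   # point-wise optimal inputs

# Least-squares fit of a linear surrogate u ~ K @ x: a convex program,
# consistent with the abstract's "machine learning algorithms formulated
# as convex programs" (the paper's feasibility/stability constraints are
# additional convex constraints not shown in this sketch).
K, *_ = np.linalg.lstsq(X, U, rcond=None)
```

Evaluating the learned surrogate is a single matrix-vector product per state, which is what makes such approximations attractive for parallel embedded hardware.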
Learning an Approximate Model Predictive Controller with Guarantees
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's Inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.
Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letters.
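The abstract above validates the learned controller with Hoeffding's inequality: from i.i.d. samples indicating whether the learned input stays within the robust MPC's error bound, it lower-bounds the true success probability with high confidence. A minimal sketch of the one-sided bound (function name and numbers are illustrative, not from the paper):

```python
import math

def hoeffding_lower_bound(successes, n, delta):
    """One-sided Hoeffding bound for a Bernoulli mean: with confidence
    at least 1 - delta, the true probability that the learned controller
    respects the error bound is at least p_hat - sqrt(ln(1/delta) / (2n))."""
    p_hat = successes / n
    return p_hat - math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Example: 980 of 1000 validation samples within the robust bound,
# confidence level 99% (delta = 0.01).
lb = hoeffding_lower_bound(980, 1000, 0.01)  # ~ 0.932
```

If this lower bound clears the threshold required by the robust MPC design, the closed-loop stability and constraint-satisfaction guarantee holds with the stated confidence.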
Model predictive control techniques for hybrid systems
This paper describes the main issues encountered when applying model predictive control to hybrid processes. Hybrid model predictive control (HMPC) is a research field that is not yet fully developed, with many open challenges. The paper describes some of the techniques proposed by the research community to overcome the main problems encountered. Issues related to the stability and the solution of the optimization problem are also discussed. The paper ends by describing the results of a benchmark exercise in which several HMPC schemes were applied to a solar air conditioning plant.
Ministerio de Educación y Ciencia DPI2007-66718-C04-01
Ministerio de Educación y Ciencia DPI2008-0581
Fast Non-Parametric Learning to Accelerate Mixed-Integer Programming for Online Hybrid Model Predictive Control
Today's fast linear algebra and numerical optimization tools have pushed the
frontier of model predictive control (MPC) forward, to the efficient control of
highly nonlinear and hybrid systems. The field of hybrid MPC has demonstrated
that the exact optimal control law can be computed, e.g., by mixed-integer
programming (MIP) under piecewise-affine (PWA) system models. Despite the
elegant theory, solving hybrid MPC online is still out of reach for many
applications. We aim to speed up MIP by combining geometric insights from
hybrid MPC, a simple-yet-effective learning algorithm, and MIP warm start
techniques. Following a line of work in approximate explicit MPC, the proposed
learning-control algorithm, LNMS, gains a computational advantage over MIP at
little cost and is straightforward for practitioners to implement.
Reliably-stabilizing piecewise-affine neural network controllers
A common problem affecting neural network (NN) approximations of model
predictive control (MPC) policies is the lack of analytical tools to assess the
stability of the closed-loop system under the action of the NN-based
controller. We present a general procedure to quantify the performance of such
a controller, or to design minimum complexity NNs with rectified linear units
(ReLUs) that preserve the desirable properties of a given MPC scheme. By
quantifying the approximation error between NN-based and MPC-based
state-to-input mappings, we first establish suitable conditions involving two
key quantities, the worst-case error and the Lipschitz constant, guaranteeing
the stability of the closed-loop system. We then develop an offline,
mixed-integer optimization-based method to compute those quantities exactly.
Together, these techniques provide sufficient conditions to certify the
stability and performance of a ReLU-based approximation of an MPC control law.
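The abstract above certifies stability from two quantities, the worst-case approximation error and a Lipschitz constant. A hedged sketch of the resulting check, assuming a condition of the common form "the perturbation the NN induces must not exhaust the nominal Lyapunov decrease margin" (the exact inequality in the paper may differ; all names here are illustrative):

```python
def certifies_stability(worst_case_error, lipschitz_const, decrease_margin):
    """Illustrative certificate: the NN controller inherits closed-loop
    stability if the input perturbation it can induce, bounded by
    lipschitz_const * worst_case_error, stays strictly below the
    Lyapunov decrease margin of the nominal MPC law."""
    return lipschitz_const * worst_case_error < decrease_margin

# A tight approximation passes; a loose one fails.
certifies_stability(0.01, 2.0, 0.05)   # True
certifies_stability(0.10, 2.0, 0.05)   # False
```

In the paper, the two quantities on the left-hand side are computed exactly offline via mixed-integer optimization over the ReLU network, rather than assumed.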