
    Enlarging the domain of attraction of MPC controllers

    This paper presents a method for enlarging the domain of attraction of nonlinear model predictive control (MPC). The usual way of guaranteeing stability of nonlinear MPC is to add a terminal constraint and a terminal cost to the optimization problem such that the terminal region is a positively invariant set for the system and the terminal cost is an associated Lyapunov function. The domain of attraction of the controller depends on the size of the terminal region and the control horizon. Increasing the control horizon enlarges the domain of attraction, but at the expense of a greater computational burden, while increasing the terminal region produces an enlargement at no extra cost. In this paper, the MPC formulation with terminal cost and constraint is modified, replacing the terminal constraint by a contractive terminal constraint. This constraint is given by a sequence of sets, computed off-line, that is based on the positively invariant set. Each set of this sequence does not need to be invariant and can be computed by a procedure which provides an inner approximation to the one-step set. This property allows one-step approximations to be used in the computation of the sequence, with a trade-off between accuracy and computational burden. This strategy guarantees closed-loop stability while ensuring the enlargement of the domain of attraction and the local optimality of the controller. Moreover, this idea can be directly translated to robust MPC. (Ministerio de Ciencia y Tecnología, DPI2002-04375-c03-0)
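    As an illustration of the modified formulation, the following sketch sets up a finite-horizon problem whose terminal constraint is taken from an off-line sequence of nested sets: a state found in Ω_j must be steered into Ω_{j-1} by the end of the horizon. The plant, weights, horizon and the box-shaped sets Omega are placeholder assumptions (the paper treats general nonlinear systems and builds the sequence from one-step sets); cvxpy is used only to solve the toy quadratic program.

```python
import numpy as np
import cvxpy as cp

# Illustrative double-integrator data (not taken from the paper).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[0.1]])
P = 10.0 * np.eye(2)        # terminal weight, stand-in for the local Lyapunov function
u_max = 1.0
N = 5                       # short control horizon

# Off-line sequence of nested sets Omega_0 ⊂ Omega_1 ⊂ ... in H-representation {x : H x <= h}.
# Here they are simply scaled boxes; in the paper each set is an inner approximation
# of the one-step set of the previous one.
Omega = [(np.vstack((np.eye(2), -np.eye(2))), r * np.ones(4)) for r in (0.5, 1.0, 2.0, 4.0)]

def index_of(x):
    """Smallest j such that x lies in Omega_j (None if outside every set)."""
    for j, (H, h) in enumerate(Omega):
        if np.all(H @ x <= h + 1e-9):
            return j
    return None

def contractive_mpc(x0):
    """One MPC step with the contractive terminal constraint x_N in Omega_{j-1}."""
    j = index_of(x0)
    if j is None:
        raise ValueError("x0 is outside the enlarged domain of attraction")
    Ht, ht = Omega[max(j - 1, 0)]           # contract to the previous set of the sequence
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[0, k]) <= u_max]
    cost += cp.quad_form(x[:, N], P)
    cons += [Ht @ x[:, N] <= ht]            # contractive terminal constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]

print(contractive_mpc(np.array([3.0, 0.0])))
```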

    Enlarging the domain of attraction of MPC controller using invariant sets

    2002 IFAC 15th Triennial World Congress, Barcelona, Spain. This paper presents a method for enlarging the domain of attraction of nonlinear model predictive control (MPC). The usual way of guaranteeing stability of nonlinear MPC is to add a terminal constraint and a terminal cost to the optimization problem. The terminal constraint is a positively invariant set for the system and the terminal cost is an associated Lyapunov function. The domain of attraction of the controller depends on the size of the terminal region and the prediction horizon. By increasing the prediction horizon, the domain of attraction is enlarged, but at the expense of a greater computational burden. A strategy to enlarge the domain of attraction of MPC without increasing the prediction horizon is presented: the terminal constraint is replaced by a contractive terminal constraint given by a sequence of control invariant sets for the system. This strategy guarantees closed-loop stability under the same assumptions.
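    The basic off-line operation behind such a sequence is a one-step test: does some admissible input map a given state into the previous set in a single step? The sketch below, assuming a linear model, a box input bound and a polytopic target set in H-representation, phrases that test as a small feasibility LP; all matrices and bounds are illustrative placeholders, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative linear model and input bound (the papers treat general nonlinear systems).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
u_max = 1.0

def in_one_step_set(x, H, h):
    """True if some |u| <= u_max gives A x + B u inside the target set {z : H z <= h}.

    Feasibility is checked with a phase-1 style LP: minimise a slack s >= 0 subject to
    H (A x + B u) <= h + s; the point is in the one-step set iff the optimum is ~0.
    """
    n_rows = H.shape[0]
    # decision vector: [u, s]
    c = np.array([0.0, 1.0])
    A_ub = np.hstack((H @ B, -np.ones((n_rows, 1))))       # H B u - s <= h - H A x
    b_ub = h - H @ (A @ x)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-u_max, u_max), (0.0, None)], method="highs")
    return res.success and res.fun <= 1e-7

# Target set: the box {z : |z_i| <= 1}, written as H z <= h.
H = np.vstack((np.eye(2), -np.eye(2)))
h = np.ones(4)
print(in_one_step_set(np.array([0.5, 0.8]), H, h))   # expected True
print(in_one_step_set(np.array([5.0, 5.0]), H, h))   # expected False
```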

    An Improved Constraint-Tightening Approach for Stochastic MPC

    The problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving the average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity, is addressed. We propose a novel, less restrictive scheme which is based on considering stability and recursive feasibility separately. Through an explicit first step constraint we guarantee recursive feasibility. In particular we guarantee the existence of a feasible input trajectory at each time instant, but we only require that the input sequence computed at time k remains feasible at time k+1 for most disturbances but not necessarily for all, which suffices for stability. To overcome the computational complexity of probabilistic constraints, we propose an offline constraint-tightening procedure, which can be efficiently solved via a sampling approach to the desired accuracy. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with terminal region. A numerical example, which provides a comparison with classical, recursively feasible Stochastic MPC and Robust MPC, shows the efficacy of the proposed approach. Comment: Paper has been submitted to ACC 201
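    A minimal sketch of the offline, sampling-based tightening idea, under assumed error dynamics e+ = (A + B K) e + w with a prestabilising gain K: disturbance sequences are sampled, the closed-loop error is propagated, and the empirical (1 - p) quantile of the constraint function at each prediction step is used as the tightening margin for the nominal constraint. The matrices, gain, disturbance distribution and probability level are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative error dynamics e+ = (A + B K) e + w under an assumed prestabilising gain K.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = np.array([[-0.4, -1.2]])
Phi = A + B @ K
c = np.array([1.0, 0.0])          # state constraint c @ x <= b
b = 5.0
p_violation = 0.1                 # allowed violation probability per step (placeholder)
N = 10                            # prediction horizon
n_samples = 10_000

# Sample disturbance sequences, propagate the closed-loop error, and record c @ e_k.
vals = np.zeros((n_samples, N))
for i in range(n_samples):
    e = np.zeros(2)
    for k in range(N):
        w = rng.uniform(-0.1, 0.1, size=2)    # assumed disturbance distribution
        e = Phi @ e + w
        vals[i, k] = c @ e

# The empirical (1 - p) quantile per step is the offline tightening margin.
margins = np.quantile(vals, 1.0 - p_violation, axis=0)

# Online, the nominal prediction z_k is then required to satisfy c @ z_k <= b - margins[k].
print(np.round(margins, 4))
```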

    Improved MPC Design based on Saturating Control Laws

    This paper is concerned with the design of stabilizing model predictive control (MPC) laws for constrained linear systems. This is achieved by obtaining a suitable terminal cost and terminal constraint using a saturating control law as local controller. The system controlled by the saturating control law is modelled by a linear difference inclusion. Based on this, it is shown how to determine a Lyapunov function and a polyhedral invariant set which can be used as terminal cost and constraint. The obtained invariant set is potentially larger than the maximal invariant set for the unsaturated linear controller, O∞. Furthermore, considering these elements, a simple dual-mode MPC strategy is proposed. This dual-mode controller guarantees the enlargement of the domain of attraction or, equivalently, the reduction of the prediction horizon for a given initial state. If the local control law is the saturating linear quadratic regulator (LQR) controller, then the proposed dual-mode MPC controller retains the local infinite-horizon optimality. Finally, an illustrative example is given.
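    A polyhedral invariant set for the saturated loop can be obtained with a standard pre-set iteration on the linear difference inclusion. The sketch below assumes illustrative vertex matrices A + B K and A + B H and a box state constraint: at each pass the non-redundant inequalities of the pre-set are appended and the iteration stops when nothing new is needed. The gains are placeholders rather than the gains constructed in the paper, and no further redundancy removal is attempted.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: a saturated linear gain K is covered by the difference inclusion
# x+ in co{(A + B K) x, (A + B H) x}, with H an auxiliary gain (placeholder values).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = np.array([[-0.6, -1.2]])
Hg = np.array([[-0.3, -0.6]])
vertices = [A + B @ K, A + B @ Hg]

# State constraints X = {x : F x <= g} (a box here).
F = np.vstack((np.eye(2), -np.eye(2)))
g = 4.0 * np.ones(4)

def is_redundant(row, rhs, Fs, gs):
    """True if max row @ x over {x : Fs x <= gs} does not exceed rhs."""
    res = linprog(-row, A_ub=Fs, b_ub=gs, bounds=(None, None), method="highs")
    return res.success and -res.fun <= rhs + 1e-9

# Iterate Omega_{k+1} = {x in Omega_k : A_i x in Omega_k for every vertex A_i}.
# The fixed point, if reached, is a polyhedral invariant set for the inclusion.
F_inv, g_inv = F.copy(), g.copy()
for it in range(30):
    new_rows, new_rhs = [], []
    for Ai in vertices:
        for row, rhs in zip(F_inv @ Ai, g_inv):
            if not is_redundant(row, rhs, F_inv, g_inv):
                new_rows.append(row)
                new_rhs.append(rhs)
    if not new_rows:
        print(f"converged after {it} passes:", F_inv.shape[0], "inequalities")
        break
    F_inv = np.vstack((F_inv, np.array(new_rows)))
    g_inv = np.concatenate((g_inv, np.array(new_rhs)))
else:
    print("no convergence within the (illustrative) iteration bound")
```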

    Dual mode nonlinear model based predictive control with guaranteed stability

    This paper proposes a Nonlinear Model Predictive Control (NMPC) scheme with guaranteed stability, obtained through a dual-mode approach in which a PI controller is used inside the terminal region of the NMPC. Within this formulation, the terminal region (Ω) and the domain of attraction are computed using invariant set theory and a randomized Monte Carlo type algorithm. The proposal is complemented with a switching strategy that constrains the final control elements so as to smooth the commutation between controllers. The proposed dual-mode NMPC is implemented in simulation on a Continuous Stirred Tank Reactor and compared with a conventional multivariable NMPC and with two PI controllers. It is concluded that the proposed dual-mode NMPC is the control strategy that, in addition to guaranteeing stability, achieves the best performance.
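    A rough sketch of the two ingredients described above, on a stand-in linear plant rather than the CSTR model: a randomized (Monte Carlo) routine shrinks a quadratic level set until sampled boundary states no longer escape it under the PI law, and the online controller switches from NMPC to PI inside that estimated terminal region. The plant, PI gains, level-set shape and the NMPC stand-in are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in 2-state linear plant x+ = A x + B u (the paper uses a nonlinear CSTR model).
A = np.array([[0.9, 0.1], [0.0, 0.7]])
B = np.array([0.5, 0.0])
Kp, Ki = 0.8, 0.1                 # placeholder PI tuning
P = np.eye(2)                     # quadratic function defining the terminal level set

def pi_control(x, integ):
    """Discrete PI law regulating the first state to the origin."""
    err = -x[0]
    integ = integ + err
    return Kp * err + Ki * integ, integ

def estimate_terminal_region(n_samples=2000, c0=3.0, steps=30):
    """Randomized (Monte Carlo) estimate of a level set {x : x' P x <= c} that the
    PI-controlled loop does not leave: c shrinks whenever a sampled boundary
    state escapes the set within the simulated horizon."""
    c = c0
    for _ in range(n_samples):
        v = rng.normal(size=2)
        x = v / np.sqrt(v @ P @ v) * np.sqrt(c)    # sample on the boundary of the level set
        integ = 0.0
        for _ in range(steps):
            u, integ = pi_control(x, integ)
            x = A @ x + B * u
            if x @ P @ x > c + 1e-9:
                c *= 0.95                           # candidate too large, shrink it
                break
    return c

c_term = estimate_terminal_region()
print("estimated terminal region: x' P x <=", round(c_term, 3))

def dual_mode_control(x, integ, nmpc_solve):
    """Dual-mode logic: PI law inside the terminal region, NMPC outside it."""
    if x @ P @ x <= c_term:
        return pi_control(x, integ)
    return nmpc_solve(x), integ

# usage with a trivial stand-in for the NMPC solver
u0, integ0 = dual_mode_control(np.array([2.5, 0.5]), 0.0, lambda x: -0.4 * x[0])
```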

    Stability of Constrained Adaptive Model Predictive Control Algorithms

    Recently, suboptimality estimates for model predictive controllers (MPC) have been derived for the case without additional stabilizing endpoint constraints or a Lyapunov function type endpoint weight. The proposed methods yield a posteriori and a priori estimates of the degree of suboptimality with respect to the infinite horizon optimal control and can be evaluated at runtime of the MPC algorithm. Our aim is to design automatic adaptation strategies for the optimization horizon in order to guarantee stability and a predefined degree of suboptimality for the closed-loop solution. Here, we present a stability proof for an arbitrary adaptation scheme and state a simple shortening and prolongation strategy which can be used for adapting the optimization horizon. Comment: 6 pages, 2 figures
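    The adaptation loop can be sketched as follows: after each closed-loop step, an a posteriori suboptimality degree α is estimated from the relaxed dynamic programming inequality V_N(x_k) - V_N(x_{k+1}) >= α ℓ(x_k, u_k), and the horizon is shortened when α clears the prescribed bound and prolonged otherwise. The double-integrator plant, costs and the terminal-ingredient-free MPC below are stand-ins, and the shortening/prolongation rule is only a simplified version of the strategy in the paper.

```python
import numpy as np
import cvxpy as cp

# Simple stand-in plant and stage cost (the paper treats general nonlinear systems).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R = np.eye(2), 0.1 * np.eye(1)
u_max = 1.0

def mpc(x0, N):
    """MPC without terminal ingredients: returns the first input and the value V_N(x0)."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k], cp.abs(u[0, k]) <= u_max]
    prob = cp.Problem(cp.Minimize(cost), cons)
    prob.solve()
    return float(u.value[0, 0]), float(prob.value)

def stage_cost(x, u):
    return float(x @ Q @ x + u * R[0, 0] * u)

alpha_bar = 0.5                    # required suboptimality degree
N, N_min, N_max = 3, 2, 12
x = np.array([4.0, 0.0])

for step in range(20):
    u0, VN = mpc(x, N)
    x_next = A @ x + B.flatten() * u0
    _, VN_next = mpc(x_next, N)
    ell = stage_cost(x, u0)
    # a posteriori estimate from the relaxed dynamic programming inequality
    alpha = (VN - VN_next) / ell if ell > 1e-12 else 1.0
    if alpha >= alpha_bar:
        N = max(N - 1, N_min)      # performance bound met, try a shorter horizon
    else:
        N = min(N + 1, N_max)      # bound violated, prolong the horizon
    x = x_next
    print(f"step {step}: alpha = {alpha:.2f}, next horizon N = {N}")
```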

    Stochastic MPC with Dynamic Feedback Gain Selection and Discounted Probabilistic Constraints

    This paper considers linear discrete-time systems with additive disturbances, and designs a Model Predictive Control (MPC) law incorporating a dynamic feedback gain to minimise a quadratic cost function subject to a single chance constraint. The feedback gain is selected from a set of candidates generated by solutions of multiobjective optimisation problems solved by Dynamic Programming (DP). We provide two methods for gain selection based on minimising upper bounds on predicted costs. The chance constraint is defined as a discounted sum of violation probabilities on an infinite horizon. By penalising violation probabilities close to the initial time and ignoring violation probabilities in the far future, this form of constraint allows for an MPC law with guarantees of recursive feasibility without an assumption of boundedness of the disturbance. A computationally convenient MPC optimisation problem is formulated using Chebyshev's inequality, and we introduce an online constraint-tightening technique to ensure recursive feasibility. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition. With dynamic feedback gain selection, the conservativeness of Chebyshev's inequality is mitigated and the closed-loop cost is reduced with a larger set of feasible initial conditions. A numerical example is given to show these properties. Comment: 14 pages
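    A small sketch of the Chebyshev-based tightening for the discounted constraint, under an assumed prestabilising gain and disturbance covariance: the allowed level is spread uniformly over the discounted horizon, the error covariance is propagated, and Chebyshev's inequality is inverted to obtain a back-off for each prediction step. All numerical data below are placeholders, and the paper's online tightening update is not reproduced.

```python
import numpy as np

# Illustrative data: error dynamics e+ = (A + B K) e + w, w zero mean with covariance W.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = np.array([[-0.4, -1.0]])
Phi = A + B @ K
W = 0.01 * np.eye(2)

c = np.array([1.0, 0.0])     # constraint c @ x <= b on the true state
b = 2.0
beta = 0.9                   # discount factor in the probabilistic constraint
eps = 0.2                    # allowed discounted sum of violation probabilities
N = 20

# Spread the allowed level uniformly over the discounted horizon, then invert
# Chebyshev's inequality Pr(c @ e_k > t_k) <= (c' Sigma_k c) / t_k^2 to get back-offs.
p_step = eps / sum(beta**k for k in range(N))
Sigma = np.zeros((2, 2))
backoffs = []
for k in range(N):
    Sigma = Phi @ Sigma @ Phi.T + W           # error covariance propagation
    var_k = float(c @ Sigma @ c)
    backoffs.append(np.sqrt(var_k / p_step))  # Chebyshev back-off for budget p_step

# Online, the nominal prediction z_k is then required to satisfy c @ z_k <= b - backoffs[k].
print("per-step back-offs:", np.round(backoffs, 3))
```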

    Generalized Stabilizing Conditions for Model Predictive Control

    © 2015, The Author(s). This note addresses the tracking problem for model predictive control. It presents simple procedures for both linear and nonlinear constrained model predictive control when the desired equilibrium state is any point in a specified set. The resultant region of attraction is the union of the regions of attraction for each equilibrium state in the specified set and is therefore larger than that obtained when conventional model predictive control is employed.
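    The note builds on the tracking formulation in which an artificial equilibrium is optimised together with the trajectory. The sketch below, with an illustrative linear plant and a terminal equality constraint standing in for a terminal set, shows how an offset cost pulls the artificial equilibrium toward the desired target while the problem remains feasible even when the target itself is not an admissible equilibrium; all data are assumptions, not values from the note.

```python
import numpy as np
import cvxpy as cp

# Illustrative plant, constraints and weights (not taken from the note).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R = np.eye(2), 0.1 * np.eye(1)
T = 50.0                      # offset-cost weight on the distance to the target
u_max, x_max = 1.0, 10.0
N = 8

def tracking_mpc(x0, x_target):
    """MPC for tracking: the artificial equilibrium (xs, us) is a decision variable,
    the predicted trajectory ends at it, and an offset cost pulls it toward the target."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    xs = cp.Variable(2)
    us = cp.Variable(1)
    cost = T * cp.sum_squares(xs - x_target)          # offset cost
    cons = [x[:, 0] == x0,
            xs == A @ xs + B @ us,                    # (xs, us) must be an equilibrium
            cp.abs(us[0]) <= u_max,
            cp.norm_inf(xs) <= x_max]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - xs, Q) + cp.quad_form(u[:, k] - us, R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[0, k]) <= u_max,
                 cp.norm_inf(x[:, k]) <= x_max]
    cons += [x[:, N] == xs]                           # simplified terminal (equality) constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return float(u.value[0, 0]), xs.value

# Even when the target is not an admissible equilibrium, the problem stays feasible:
# the artificial equilibrium settles at the closest admissible one.
u0, xs_opt = tracking_mpc(np.array([0.0, 0.0]), np.array([20.0, 0.0]))
print("first input:", round(u0, 3), "artificial equilibrium:", np.round(xs_opt, 3))
```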