    Computational burden reduction in Min-Max MPC

    Min–max model predictive control (MMMPC) is one of the strategies used to control plants subject to bounded uncertainties. The implementation of MMMPC suffers from a large computational burden due to the complex numerical optimization problem that has to be solved at every sampling time. This paper shows how to overcome this by transforming the original problem into a reduced min–max problem whose solution is much simpler. In this way, the range of processes to which MMMPC can be applied is considerably broadened. Proofs based on the properties of the cost function and simulation examples are given in the paper.
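
    For readers scanning these abstracts, a generic min–max MPC formulation over a bounded uncertainty set is sketched below; the notation is illustrative and not taken from the paper.

```latex
% Generic min-max MPC problem over a prediction horizon N
% (illustrative notation; the paper's exact formulation may differ)
\min_{u_0,\dots,u_{N-1}} \; \max_{\theta_0,\dots,\theta_{N-1} \in \Theta} \;
  \sum_{k=0}^{N-1} \left( x_k^\top Q x_k + u_k^\top R u_k \right) + x_N^\top P x_N
\quad \text{s.t.} \quad
  x_{k+1} = A x_k + B u_k + D \theta_k, \quad
  u_k \in \mathcal{U}, \quad x_k \in \mathcal{X}.
```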

    Min–max MPC using a tractable QP problem

    Min–max model predictive controllers (MMMPC) suffer from a large computational burden, which is often circumvented by using approximate solutions or upper bounds of the worst-case value of the performance index. This paper proposes a computationally efficient MMMPC control strategy in which a close approximation of the solution of the min–max problem is computed by solving a quadratic programming problem. The overall computational burden is much lower than that of the min–max problem, and the resulting controller is shown to have guaranteed stability. A simulation example is given in the paper.
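
    A minimal sketch of an MPC step posed as a tractable QP is given below. It uses generic placeholder dynamics, weights and horizon with the cvxpy modeling library, and it does not reproduce the paper's specific min–max approximation; it only illustrates how a single MPC step reduces to a quadratic program.

```python
# Minimal sketch of one MPC step posed as a QP (generic nominal formulation;
# the paper's min-max approximation adds an upper-bound term not shown here).
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator model
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)        # placeholder weights
N = 10                                    # prediction horizon
x0 = np.array([1.0, 0.0])                 # current state

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 1.0]    # input constraint
cp.Problem(cp.Minimize(cost), constr).solve()
print("first control move:", u.value[:, 0])
```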

    Min-Max Predictive Control of a Pilot Plant using a QP Approach

    47th IEEE Conference on Decision and Control, 9–11 Dec. 2008. The practical implementation of min-max MPC (MMMPC) controllers is limited by the computational burden required to compute the control law. This problem can be circumvented by using approximate solutions or upper bounds of the worst-case value of the performance index. In a previous work, the authors presented a computationally efficient MMMPC control strategy in which a close approximation of the solution of the min-max problem is computed using a quadratic programming problem. In this paper, this approach is validated through its application to a pilot plant in which the temperature of a reactor is controlled. The behavior of the system and the controller is illustrated by means of experimental results.

    Min-Max MPC based on a computationally efficient upper bound of the worst case cost

    Min-Max MPC (MMMPC) controllers [P.J. Campo, M. Morari, Robust model predictive control, in: Proc. American Control Conference, June 10–12, 1987, pp. 1021–1026] suffer from a large computational burden which limits their applicability in industry. Upper bounds of the worst-case value of the performance index are sometimes used to reduce this burden. This paper proposes a computationally efficient MMMPC control strategy in which the worst-case cost is approximated by an upper bound based on a diagonalization scheme. The upper bound can be computed with O(n³) operations using only simple matrix operations. This implies that the algorithm can be coded easily even in non-mathematically-oriented programming languages such as those found in industrial embedded control hardware. A simulation example is given in the paper.
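
    As an illustration of the general idea of replacing the exact worst case by a cheaply computable upper bound, the sketch below bounds the maximum of a quadratic form over a box using only an eigenvalue computation. This is a generic bound, not the diagonalization scheme proposed in the paper, and the function and data names are assumptions.

```python
# Hedged sketch: a simple upper bound on the worst-case quadratic cost
#   max_{theta in [-1,1]^n}  theta' H theta + 2 f' theta + c
# using only basic matrix operations (an eigenvalue bound, O(n^3)).
# This is NOT the paper's diagonalization scheme, only a generic illustration
# of replacing the exact worst case by a cheaply computable upper bound.
import numpy as np

def worst_case_upper_bound(H, f, c=0.0):
    # theta' H theta <= max(lambda_max(H), 0) * n  and  2 f' theta <= 2 * sum|f_i|
    lam_max = np.linalg.eigvalsh(H).max()          # symmetric H assumed
    return max(lam_max, 0.0) * H.shape[0] + 2.0 * np.abs(f).sum() + c

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
H = M @ M.T                                        # PSD example matrix
f = rng.standard_normal(n)
print("upper bound on worst-case cost:", worst_case_upper_bound(H, f))
```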

    Performance-oriented model learning for data-driven MPC design

    Model Predictive Control (MPC) is an enabling technology in applications requiring control of physical processes in an optimized way under constraints on inputs and outputs. However, in MPC the closed-loop performance is pushed to the limits only if the plant under control is accurately modeled; otherwise, robust architectures need to be employed, at the price of reduced performance due to worst-case conservative assumptions. In this paper, instead of adapting the controller to handle uncertainty, we adapt the learning procedure so that the prediction model is selected to provide the best closed-loop performance. More specifically, we apply for the first time the above "identification for control" rationale to hierarchical MPC using data-driven methods and Bayesian optimization. Comment: Accepted for publication in the IEEE Control Systems Letters (L-CSS).
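
    The loop below sketches the "identification for control" rationale under stated assumptions: candidate prediction-model parameters are scored by the closed-loop performance they achieve, and the best-scoring model is kept. Plain random search stands in for the Bayesian optimizer used in the paper, and closed_loop_cost is a hypothetical placeholder.

```python
# Illustrative sketch of performance-oriented model selection: pick the
# prediction-model parameters that give the best *closed-loop* performance,
# rather than the best open-loop fit. Random search stands in for the
# Bayesian optimizer; closed_loop_cost is a dummy placeholder.
import numpy as np

def closed_loop_cost(model_params):
    """Placeholder: build an MPC from `model_params`, simulate the true plant
    in closed loop, and return the achieved performance index."""
    return float(np.sum((model_params - 0.3) ** 2))   # dummy stand-in

rng = np.random.default_rng(1)
best_params, best_cost = None, np.inf
for _ in range(50):                                   # a BO loop would propose
    candidate = rng.uniform(-1.0, 1.0, size=3)        # candidates via a surrogate
    cost = closed_loop_cost(candidate)
    if cost < best_cost:
        best_params, best_cost = candidate, cost
print("selected model parameters:", best_params, "cost:", best_cost)
```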

    A review of convex approaches for control, observation and safety of linear parameter varying and Takagi-Sugeno systems

    This paper provides a review of the concept of convex systems based on Takagi-Sugeno, linear parameter varying (LPV) and quasi-LPV modeling. These paradigms are capable of hiding the nonlinearities by means of an equivalent description which uses a set of linear models interpolated by appropriately defined weighting functions. Convex systems have become very popular since they allow applying extended linear techniques based on linear matrix inequalities (LMIs) to complex nonlinear systems. This survey aims at providing the reader with a significant overview of the existing LMI-based techniques for convex systems in the fields of control, observation and safety. Firstly, a detailed review of stability, feedback, tracking and model predictive control (MPC) convex controllers is considered. Secondly, the problem of state estimation is addressed through the design of proportional, proportional-integral, unknown input and descriptor observers. Finally, safety of convex systems is discussed by describing popular techniques for fault diagnosis and fault tolerant control (FTC).
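
    As a reminder of the objects the survey deals with, a generic Takagi-Sugeno/quasi-LPV convex representation and one standard quadratic-stability LMI are written out below; the notation is illustrative and not taken from the survey.

```latex
% Generic convex (Takagi-Sugeno / quasi-LPV) representation: a blend of r
% vertex models through convex weighting functions mu_i.
\dot{x}(t) = \sum_{i=1}^{r} \mu_i(\theta(t)) \left( A_i x(t) + B_i u(t) \right),
\qquad \mu_i(\theta) \ge 0, \quad \sum_{i=1}^{r} \mu_i(\theta) = 1.

% A standard open-loop quadratic-stability condition: a common Lyapunov
% matrix P satisfying one LMI per vertex model.
\exists\, P \succ 0 \;:\; A_i^\top P + P A_i \prec 0, \qquad i = 1,\dots,r.
```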

    Sparse and Constrained Stochastic Predictive Control for Networked Systems

    This article presents a novel class of control policies for networked control of Lyapunov-stable linear systems with bounded inputs. The control channel is assumed to have i.i.d. Bernoulli packet dropouts and the system is assumed to be affected by additive stochastic noise. Our proposed class of policies is affine in the past dropouts and saturated values of the past disturbances. We further consider a regularization term in a quadratic performance index to promote sparsity in control. We demonstrate how to augment the underlying optimization problem with a constant negative drift constraint to ensure mean-square boundedness of the closed-loop states, yielding a convex quadratic program to be solved periodically online. The states of the closed-loop plant under the receding-horizon implementation of the proposed class of policies are mean-square bounded for any positive bound on the control and any non-zero probability of successful transmission.
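
    For orientation, a schematic of the kind of policy class and sparsity-regularized objective described above might look as follows; all symbols are assumptions made for illustration, not the article's notation, and the placement of the regularizer is one plausible choice.

```latex
% Illustrative sketch (assumed notation): control affine in the dropout
% indicators nu_i and in the saturated past disturbances sat(w_i),
% with an l1 term on the nominal inputs to promote sparsity.
u_t \;=\; \eta_t \;+\; \sum_{i=0}^{t-1} \nu_i\, \Theta_{t,i}\, \mathrm{sat}(w_i),
\qquad \nu_i \in \{0,1\},
\\[4pt]
\min_{\eta,\,\Theta} \;\;
\mathbb{E}\!\left[\, \sum_{t=0}^{N-1} x_t^\top Q x_t + u_t^\top R u_t \right]
\;+\; \lambda \sum_{t=0}^{N-1} \lVert \eta_t \rVert_1 .
```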

    Explicit feedback synthesis for nonlinear robust model predictive control driven by quasi-interpolation

    We present QuIFS (Quasi-Interpolation driven Feedback Synthesis): an offline feedback synthesis algorithm for explicit nonlinear robust min-max model predictive control (MPC) problems with guaranteed quality of approximation. The underlying technique is driven by a particular type of grid-based quasi-interpolation scheme. The QuIFS algorithm departs drastically from conventional approximation algorithms that are employed in the MPC industry (in particular, it is neither based on multi-parametric programming tools nor does it involve kernel methods), and the essence of its point of departure is encoded in the following challenge-answer approach: given an error margin ε > 0, compute in a single stroke a feasible feedback policy that is uniformly ε-close to the optimal MPC feedback policy for a given nonlinear system subjected to constraints and bounded uncertainties. Closed-loop stability and recursive feasibility under the approximate feedback policy are also established. We provide a library of numerical examples to illustrate our results. Comment: 31 pages.
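
    The sketch below illustrates the basic building block named in the abstract, a grid-based quasi-interpolant with a Gaussian generating function, on a one-dimensional toy function; it is not the QuIFS algorithm itself, and the grid spacing, shape parameter and target function are assumptions.

```python
# Hedged sketch of grid-based quasi-interpolation with a Gaussian generating
# function ("approximate approximation" style):
#   Q_h f(x) = (pi*D)^(-1/2) * sum_m f(h*m) * exp(-(x - h*m)^2 / (D*h^2))
# This illustrates the building block named in the abstract, not QuIFS.
import numpy as np

def quasi_interpolant(f_vals, grid, D, h):
    """Return x -> (pi*D)^(-1/2) * sum_m f_vals[m] * exp(-(x-grid[m])^2/(D*h^2))."""
    def Qf(x):
        x = np.atleast_1d(x)[:, None]                      # (points, 1)
        w = np.exp(-((x - grid[None, :]) ** 2) / (D * h * h))
        return (np.pi * D) ** (-0.5) * (w @ f_vals)
    return Qf

h, D = 0.05, 2.0                      # grid spacing and shape parameter (assumed)
grid = np.arange(-3.0, 3.0 + h, h)    # uniform grid of nodes
f = np.sin                            # toy stand-in for a feedback law
Qf = quasi_interpolant(f(grid), grid, D, h)
x_test = np.linspace(-1.0, 1.0, 5)
print("max error on test points:", np.max(np.abs(Qf(x_test) - f(x_test))))
```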