Robust Constrained Model Predictive Control using Linear Matrix Inequalities
The primary disadvantage of current design techniques for model predictive control (MPC) is their inability to deal explicitly with plant model uncertainty. In this paper, we present a new approach for robust MPC synthesis which allows explicit incorporation of the description of plant uncertainty in the problem formulation. The uncertainty is expressed both in the time domain and the frequency domain. The goal is to design, at each time step, a state-feedback control law which minimizes a "worst-case" infinite horizon objective function, subject to constraints on the control input and plant output. Using standard techniques, the problem of minimizing an upper bound on the "worst-case" objective function, subject to input and output constraints, is reduced to a convex optimization involving linear matrix inequalities (LMIs). It is shown that the feasible receding horizon state-feedback control design robustly stabilizes the set of uncertain plants under consideration. Several extensions, such as application to systems with time-delays and problems involving constant set-point tracking, trajectory tracking and disturbance rejection, which follow naturally from our formulation, are discussed. The controller design procedure is illustrated with two examples. Finally, conclusions are presented.
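The robust-stability certificate underlying this line of work can be illustrated with a minimal sketch. The snippet below is not the paper's LMI synthesis; it only checks the sufficient condition behind it: for a polytopic uncertainty set, a single quadratic Lyapunov function V(x) = xᵀPx certifies robust stability of x(k+1) = A(k)x(k) if AᵢᵀPAᵢ − P is negative definite at every vertex Aᵢ. The 2×2 matrices and the candidate P here are illustrative assumptions.

```python
# Sketch: verifying a common quadratic Lyapunov function V(x) = x^T P x
# at every vertex of a polytopic uncertainty set (2x2 case, pure Python).
# Negative definiteness of A_i^T P A_i - P at each vertex is a sufficient
# condition for robust stability of x(k+1) = A(k) x(k), A(k) in the polytope.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def is_negative_definite(S):
    # 2x2 symmetric matrix: eigenvalues follow from trace and determinant.
    tr = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    disc = (tr * tr - 4.0 * det) ** 0.5
    return (tr + disc) / 2.0 < 0.0  # largest eigenvalue must be negative

def robustly_stable(vertices, P):
    for A in vertices:
        M = mat_mul(mat_mul(transpose(A), P), A)
        S = [[M[i][j] - P[i][j] for j in range(2)] for i in range(2)]
        if not is_negative_definite(S):
            return False
    return True
```

In practice the paper's approach does more: P (and the feedback gain) are decision variables found by an SDP solver subject to LMI constraints, rather than checked for a fixed candidate as above.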
Robust constrained model predictive control based on parameter-dependent Lyapunov functions
The problem of robust constrained model predictive control (MPC) of systems with polytopic uncertainties is considered in this paper. New sufficient conditions for the existence of parameter-dependent Lyapunov functions are proposed in terms of linear matrix inequalities (LMIs), which will reduce the conservativeness resulting from using a single Lyapunov function. At each sampling instant, the corresponding parameter-dependent Lyapunov function is an upper bound for a worst-case objective function, which can be minimized using the LMI convex optimization approach. Based on the solution of optimization at each sampling instant, the corresponding state feedback controller is designed, which can guarantee that the resulting closed-loop system is robustly asymptotically stable. In addition, the feedback controller will meet the specifications for systems with input or output constraints, for all admissible time-varying parameter uncertainties. Numerical examples are presented to demonstrate the effectiveness of the proposed techniques.
A review of convex approaches for control, observation and safety of linear parameter varying and Takagi-Sugeno systems
This paper provides a review about the concept of convex systems based on Takagi-Sugeno, linear parameter varying (LPV) and quasi-LPV modeling. These paradigms are capable of hiding the nonlinearities by means of an equivalent description which uses a set of linear models interpolated by appropriately defined weighting functions. Convex systems have become very popular since they allow applying extended linear techniques based on linear matrix inequalities (LMIs) to complex nonlinear systems. This survey aims at providing the reader with a significant overview of the existing LMI-based techniques for convex systems in the fields of control, observation and safety. Firstly, a detailed review of stability, feedback, tracking and model predictive control (MPC) convex controllers is considered. Secondly, the problem of state estimation is addressed through the design of proportional, proportional-integral, unknown input and descriptor observers. Finally, safety of convex systems is discussed by describing popular techniques for fault diagnosis and fault tolerant control (FTC).
Stochastic model predictive control for constrained networked control systems with random time delay
In this paper the continuous time stochastic constrained optimal control problem is formulated for the class of networked control systems, assuming that time delays follow a discrete-time, finite Markov chain. Polytopic overapproximations of the system's trajectories are employed to produce a polyhedral inner approximation of the non-convex constraint set resulting from imposing the constraints in continuous time. The problem is cast in a Markov jump linear systems (MJLS) framework and a stochastic MPC controller is calculated explicitly, offline, by coupling dynamic programming with parametric piecewise quadratic (PWQ) optimization. The calculated control law leads to stochastic stability of the closed-loop system in the mean-square sense and respects the state and input constraints in continuous time.
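The delay model assumed above can be sketched in a few lines. This is not the paper's controller, only an illustration of its premise: network-induced delays are treated as the state of a finite Markov chain, where entry P[i][j] of the transition matrix gives the probability of moving from delay value i to delay value j. The transition matrix and delay values below are assumptions for illustration.

```python
import random

# Illustrative sketch of the MJLS delay model: network-induced delays
# evolve as a finite Markov chain. P[i][j] is the probability of
# transitioning from delay state i to delay state j; `delays` maps each
# chain state to its delay value.

def simulate_delays(P, delays, steps, seed=0):
    rng = random.Random(seed)
    state = 0
    trace = []
    for _ in range(steps):
        trace.append(delays[state])
        # Draw the next chain state according to row `state` of P.
        state = rng.choices(range(len(delays)), weights=P[state])[0]
    return trace
```

A controller in the MJLS framework would then use the current chain state (the measured delay) to select among precomputed, mode-dependent feedback laws.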
An Improved Constraint-Tightening Approach for Stochastic MPC
The problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving the average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity, is addressed. We propose a novel, less restrictive scheme which is based on considering stability and recursive feasibility separately. Through an explicit first-step constraint we guarantee recursive feasibility. In particular, we guarantee the existence of a feasible input trajectory at each time instant, but we only require that the input sequence computed at one time instant remains feasible at the next for most disturbances, but not necessarily for all, which suffices for stability. To overcome the computational complexity of probabilistic constraints, we propose an offline constraint-tightening procedure, which can be efficiently solved via a sampling approach to the desired accuracy. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with terminal region. A numerical example, which provides a comparison with classical, recursively feasible Stochastic MPC and Robust MPC, shows the efficacy of the proposed approach. (Comment: paper has been submitted to ACC 201)
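The sampling-based offline tightening mentioned above can be sketched as follows. This is a hedged illustration, not the paper's exact procedure: the function names, the scalar setting, and the Gaussian disturbance are assumptions. The idea is to draw disturbance samples offline and shrink a nominal bound so that the tightened constraint is violated by at most a fraction `eps` of the sampled disturbances.

```python
import random

# Hedged sketch of offline, sampling-based constraint tightening
# (illustrative names; Gaussian disturbance is an assumption).
# Shrinks a scalar state bound x <= x_max by the empirical
# (1 - eps)-quantile of the additive disturbance, so the original
# bound holds for at least a fraction (1 - eps) of sampled disturbances.

def tightened_bound(x_max, eps, n_samples=10000, sigma=0.1, seed=0):
    rng = random.Random(seed)
    samples = sorted(rng.gauss(0.0, sigma) for _ in range(n_samples))
    # Index of the empirical (1 - eps)-quantile.
    k = min(int((1.0 - eps) * n_samples), n_samples - 1)
    margin = samples[k]
    return x_max - margin
```

Because the quantile is computed from samples once, offline, the online MPC only sees a fixed, tightened deterministic bound, which is why the online complexity stays close to that of nominal MPC.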