3,142 research outputs found

    Iterative learning control for constrained linear systems

    This paper considers iterative learning control (ILC) for linear systems with convex control input constraints. First, the constrained ILC problem is formulated in a novel successive projection framework. Then, based on this projection method, two algorithms are proposed to solve the constrained ILC problem. The results show that, when perfect tracking is possible, both algorithms achieve it; the two algorithms differ, however, in that one requires much less computation than the other. When perfect tracking is not possible, both algorithms exhibit a form of practical convergence to a "best approximation". The effect of weighting matrices on the performance of the algorithms is also discussed and, finally, numerical simulations are given to demonstrate the effectiveness of the proposed methods.
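A projection-based ILC update of the kind described in this abstract can be sketched as a projected gradient iteration on a lifted system model. The following is a minimal illustrative sketch; the toy system, the box constraint set, and the gain choice are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def lifted_system(N=20, a=0.9, b=0.5):
    """Lower-triangular lifted (impulse-response) matrix G of the toy
    system x[t+1] = a*x[t] + b*u[t], y[t+1] = x[t+1], so that y = G @ u."""
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = b * a ** (i - j)
    return G

def project_box(u, u_min=-1.0, u_max=1.0):
    """Euclidean projection onto the convex box of admissible inputs."""
    return np.clip(u, u_min, u_max)

def ilc(G, y_ref, iters=3000):
    """Trial-to-trial update: gradient-type correction from the tracking
    error, then projection back onto the input-constraint set."""
    gain = 1.0 / np.linalg.norm(G, 2) ** 2   # step size ensuring convergence
    u = np.zeros(G.shape[1])
    for _ in range(iters):
        e = y_ref - G @ u                    # tracking error on this trial
        u = project_box(u + gain * (G.T @ e))
    return u

G = lifted_system()
y_ref = 0.3 * np.ones(20)      # reference reachable with admissible inputs
u = ilc(G, y_ref)
err = np.linalg.norm(y_ref - G @ u)
```

Because this reference is reachable with admissible inputs, the final tracking error is near zero; for an unreachable reference the same iteration settles at a constrained best approximation, mirroring the two convergence regimes described in the abstract.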

    Stochastic model predictive control of LPV systems via scenario optimization

    A stochastic receding-horizon control approach for constrained Linear Parameter Varying discrete-time systems is proposed in this paper. It is assumed that the time-varying parameters are stochastic and that the system's matrices are bounded but otherwise arbitrary nonlinear functions of these parameters. No specific assumption on the statistics of the parameters is required. By using a randomization approach, a scenario-based finite-horizon optimal control problem is formulated, where only a finite number M of sampled predicted parameter trajectories ('scenarios') are considered. This problem is convex and its solution is a priori guaranteed to be probabilistically robust, up to a user-defined probability level p. The p level is linked to M by an analytic relationship, which establishes a tradeoff between computational complexity and robustness of the solution. Then, a receding horizon strategy is presented, involving the iterated solution of a scenario-based finite-horizon control problem at each time step. Our key result is to show that the state trajectories of the controlled system reach a terminal positively invariant set in finite time, either deterministically, or with probability no smaller than p. The features of the approach are illustrated by a numerical example.
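The analytic link between the probability level and the number M of sampled scenarios mentioned in this abstract typically takes the form of a sufficient sample-size bound. A common bound from the scenario-optimization literature is M ≥ (2/ε)(ln(1/β) + d), where d is the number of decision variables, ε the allowed violation probability, and 1 − β the confidence; this particular bound is assumed here for illustration and may differ from the paper's exact relation:

```python
import math

def scenario_count(eps, beta, d):
    """Sufficient number of scenarios M >= (2/eps) * (ln(1/beta) + d)
    (a standard scenario-optimization bound, used here for illustration)."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

# e.g. 10 decision variables, 5% violation level, confidence 1 - 1e-6
M = scenario_count(eps=0.05, beta=1e-6, d=10)
```

Tightening ε or β increases M, which is exactly the complexity/robustness tradeoff the abstract refers to.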

    Robustly stable feedback min-max model predictive control


    Robust Constrained Model Predictive Control using Linear Matrix Inequalities

    The primary disadvantage of current design techniques for model predictive control (MPC) is their inability to deal explicitly with plant model uncertainty. In this paper, we present a new approach for robust MPC synthesis which allows explicit incorporation of the description of plant uncertainty in the problem formulation. The uncertainty is expressed both in the time domain and the frequency domain. The goal is to design, at each time step, a state-feedback control law which minimizes a "worst-case" infinite horizon objective function, subject to constraints on the control input and plant output. Using standard techniques, the problem of minimizing an upper bound on the "worst-case" objective function, subject to input and output constraints, is reduced to a convex optimization involving linear matrix inequalities (LMIs). It is shown that the feasible receding horizon state-feedback control design robustly stabilizes the set of uncertain plants under consideration. Several extensions, such as application to systems with time-delays and problems involving constant set-point tracking, trajectory tracking and disturbance rejection, which follow naturally from our formulation, are discussed. The controller design procedure is illustrated with two examples. Finally, conclusions are presented
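For the polytopic-uncertainty case, the upper-bound minimization described in this abstract becomes a semidefinite program. The following is a sketch of the standard LMI formulation from this line of work, with (A_j, B_j) the vertices of the uncertainty polytope and Q_1, R the output and input weights; the paper's full set of conditions (e.g. the additional input- and output-constraint LMIs) is omitted here:

```latex
\min_{\gamma,\; Q \succ 0,\; Y}\ \gamma
\quad \text{s.t.} \quad
\begin{bmatrix} 1 & x(k)^\top \\ x(k) & Q \end{bmatrix} \succeq 0,
\qquad
\begin{bmatrix}
Q & (A_j Q + B_j Y)^\top & (Q_1^{1/2} Q)^\top & (R^{1/2} Y)^\top \\
A_j Q + B_j Y & Q & 0 & 0 \\
Q_1^{1/2} Q & 0 & \gamma I & 0 \\
R^{1/2} Y & 0 & 0 & \gamma I
\end{bmatrix} \succeq 0 \quad \forall j.
```

The state-feedback law applied at time k is then u = F x with F = Y Q^{-1}, recomputed at each step in receding-horizon fashion.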

    Robust Model Predictive Control via Scenario Optimization

    This paper discusses a novel probabilistic approach for the design of robust model predictive control (MPC) laws for discrete-time linear systems affected by parametric uncertainty and additive disturbances. The proposed technique is based on the iterated solution, at each step, of a finite-horizon optimal control problem (FHOCP) that takes into account a suitable number of randomly extracted scenarios of uncertainty and disturbances, followed by a specific command selection rule implemented in a receding horizon fashion. The scenario FHOCP is always convex, even when the uncertain parameters and disturbance belong to non-convex sets, and irrespective of how the model uncertainty influences the system's matrices. Moreover, the computational complexity of the proposed approach does not depend on the uncertainty/disturbance dimensions, and scales quadratically with the control horizon. The main result in this paper is related to the analysis of the closed loop system under receding-horizon implementation of the scenario FHOCP, and essentially states that the devised control law guarantees constraint satisfaction at each step with some a-priori assigned probability p, while the system's state reaches the target set either asymptotically, or in finite time with probability at least p. The proposed method may be a valid alternative when other existing techniques, either deterministic or stochastic, are not directly usable due to excessive conservatism or to numerical intractability caused by lack of convexity of the robust or chance-constrained optimization problem.
    Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Automatic Control, with DOI: 10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of record will be available at http://ieeexplore.ieee.org
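As a toy illustration of a scenario program of this kind, consider a one-step scalar example with only a box constraint on the next state; the system, the sampling distribution of the uncertain parameter, and the minimum-effort objective are assumptions for illustration, not the paper's FHOCP. For a scalar input, the sampled constraints intersect to an interval and the scenario solution is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 2.0                        # current state
M = 100                        # number of sampled scenarios
a = rng.uniform(0.8, 1.2, M)   # sampled uncertain parameter (illustrative)
b = 1.0

# Each sampled scenario imposes |a_i*x + b*u| <= 1 on the next state.
# For scalar u these M constraints intersect to a single interval:
u_hi = np.min((1.0 - a * x) / b)
u_lo = np.max((-1.0 - a * x) / b)
assert u_lo <= u_hi, "scenario problem infeasible for this sample"

# Minimum-effort input satisfying every sampled scenario constraint
u = np.clip(0.0, u_lo, u_hi)
x_next = a * x + b * u         # next state under each sampled scenario
```

By the scenario-optimization guarantee, a solution feasible for all M sampled constraints is, with high confidence, also feasible for all but a small fraction of unseen parameter realizations, which is the sense in which the control law above satisfies constraints with an a-priori assigned probability.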