
    On the convergence of stochastic MPC to terminal modes of operation

    The stability of stochastic Model Predictive Control (MPC) subject to additive disturbances is often demonstrated in the literature by constructing Lyapunov-like inequalities that guarantee closed-loop performance bounds and boundedness of the state, but convergence to a terminal control law is typically not shown. In this work we use results on general state space Markov chains to find conditions that guarantee convergence of disturbed nonlinear systems to terminal modes of operation, so that they converge in probability to a priori known terminal linear feedback laws and achieve time-average performance equal to that of the terminal control law. We discuss implications for the convergence of control laws in stochastic MPC formulations; in particular, we prove convergence for two formulations of stochastic MPC.
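
    For reference, a Lyapunov-like inequality of the kind mentioned above can be written, in generic notation that is not taken from the paper, as

        \mathbb{E}\big[ V(x_{k+1}) \mid x_k \big] - V(x_k) \;\le\; -\,\ell(x_k, u_k) + \ell_{ss},

    which, after summing over k and taking expectations, gives the time-average performance bound

        \limsup_{T \to \infty} \frac{1}{T} \sum_{k=0}^{T-1} \mathbb{E}\big[ \ell(x_k, u_k) \big] \;\le\; \ell_{ss},

    where V is a Lyapunov-like function, \ell the stage cost, and \ell_{ss} the expected stage cost under the terminal control law. Bounds of this type establish boundedness and average performance but not, by themselves, convergence to the terminal feedback law, which is the gap addressed here.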

    Robust Constrained Model Predictive Control using Linear Matrix Inequalities

    The primary disadvantage of current design techniques for model predictive control (MPC) is their inability to deal explicitly with plant model uncertainty. In this paper, we present a new approach for robust MPC synthesis which allows explicit incorporation of the description of plant uncertainty in the problem formulation. The uncertainty is expressed both in the time domain and the frequency domain. The goal is to design, at each time step, a state-feedback control law which minimizes a "worst-case" infinite horizon objective function, subject to constraints on the control input and plant output. Using standard techniques, the problem of minimizing an upper bound on the "worst-case" objective function, subject to input and output constraints, is reduced to a convex optimization involving linear matrix inequalities (LMIs). It is shown that the feasible receding horizon state-feedback control design robustly stabilizes the set of uncertain plants under consideration. Several extensions, such as application to systems with time-delays and problems involving constant set-point tracking, trajectory tracking and disturbance rejection, which follow naturally from our formulation, are discussed. The controller design procedure is illustrated with two examples. Finally, conclusions are presented.
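
    The reduction to LMIs lends itself to a direct prototype with an off-the-shelf semidefinite programming layer. The sketch below is a minimal illustration using CVXPY with a two-vertex polytopic uncertainty set: it minimizes an upper bound gamma on the worst-case infinite-horizon cost and recovers the state feedback F = Y Q^{-1}. The system data, weights, and regularization constant are assumptions made for the example, and the input/output constraint LMIs are omitted for brevity.

        import numpy as np
        import cvxpy as cp

        # Polytopic uncertainty: the true (A, B) lies in the convex hull of these vertices (illustrative data).
        A_verts = [np.array([[1.0, 0.1], [0.0, 1.0]]),
                   np.array([[1.0, 0.1], [0.0, 0.9]])]
        B_verts = [np.array([[0.0], [0.1]])] * 2
        n, m = 2, 1
        Q1, R = np.eye(n), np.eye(m)        # state and input weights (assumed)
        x0 = np.array([1.0, 0.5])           # current state (assumed)

        gamma = cp.Variable(nonneg=True)            # upper bound on the worst-case cost
        Q = cp.Variable((n, n), symmetric=True)     # Q = gamma * P^{-1}
        Y = cp.Variable((m, n))                     # Y = F Q, with F the state-feedback gain

        cons = [Q >> 1e-6 * np.eye(n),
                # x0 must lie in the invariant ellipsoid {x : x^T Q^{-1} x <= 1}
                cp.bmat([[np.ones((1, 1)), x0.reshape(1, -1)],
                         [x0.reshape(-1, 1), Q]]) >> 0]

        L1c, L2c = np.linalg.cholesky(Q1), np.linalg.cholesky(R)   # Q1 = L1c L1c^T, R = L2c L2c^T
        for A, B in zip(A_verts, B_verts):
            AQBY = A @ Q + B @ Y
            # Schur-complement form of the cost-decrease condition at each uncertainty vertex
            cons.append(cp.bmat([
                [Q,          AQBY.T,            Q @ L1c,            Y.T @ L2c],
                [AQBY,       Q,                 np.zeros((n, n)),   np.zeros((n, m))],
                [L1c.T @ Q,  np.zeros((n, n)),  gamma * np.eye(n),  np.zeros((n, m))],
                [L2c.T @ Y,  np.zeros((m, n)),  np.zeros((m, n)),   gamma * np.eye(m)],
            ]) >> 0)

        cp.Problem(cp.Minimize(gamma), cons).solve()
        F = Y.value @ np.linalg.inv(Q.value)        # robustly stabilizing feedback u = F x
        print("cost bound:", gamma.value, "\nF =", F)

    In a receding-horizon implementation, a semidefinite program of this kind would be re-solved at every time step with the current state in place of x0.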

    A New Contraction-Based NMPC Formulation Without Stability-Related Terminal Constraints

    Contraction-based Nonlinear Model Predictive Control (NMPC) formulations are attractive because of the generally short prediction horizons they require and because they dispense with the terminal set computations that are commonly needed to guarantee stability. However, including the contraction constraint in the definition of the underlying optimization problem often leads to non-standard features, such as the need for multi-step open-loop application of control sequences or multi-step memorization of the contraction level, which may induce infeasibility in the presence of unexpected disturbances. This paper proposes a new formulation of contraction-based NMPC in which no contraction constraint is explicitly involved. Convergence of the resulting closed-loop behavior is proved under mild assumptions. Comment: accepted in short version at IFAC NOLCOS 2016; submitted to Automatica as a technical communiqué.
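
    For context, a conventional contraction constraint in NMPC commonly takes a form such as (generic notation, not the paper's)

        \| x(k+N \mid k) \|_{P} \;\le\; \gamma \, \| x(k) \|_{P}, \qquad 0 < \gamma < 1,

    requiring the predicted state at the end of the horizon to have contracted by a fixed factor in a suitable norm. Enforcing this as an explicit constraint is what gives rise to the multi-step open-loop application and memorization issues noted above; the proposed formulation avoids carrying such a constraint inside the optimization problem.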

    Stability for Receding-horizon Stochastic Model Predictive Control

    A stochastic model predictive control (SMPC) approach is presented for discrete-time linear systems with arbitrary time-invariant probabilistic uncertainties and additive Gaussian process noise. Closed-loop stability of the SMPC approach is established by appropriate selection of the cost function. Polynomial chaos is used for uncertainty propagation through the system dynamics. The performance of the SMPC approach is demonstrated using the Van de Vusse reactions. Comment: American Control Conference (ACC) 201
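
    To illustrate the uncertainty-propagation step in isolation, the following minimal sketch uses a non-intrusive (sampling plus least-squares) polynomial chaos expansion to push a single Gaussian, time-invariant parameter through scalar linear dynamics; the dynamics, expansion order, and sample count are illustrative assumptions, and the Galerkin-projection variant often used inside SMPC is not shown.

        import math
        import numpy as np
        from numpy.polynomial.hermite_e import hermevander

        rng = np.random.default_rng(0)
        N, order, steps = 400, 3, 20
        xi = rng.standard_normal(N)              # standard-normal germ
        a = 0.8 + 0.05 * xi                      # uncertain, time-invariant pole (illustrative)

        # Propagate only the parametric uncertainty through x_{k+1} = a x_k + b u_k
        x = np.ones(N)
        for _ in range(steps):
            x = a * x + 0.1                      # constant input, no process noise in this sketch

        # Least-squares fit of probabilists' Hermite PCE coefficients of the terminal state
        Phi = hermevander(xi, order)             # columns He_0(xi), ..., He_order(xi)
        c, *_ = np.linalg.lstsq(Phi, x, rcond=None)

        mean = c[0]                              # E[x]: He_0 = 1, higher-order terms have zero mean
        var = sum(c[k]**2 * math.factorial(k) for k in range(1, order + 1))  # E[He_k^2] = k!
        print("PCE mean:", mean, " PCE variance:", var)

    Inside an SMPC scheme, moments obtained this way would enter the cost function and constraints at each prediction step.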