
    Model Predictive Control of stochastic LPV Systems via Random Convex Programs

    This paper considers the problem of stabilization of stochastic Linear Parameter Varying (LPV) discrete-time systems in the presence of convex state and input constraints. By using a randomization approach, a convex finite-horizon optimal control problem is derived, even when the dependence of the system's matrices on the time-varying parameters is nonlinear. This convex problem can be solved efficiently, and its solution is a priori guaranteed to be probabilistically robust, up to a user-defined probability level p. A novel receding horizon control strategy is then proposed, which involves, at each time step, the solution of a finite-horizon scenario-based control problem. It is shown that the resulting closed-loop scheme drives the state to a terminal set in finite time, either deterministically or with probability no less than p. The features of the approach are shown through a numerical example.
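    As a minimal sketch of the randomization idea in this abstract (not the paper's exact formulation): for a scalar LPV system x+ = a·x + u with sampled parameters a_i, each sampled scenario contributes one convex constraint on the input, so a one-step scenario program can even be solved in closed form by intersecting intervals. All numbers below (parameter range, state bound, scenario count) are illustrative assumptions.

```python
import random

random.seed(0)

# One-step scenario program for the scalar LPV system x+ = a*x + u:
# minimise |u| subject to |a_i*x0 + u| <= c for every sampled parameter a_i.
# Each scenario contributes one interval constraint on u, so the scenario
# program reduces to intersecting M intervals (convex; longer horizons
# would instead give a QP to pass to a numerical convex solver).
M = 100                       # number of sampled scenarios (illustrative)
x0, c = 1.0, 0.5              # initial state and state bound (illustrative)
a_samples = [random.uniform(0.9, 1.1) for _ in range(M)]

lo = max(-c - a * x0 for a in a_samples)   # tightest lower bound on u
hi = min(c - a * x0 for a in a_samples)    # tightest upper bound on u
assert lo <= hi, "scenario program infeasible for this sample"

u_star = min(max(0.0, lo), hi)             # minimum-effort feasible input
print(u_star)                              # meets all M scenario constraints
```

    The point of the sketch is only that finitely many sampled parameter trajectories turn the probabilistic constraint into a finite, convex constraint set; vector states and longer horizons keep that structure but require a numerical solver.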

    Stochastic model predictive control of LPV systems via scenario optimization

    A stochastic receding-horizon control approach for constrained Linear Parameter Varying discrete-time systems is proposed in this paper. It is assumed that the time-varying parameters have stochastic nature and that the system's matrices are bounded but otherwise arbitrary nonlinear functions of these parameters. No specific assumption on the statistics of the parameters is required. By using a randomization approach, a scenario-based finite-horizon optimal control problem is formulated, where only a finite number M of sampled predicted parameter trajectories ('scenarios') are considered. This problem is convex and its solution is a priori guaranteed to be probabilistically robust, up to a user-defined probability level p. The p level is linked to M by an analytic relationship, which establishes a tradeoff between computational complexity and robustness of the solution. Then, a receding horizon strategy is presented, involving the iterated solution of a scenario-based finite-horizon control problem at each time step. Our key result is to show that the state trajectories of the controlled system reach a terminal positively invariant set in finite time, either deterministically or with probability no smaller than p. The features of the approach are illustrated by a numerical example.
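    The abstract does not reproduce the analytic relationship linking p and M. As an illustration of its qualitative shape, the standard scenario-optimization bound of Calafiore and Campi for convex programs with d decision variables can be evaluated as follows; the code implements that standard bound, not necessarily the paper's exact relationship.

```python
from math import ceil, log

def scenario_sample_size(eps, beta, d):
    """Sufficient number of sampled scenarios M so that the optimal
    solution of a convex scenario program with d decision variables
    violates the chance constraint with probability at most eps,
    with confidence at least 1 - beta (standard Calafiore/Campi-style
    bound, shown only to illustrate the p-versus-M trade-off)."""
    return ceil(2.0 / eps * (log(1.0 / beta) + d))

# Tighter robustness (smaller eps = 1 - p) requires more scenarios,
# i.e. a larger convex problem: the complexity/robustness trade-off.
for eps in (0.10, 0.05, 0.01):
    print(eps, scenario_sample_size(eps, beta=1e-6, d=10))
```

    The bound grows like 1/eps, so halving the allowed violation level roughly doubles the number of sampled trajectories and hence the size of the convex problem solved at each time step.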

    An Improved Constraint-Tightening Approach for Stochastic MPC

    The problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving the average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity, is addressed. We propose a novel, less restrictive scheme which is based on considering stability and recursive feasibility separately. Through an explicit first-step constraint we guarantee recursive feasibility. In particular, we guarantee the existence of a feasible input trajectory at each time instant, but we only require that the input sequence computed at time k remains feasible at time k+1 for most disturbances, but not necessarily for all, which suffices for stability. To overcome the computational complexity of probabilistic constraints, we propose an offline constraint-tightening procedure, which can be efficiently solved via a sampling approach to the desired accuracy. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with terminal region. A numerical example, which provides a comparison with classical, recursively feasible Stochastic MPC and Robust MPC, shows the efficacy of the proposed approach.
    Comment: Paper has been submitted to ACC 201
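    A minimal sketch of such an offline sampling-based tightening, assuming a scalar error dynamic e+ = phi·e + w with uniform disturbances (all constants are illustrative, not taken from the paper): the bound on the nominal state is tightened at each prediction step by an empirical quantile of the sampled error.

```python
import random

random.seed(1)

# Offline tightening for the scalar error dynamic e+ = phi*e + w,
# w ~ U[-1, 1] (illustrative): sample disturbance sequences, propagate
# them, and tighten the state bound at step k by the empirical
# (1 - eps)-quantile of |e_k| -- a sampling approximation of the
# probabilistic tightening, computed once, before running the MPC.
phi, eps = 0.7, 0.1          # stable error dynamics, allowed violation level
N, S = 5, 2000               # prediction horizon, sampled sequences per step

tightening = []
for k in range(1, N + 1):
    errs = []
    for _ in range(S):
        e = 0.0
        for _ in range(k):
            e = phi * e + random.uniform(-1.0, 1.0)
        errs.append(abs(e))
    errs.sort()
    tightening.append(errs[int((1.0 - eps) * S) - 1])  # empirical quantile

print(tightening)   # amounts subtracted from the nominal state bound
```

    Because everything above runs offline, the online problem keeps nominal-MPC complexity: the controller just uses the pre-computed, step-dependent tightened bounds.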

    Stochastic MPC for additive and multiplicative uncertainty using sample approximations

    © 2019 Institute of Electrical and Electronics Engineers Inc. All rights reserved. We introduce an approach for model predictive control (MPC) of systems with additive and multiplicative stochastic uncertainty subject to chance constraints. Predicted states are bounded within a tube and the chance constraint is considered in a “one step ahead” manner, with robust constraints applied over the remainder of the horizon. The online optimization is formulated as a chance-constrained program that is solved approximately using sampling. We prove that if the optimization is initially feasible, it remains feasible and the closed-loop system is stable. Applying the chance constraint only one step ahead allows us to state a confidence bound for satisfaction of the chance constraint in closed loop. Finally, we demonstrate by example that the resulting controller is only mildly more conservative than scenario MPC approaches that have no feasibility guarantee.
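    A rough sketch of the “one step ahead” idea with sampling standing in for the chance constraint (system, bounds, and disturbance law are all illustrative assumptions, not the paper's): a candidate first input is accepted only if the sampled fraction of admissible successor states reaches the level p, while the rest of the horizon would carry robust, worst-case constraints.

```python
import random

random.seed(2)

# Scalar system x1 = a*x0 + u0 + w with w ~ U[-0.3, 0.3] (illustrative).
# The chance constraint Pr(x1 <= c) >= p is imposed only on the first
# predicted step and checked empirically from S disturbance samples.
a, x0, c, p = 1.0, 0.8, 1.0, 0.9
S = 5000                      # disturbance samples per candidate input

def empirical_satisfaction(u0):
    """Sampled estimate of Pr(a*x0 + u0 + w <= c)."""
    hits = sum(1 for _ in range(S)
               if a * x0 + u0 + random.uniform(-0.3, 0.3) <= c)
    return hits / S

for u0 in (-0.05, 0.05):      # two candidate first inputs
    level = empirical_satisfaction(u0)
    print(u0, level, "accept" if level >= p else "reject")
```

    With these illustrative numbers the more cautious input u0 = -0.05 clears the level p while u0 = 0.05 does not, which is the sense in which the first-step chance constraint filters inputs without imposing worst-case feasibility on them.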