291 research outputs found

    A tractable approximation of chance constrained stochastic MPC based on affine disturbance feedback

    This paper deals with model predictive control of uncertain linear discrete-time systems with polytopic constraints on the input and chance constraints on the states. With polytopic constraints and bounded disturbances, the robust problem in an open-loop prediction formulation is known to be conservative. Recently, a tractable closed-loop prediction formulation was introduced, which can reduce the conservatism of the robust problem. We show that, in the presence of chance constraints and stochastic disturbances, this closed-loop formulation can be combined with a tractable approximation of the chance constraints to further improve performance while satisfying the chance constraints with the predefined probability.
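
    For orientation, a minimal sketch of the ingredients this line of work builds on, in generic notation rather than the paper's own: under affine disturbance feedback the predicted inputs are

        u_i = v_i + \sum_{j=0}^{i-1} M_{i,j} w_j, \qquad i = 0, \dots, N-1,

    and an individual state chance constraint \Pr\{ f^\top x_i \le g \} \ge 1 - \epsilon is typically replaced by a tightened deterministic constraint f^\top \bar{x}_i \le g - \gamma_i(\epsilon) on the nominal prediction \bar{x}_i, where the margin \gamma_i(\epsilon) is computed from the disturbance distribution and the chosen gains M_{i,j}.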

    Stability for Receding-horizon Stochastic Model Predictive Control

    A stochastic model predictive control (SMPC) approach is presented for discrete-time linear systems with arbitrary time-invariant probabilistic uncertainties and additive Gaussian process noise. Closed-loop stability of the SMPC approach is established by appropriate selection of the cost function. Polynomial chaos is used for uncertainty propagation through system dynamics. The performance of the SMPC approach is demonstrated using the Van de Vusse reactions. Comment: American Control Conference (ACC) 201
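
    For context, and in generic notation not specific to this paper, a polynomial chaos expansion approximates the uncertain state as a truncated series in orthogonal polynomials of the random parameters \theta:

        x(t, \theta) \approx \sum_{i=0}^{P} a_i(t)\, \phi_i(\theta),

    where the basis functions \phi_i are orthogonal with respect to the distribution of \theta (e.g. Hermite polynomials for Gaussian uncertainty). Propagating the deterministic coefficients a_i(t) through the dynamics then yields moments such as the mean and variance of the state directly from the coefficients.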

    Stochastic Model Predictive Control Using Simplified Affine Disturbance Feedback for Chance-Constrained Systems

    This letter addresses model predictive control of linear discrete-time systems subject to stochastic additive disturbances and chance constraints on the state and control input. We propose a simplified control parameterization under the framework of affine disturbance feedback, and we show that our method is equivalent to parameterization over the family of state feedback policies. With our method, the associated finite-horizon optimization problem can be solved efficiently, at the cost of a slight increase in conservativeness compared with the conventional affine disturbance feedback parameterization.
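
    A brief reminder of why disturbance feedback is the standard route to tractability here, stated in generic notation rather than the letter's own: stacking the predictions as x = \bar{A} x_0 + \bar{B} u + \bar{E} w and the policy as u = M w + v, with M strictly block lower triangular (causal), gives

        x = \bar{A} x_0 + \bar{B} v + (\bar{B} M + \bar{E}) w,

    which is affine in the decision variables (M, v), so the cost and tightened constraints remain convex; the equivalent affine state feedback u = K x + c would instead make the predictions depend nonlinearly on K.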

    An Improved Constraint-Tightening Approach for Stochastic MPC

    The problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving the average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity, is addressed. We propose a novel, less restrictive scheme which is based on considering stability and recursive feasibility separately. Through an explicit first-step constraint we guarantee recursive feasibility. In particular we guarantee the existence of a feasible input trajectory at each time instant, but we only require that the input sequence computed at time k remains feasible at time k+1 for most disturbances but not necessarily for all, which suffices for stability. To overcome the computational complexity of probabilistic constraints, we propose an offline constraint-tightening procedure, which can be efficiently solved via a sampling approach to the desired accuracy. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with terminal region. A numerical example, which provides a comparison with classical, recursively feasible Stochastic MPC and Robust MPC, shows the efficacy of the proposed approach. Comment: Paper has been submitted to ACC 201
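
    As a loose illustration of what a sampling-based offline tightening can look like (a generic sketch under assumed dynamics, feedback gain, and disturbance model, not necessarily the paper's exact procedure):

        # Generic sketch: propagate the closed-loop error e_{i+1} = (A + B K) e_i + w_i
        # over many sampled disturbance sequences and take empirical (1 - eps)-quantiles
        # of f^T e_i as offline tightening margins for the constraint f^T x <= g.
        import numpy as np

        rng = np.random.default_rng(0)
        A = np.array([[1.0, 1.0], [0.0, 1.0]])
        B = np.array([[0.5], [1.0]])
        K = np.array([[-0.6, -1.2]])          # assumed prestabilizing feedback gain
        f = np.array([1.0, 0.0])              # constraint direction: f^T x <= g
        N, eps, n_samples = 10, 0.1, 10000    # horizon, violation level, sample count
        sigma_w = 0.1                          # assumed disturbance standard deviation

        Acl = A + B @ K
        margins = np.zeros(N)
        errs = np.zeros((n_samples, A.shape[0]))   # e_0 = 0 for every sample
        for i in range(1, N):
            w = sigma_w * rng.standard_normal((n_samples, A.shape[0]))
            errs = errs @ Acl.T + w
            # empirical (1 - eps)-quantile of the constraint value over the samples
            margins[i] = np.quantile(errs @ f, 1.0 - eps)

        print("tightening margins:", np.round(margins, 3))
        # Online, the nominal constraint at step i becomes f^T z_i <= g - margins[i].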

    On Stochastic Model Predictive Control with Bounded Control Inputs

    This paper is concerned with the problem of Model Predictive Control and Rolling Horizon Control of discrete-time systems subject to possibly unbounded random noise inputs, while satisfying hard bounds on the control inputs. We use a nonlinear feedback policy with respect to noise measurements and show that the resulting mathematical program has a tractable convex solution in both cases. Moreover, under the assumption that the zero-input and zero-noise system is asymptotically stable, we show that the variance of the state, under the resulting Model Predictive Control and Rolling Horizon Control policies, is bounded. Finally, we provide some numerical examples on how certain matrices in the underlying mathematical program can be calculated off-line. Comment: 8 page
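
    One common way to realize such a policy under hard input bounds, written in generic notation (the paper's exact construction may differ), is to take the inputs affine in bounded nonlinear functions of the measured noise,

        u_i = \eta_i + \sum_{j=0}^{i-1} \theta_{i,j}\, \varphi(w_j),

    where \varphi is an element-wise, saturation-like bounded function; bounding \eta_i and the gains \theta_{i,j} then enforces the hard input constraints even for unbounded noise, while the finite-horizon problem stays convex in (\eta, \theta).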