
    An Improved Constraint-Tightening Approach for Stochastic MPC

    We address the problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity. We propose a novel, less restrictive scheme based on treating stability and recursive feasibility separately. An explicit first-step constraint guarantees recursive feasibility: a feasible input trajectory exists at each time instant, but the input sequence computed at time k is only required to remain feasible at time k+1 for most disturbances, not necessarily for all, which suffices for stability. To overcome the computational complexity of the probabilistic constraints, we propose an offline constraint-tightening procedure that can be solved efficiently, to the desired accuracy, via a sampling approach. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with a terminal region. A numerical example comparing the approach with classical recursively feasible Stochastic MPC and with Robust MPC shows its efficacy.
    Comment: Paper has been submitted to ACC 201
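    As a rough illustration of the offline sampling idea mentioned above (not the paper's algorithm), the sketch below estimates per-step tightening margins for a prestabilized error system by Monte Carlo; the matrices, feedback gain, disturbance model, and numbers are all assumptions.

```python
# Hypothetical offline sampling sketch: estimate constraint-tightening margins
# for a prestabilized error system e_{k+1} = (A + B K) e_k + w_k, so that the
# state constraint h^T x <= 1 holds with probability at least 1 - eps.
# All system matrices, the feedback K, and the disturbance model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-1.2, -1.4]])          # assumed stabilizing feedback
Ak = A + B @ K
h = np.array([1.0, 0.0])              # constraint h^T x <= 1 on the first state
eps = 0.1                              # allowed violation probability
N = 10                                 # prediction horizon
n_samples = 10_000

# Propagate sampled disturbance sequences through the error dynamics and record
# the constraint value h^T e_k at each step of the horizon.
vals = np.zeros((n_samples, N))
for i in range(n_samples):
    e = np.zeros(2)
    for k in range(N):
        w = rng.normal(0.0, 0.02, size=2)   # assumed disturbance distribution
        e = Ak @ e + w
        vals[i, k] = h @ e

# The tightening at step k is the (1 - eps)-quantile of h^T e_k: the nominal
# prediction must satisfy h^T z_k <= 1 - margins[k].
margins = np.quantile(vals, 1.0 - eps, axis=0)
print("per-step tightening margins:", np.round(margins, 4))
```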

    Stochastic Model Predictive Control with Discounted Probabilistic Constraints

    This paper considers linear discrete-time systems with additive disturbances and designs a Model Predictive Control (MPC) law to minimise a quadratic cost function subject to a chance constraint. The chance constraint is defined as a discounted sum of violation probabilities on an infinite horizon. By penalising violation probabilities close to the initial time and ignoring violation probabilities in the far future, this form of constraint allows feasibility of the online optimisation to be guaranteed without assuming bounded disturbances. A computationally convenient MPC optimisation problem is formulated using Chebyshev's inequality, and an online constraint-tightening technique based on knowledge of a suboptimal solution ensures recursive feasibility. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition.
    Comment: 6 pages, Conference Proceeding
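    A minimal sketch of the Chebyshev-based bookkeeping this abstract alludes to: propagate the error covariance of an assumed prestabilized system and evaluate a discounted sum of one-sided Chebyshev (Cantelli) bounds on the per-step violation probabilities. The dynamics, covariance, discount factor, and constraint are assumptions, not the paper's example.

```python
# Sketch (assumptions throughout): propagate the error covariance of a
# prestabilized system e_{k+1} = Phi e_k + w_k, w_k ~ (0, W), and use the
# one-sided Chebyshev (Cantelli) bound
#   P(h^T e_k >= c) <= s_k / (s_k + c^2),  with s_k = h^T Sigma_k h,
# to evaluate a discounted sum of violation-probability bounds.
import numpy as np

Phi = np.array([[0.9, 0.1], [0.0, 0.8]])   # assumed closed-loop matrix
W = 0.01 * np.eye(2)                        # assumed disturbance covariance
h, c = np.array([1.0, 0.0]), 0.5            # constraint h^T e_k < c
gamma, N = 0.9, 30                          # discount factor, truncation length

Sigma = np.zeros((2, 2))
discounted_sum = 0.0
for k in range(N):
    Sigma = Phi @ Sigma @ Phi.T + W         # covariance recursion
    s = h @ Sigma @ h
    p_bound = s / (s + c**2)                # Cantelli bound on violation prob.
    discounted_sum += gamma**k * p_bound

print(f"discounted violation bound over {N} steps: {discounted_sum:.4f}")
```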

    Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints

    This paper presents a stochastic model predictive control approach for nonlinear systems subject to time-invariant probabilistic uncertainties in model parameters and initial conditions. The stochastic optimal control problem entails a cost function in terms of expected values and higher moments of the states, and chance constraints that ensure probabilistic constraint satisfaction. The generalized polynomial chaos framework is used to propagate the time-invariant stochastic uncertainties through the nonlinear system dynamics and to sample efficiently from the probability densities of the states, so as to approximate the satisfaction probability of the chance constraints. To avoid excessive sampling, a statistical analysis is proposed that determines a priori the least conservative constraint tightening required at a given sample size to guarantee a desired feasibility probability of the sample-approximated chance-constrained optimization problem. In addition, a method is presented for sample-based approximation of the analytic gradients of the chance constraints, which significantly increases optimization efficiency. The proposed approach is applicable to a broad class of nonlinear systems under the sufficient condition that each term of the dynamics is analytic with respect to the states and separable with respect to the inputs, states, and parameters. Closed-loop performance is evaluated on the Williams-Otto reactor with seven states and ten uncertain parameters and initial conditions. The results demonstrate the efficiency of the approach for real-time stochastic model predictive control and its capability to systematically account for probabilistic uncertainties, in contrast to a nonlinear model predictive control approach.
    Comment: Submitted to Journal of Process Control
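    The following toy sketch illustrates only the sample-approximation step: a chance constraint is replaced by its empirical satisfaction frequency over parameter samples, tightened by a fixed back-off (the paper instead derives the least conservative back-off statistically and propagates uncertainty with generalized polynomial chaos). The scalar dynamics, uncertainty range, and numbers are assumptions.

```python
# Minimal sketch (all numbers assumed): approximate the chance constraint
# P[ g(x_N(theta, u)) <= 0 ] >= 1 - alpha by Monte Carlo over parameter samples,
# and tighten the sample-approximated constraint by a back-off so that
# feasibility of the sampled problem implies feasibility of the true one with
# high confidence.
import numpy as np

rng = np.random.default_rng(1)

def simulate(u, theta, x0=1.0, N=20, dt=0.1):
    """Toy scalar dynamics x_{k+1} = x_k + dt*(-theta*x_k + u); an assumption."""
    x = x0
    for _ in range(N):
        x = x + dt * (-theta * x + u)
    return x

def sampled_chance_constraint(u, thetas, x_max=0.8, backoff=0.05):
    """Fraction of samples satisfying the tightened constraint x_N <= x_max - backoff."""
    xN = np.array([simulate(u, th) for th in thetas])
    return np.mean(xN <= x_max - backoff)

alpha = 0.05
thetas = rng.uniform(0.8, 1.2, size=2000)   # assumed parametric uncertainty

for u in (0.5, 0.7, 0.9):
    sat = sampled_chance_constraint(u, thetas)
    ok = sat >= 1.0 - alpha
    print(f"u = {u:.1f}: empirical satisfaction {sat:.3f} -> {'feasible' if ok else 'infeasible'}")
```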

    Cautious NMPC with Gaussian Process Dynamics for Autonomous Miniature Race Cars

    This paper presents an adaptive, high-performance control method for autonomous miniature race cars. Racing dynamics are notoriously hard to model from first principles, which we address with a cautious nonlinear model predictive control (NMPC) approach that learns to improve its dynamics model from data and safely increases racing performance. The approach uses a Gaussian Process (GP) and accounts for residual model uncertainty through a chance-constrained formulation. We present a sparse GP approximation with dynamically adjusted inducing inputs, enabling a real-time implementable controller. The formulation is demonstrated in simulations, which show significant improvement in both lap time and constraint satisfaction compared to an NMPC without model learning.
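    A simplified sketch of the residual-learning idea, assuming toy one-dimensional dynamics: fit a Gaussian process to the mismatch between a nominal model and observed transitions, then back off a state constraint by a multiple of the predictive standard deviation. This ignores the paper's sparse approximation and real-time NMPC formulation.

```python
# Sketch of the GP-residual idea under assumed toy dynamics: learn the mismatch
# d = x_plus_true - f_nominal(x, u) with a Gaussian process, then use the
# predictive standard deviation to back off a state constraint (a simple
# chance-constraint surrogate, not the paper's formulation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def f_nominal(x, u):
    return 0.9 * x + 0.5 * u            # assumed nominal model

def f_true(x, u):
    return 0.9 * x + 0.5 * u + 0.2 * np.sin(x) + 0.01 * rng.standard_normal()

# Collect training data of model residuals over random state-input pairs.
X_train = rng.uniform(-2, 2, size=(100, 2))            # columns: state, input
d_train = np.array([f_true(x, u) - f_nominal(x, u) for x, u in X_train])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, d_train)

# Cautious one-step prediction: tighten the constraint x_plus <= x_max by
# k_sigma standard deviations of the learned residual.
x, u, x_max, k_sigma = 1.0, 0.3, 1.5, 2.0
mean_d, std_d = gp.predict(np.array([[x, u]]), return_std=True)
x_plus = f_nominal(x, u) + mean_d[0]
print(f"predicted next state {x_plus:.3f} +/- {std_d[0]:.3f}")
print("tightened constraint satisfied:", x_plus + k_sigma * std_d[0] <= x_max)
```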

    Stochastic Model Predictive Control for Linear Systems using Probabilistic Reachable Sets

    In this paper we propose a stochastic model predictive control (MPC) algorithm for linear discrete-time systems affected by possibly unbounded additive disturbances and subject to probabilistic constraints. Constraints are treated in analogy to robust MPC using a constraint tightening based on the concept of probabilistic reachable sets, which is shown to provide closed-loop fulfillment of chance constraints under a unimodality assumption on the disturbance distribution. A control scheme reverting to a backup solution from a previous time step in case of infeasibility is proposed, for which an asymptotic average performance bound is derived. Two examples illustrate the approach, highlighting closed-loop chance constraint satisfaction and the benefits of the proposed controller in the presence of unmodeled disturbances.
    Comment: 57th IEEE Conference on Decision and Control, 201
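    A small sketch of how a probabilistic reachable set can translate into constraint tightening, assuming Gaussian disturbances and an assumed closed-loop error system: the stationary error covariance yields an ellipsoidal set at a chosen probability level, and its extent in the constraint direction is subtracted from the original bound.

```python
# Sketch under a Gaussian-disturbance assumption: build a probabilistic
# reachable set for the error e_{k+1} = Phi e_k + w_k from its stationary
# covariance, and use it to tighten the half-space constraint h^T x <= b for
# the nominal state z (system matrices and probability level are assumptions).
import numpy as np
from scipy.stats import chi2
from scipy.linalg import solve_discrete_lyapunov

Phi = np.array([[0.9, 0.2], [0.0, 0.7]])    # assumed closed-loop error dynamics
W = 0.02 * np.eye(2)                         # assumed disturbance covariance
p = 0.9                                      # PRS probability level

# Stationary error covariance: Sigma = Phi Sigma Phi^T + W.
Sigma = solve_discrete_lyapunov(Phi, W)

# For Gaussian errors, R_p = {e : e^T Sigma^{-1} e <= chi2.ppf(p, n)} is a
# probabilistic reachable set; its support in direction h gives the tightening.
h, b = np.array([1.0, 0.0]), 1.0
tightening = np.sqrt(chi2.ppf(p, df=2) * h @ Sigma @ h)
print(f"tightened nominal constraint: h^T z <= {b - tightening:.3f}")
```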