Stochastic Model Predictive Control with Dynamic Chance Constraints
In this work, we introduce a stochastic model predictive control scheme for
dynamic chance constraints. We consider linear discrete-time systems affected
by unbounded additive stochastic disturbance and subject to chance constraints
that are defined by time-varying probabilities with a common, fixed lower
bound. By utilizing probabilistic reachable tubes with dynamic cross-sections,
we reformulate the stochastic optimization problem as a deterministic
tube-based MPC problem with time-varying tightened constraints. We show that
the resulting deterministic MPC formulation with dynamic tightened constraints
is recursively feasible and that the closed-loop stochastic system will satisfy
the corresponding dynamic chance constraints. In addition, we introduce a
novel implementation using zonotopes to describe the tightening analytically.
Finally, an example illustrates the benefits of the developed approach to
stochastic MPC with dynamic chance constraints.
Comment: 8 pages, 3 figures
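The zonotope-based tightening mentioned above can be made concrete with a small sketch. For a zonotope Z = {c + G xi : ||xi||_inf <= 1}, the support in a direction a is a^T c + ||G^T a||_1, so a halfspace constraint a^T x <= b tightened by Z becomes a^T z <= b minus that support. This is only a minimal illustration of analytic zonotope tightening, not the paper's formulation; the generator matrix below is hypothetical.

```python
import numpy as np

def tightened_bound(a, b, G, c=None):
    # Support of the zonotope Z = {c + G xi : ||xi||_inf <= 1} in direction a
    # is a^T c + ||G^T a||_1; subtracting it from b tightens a^T x <= b by Z.
    if c is None:
        c = np.zeros(G.shape[0])
    support = a @ c + np.abs(G.T @ a).sum()
    return b - support

# Hypothetical 2-D error zonotope with two generators
G = np.array([[0.10, 0.05],
              [0.00, 0.10]])
a = np.array([1.0, 0.0])               # constraint direction
b_tight = tightened_bound(a, 1.0, G)   # 1.0 - (0.10 + 0.05) = 0.85
```

Because the support function of a zonotope is available in closed form, the tightening is exact and cheap to evaluate for each halfspace constraint.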
An Improved Constraint-Tightening Approach for Stochastic MPC
The problem of achieving a good trade-off in Stochastic Model Predictive
Control between the competing goals of improving the average performance and
reducing conservativeness, while still guaranteeing recursive feasibility and
low computational complexity, is addressed. We propose a novel, less
restrictive scheme which is based on considering stability and recursive
feasibility separately. Through an explicit first-step constraint we guarantee
recursive feasibility. In particular, we guarantee the existence of a feasible
input trajectory at each time instant, but we only require that the input
sequence computed at a given time remain feasible at the next time step for
most disturbances, not necessarily for all, which suffices for stability. To
overcome the computational complexity of probabilistic constraints, we propose
an offline constraint-tightening procedure, which can be efficiently solved via
a sampling approach to the desired accuracy. The online computational
complexity of the resulting Model Predictive Control (MPC) algorithm is similar
to that of a nominal MPC with terminal region. A numerical example, which
provides a comparison with classical, recursively feasible Stochastic MPC and
Robust MPC, shows the efficacy of the proposed approach.
Comment: Paper has been submitted to ACC 201
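The offline sampling-based tightening can be sketched as follows: draw disturbance samples, evaluate their effect along the constraint direction, and take an empirical quantile as the tightening offset. This is a toy sketch under an assumed Gaussian disturbance, not the paper's procedure, and carries no formal accuracy certificate.

```python
import numpy as np

def offline_tightening(w_samples, a, level=0.95):
    # Empirical 'level'-quantile of a^T w over the samples: tightening
    # a^T x <= b to a^T x <= b - offset then holds for a fraction ~level
    # of the sampled disturbances.
    return np.quantile(w_samples @ a, level)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(10000, 2))   # assumed Gaussian disturbance samples
offset = offline_tightening(w, np.array([1.0, 0.0]))
# offset approaches the exact 95% quantile of N(0, 0.1), about 0.164
```

Because the quantile is computed once offline, the online MPC only sees a fixed, tightened deterministic constraint.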
Robust Model Predictive Control via Scenario Optimization
This paper discusses a novel probabilistic approach for the design of robust
model predictive control (MPC) laws for discrete-time linear systems affected
by parametric uncertainty and additive disturbances. The proposed technique is
based on the iterated solution, at each step, of a finite-horizon optimal
control problem (FHOCP) that takes into account a suitable number of randomly
extracted scenarios of uncertainty and disturbances, followed by a specific
command selection rule implemented in a receding horizon fashion. The scenario
FHOCP is always convex, even when the uncertain parameters and disturbances
belong to non-convex sets, and irrespective of how the model uncertainty
influences the system's matrices. Moreover, the computational complexity of the
proposed approach does not depend on the uncertainty/disturbance dimensions,
and scales quadratically with the control horizon. The main result in this
paper is related to the analysis of the closed loop system under
receding-horizon implementation of the scenario FHOCP, and essentially states
that the devised control law guarantees constraint satisfaction at each step
with an a priori assigned probability p, while the system's state reaches the
target set either asymptotically, or in finite time with probability at least
p. The proposed method may be a valid alternative when other existing
techniques, either deterministic or stochastic, are not directly usable due to
excessive conservatism or to numerical intractability caused by lack of
convexity of the robust or chance-constrained optimization problem.
Comment: This manuscript is a preprint of a paper accepted for publication in
the IEEE Transactions on Automatic Control, with DOI:
10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of
record will be available at http://ieeexplore.ieee.or
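A one-step scalar instance of such a scenario program can illustrate the mechanics: sample scenarios of the uncertain parameter and disturbance, impose the state constraint for every sample, and minimize a convex cost. The dynamics and sample counts below are made up for illustration; the paper's FHOCP is multi-step with a command selection rule on top.

```python
import numpy as np

def scenario_one_step(x, a_samples, w_samples, x_max=1.0):
    # Minimize u^2 subject to |a_i * x + u + w_i| <= x_max for every
    # sampled scenario (a_i, w_i). The feasible set for u is an interval,
    # so the minimizer is the projection of u = 0 onto that interval.
    lo = np.max(-x_max - a_samples * x - w_samples)
    hi = np.min(x_max - a_samples * x - w_samples)
    if lo > hi:
        raise ValueError("scenario program infeasible")
    return float(np.clip(0.0, lo, hi))

rng = np.random.default_rng(1)
a_s = rng.uniform(0.8, 1.2, size=50)    # sampled parametric uncertainty
w_s = rng.normal(0.0, 0.05, size=50)    # sampled additive disturbance
u = scenario_one_step(1.5, a_s, w_s)    # u < 0: pushes the state back into [-1, 1]
```

Note that the problem stays convex in u no matter how non-convex the set the samples (a_i, w_i) are drawn from is, which is the key structural property of the scenario approach.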
Stochastic Model Predictive Control with Discounted Probabilistic Constraints
This paper considers linear discrete-time systems with additive disturbances,
and designs a Model Predictive Control (MPC) law to minimise a quadratic cost
function subject to a chance constraint. The chance constraint is defined as a
discounted sum of violation probabilities on an infinite horizon. By penalising
violation probabilities close to the initial time and ignoring violation
probabilities in the far future, this form of constraint enables the
feasibility of the online optimisation to be guaranteed without an assumption
of boundedness of the disturbance. A computationally convenient MPC
optimisation problem is formulated using Chebyshev's inequality and we
introduce an online constraint-tightening technique to ensure recursive
feasibility based on knowledge of a suboptimal solution. The closed loop system
is guaranteed to satisfy the chance constraint and a quadratic stability
condition.
Comment: 6 pages, Conference Proceeding
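The Chebyshev-based reformulation rests on a simple moment bound: for a zero-mean disturbance w with covariance Sigma, P(a^T w >= t) <= a^T Sigma a / t^2, so tightening a halfspace by t = sqrt(a^T Sigma a / eps) caps the violation probability at eps using only the first two moments. A minimal numeric sketch with a made-up covariance:

```python
import numpy as np

def chebyshev_margin(a, Sigma, eps):
    # Chebyshev: P(a^T w >= t) <= Var(a^T w) / t^2 for zero-mean w, so
    # t = sqrt(a^T Sigma a / eps) guarantees violation probability <= eps.
    var = float(a @ Sigma @ a)
    return np.sqrt(var / eps)

Sigma = np.diag([0.01, 0.04])   # assumed disturbance covariance
t = chebyshev_margin(np.array([1.0, 1.0]), Sigma, eps=0.05)
# var = 0.05, so t = sqrt(0.05 / 0.05) = 1.0
```

The bound is distribution-free and hence conservative; its appeal here is that it makes the chance constraint computable without assuming boundedness of the disturbance.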
On the convergence of stochastic MPC to terminal modes of operation
The stability of stochastic Model Predictive Control (MPC) subject to
additive disturbances is often demonstrated in the literature by constructing
Lyapunov-like inequalities that guarantee closed-loop performance bounds and
boundedness of the state, but convergence to a terminal control law is
typically not shown. In this work we use results on general state space Markov
chains to find conditions that guarantee convergence of disturbed nonlinear
systems to terminal modes of operation, so that they converge in probability to
a priori known terminal linear feedback laws and achieve time-average
performance equal to that of the terminal control law. We discuss implications
for the convergence of control laws in stochastic MPC formulations; in
particular, we prove convergence for two formulations of stochastic MPC.
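The time-average performance claim can be illustrated on a scalar toy system: under a stabilizing terminal feedback u = k*x, the closed loop x+ = (a + b*k)x + w has stationary variance sigma^2 / (1 - (a + b*k)^2), and the empirical time-average of the stage cost x^2 converges to it. The numbers below are made up; this illustrates only the ergodic-average behaviour, not the paper's Markov-chain argument.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, k = 1.0, 1.0, -0.5            # terminal feedback u = k*x; closed loop phi = 0.5
phi = a + b * k
sigma2 = 0.01                        # disturbance variance
var_ss = sigma2 / (1.0 - phi**2)     # stationary variance of x+ = phi*x + w

N = 200_000
w = rng.normal(0.0, np.sqrt(sigma2), size=N)
x, total = 0.0, 0.0
for wi in w:
    x = phi * x + wi
    total += x * x
avg_cost = total / N                 # time-average of the stage cost x^2
```

Since the closed-loop chain is geometrically ergodic here, the time-average cost matches the stationary expectation, which is the kind of terminal-mode performance the abstract refers to.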
Convex Chance Constrained Model Predictive Control
We consider the Chance Constrained Model Predictive Control problem for
polynomial systems subject to disturbances. In this problem, we aim to find
an optimal control input for a given disturbed dynamical system that minimizes
a given cost function subject to probabilistic constraints over a finite
horizon. The control laws provided have a predefined (low) risk of not
reaching the desired target set. Building on the theory of measures and
moments, a sequence of finite-dimensional semidefinite programs is provided,
whose solutions are shown to converge to the optimal solution of the original
problem. Numerical examples
are presented to illustrate the computational performance of the proposed
approach.
Comment: This work has been submitted to the 55th IEEE Conference on Decision
and Contro