Stability for Receding-horizon Stochastic Model Predictive Control
A stochastic model predictive control (SMPC) approach is presented for
discrete-time linear systems with arbitrary time-invariant probabilistic
uncertainties and additive Gaussian process noise. Closed-loop stability of the
SMPC approach is established by appropriate selection of the cost function.
Polynomial chaos is used for uncertainty propagation through system dynamics.
The performance of the SMPC approach is demonstrated using the Van de Vusse
reactions.
Comment: American Control Conference (ACC) 201
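The polynomial-chaos propagation step mentioned in this abstract can be sketched non-intrusively. The scalar system, the uncertain-gain distribution, and all numbers below are illustrative assumptions, not the paper's setup: an uncertain gain a = 0.8 + 0.05*xi with xi ~ N(0,1) is propagated through x_{k+1} = a*x_k, and the state's polynomial chaos coefficients are recovered by Gauss-Hermite quadrature.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Hypothetical scalar system x_{k+1} = a * x_k with uncertain gain
# a = 0.8 + 0.05*xi, xi ~ N(0,1); all numbers are illustrative.
a_mean, a_std, x0, N = 0.8, 0.05, 1.0, 3
deg = 3                                   # PCE truncation order

nodes, weights = hermegauss(10)           # probabilists' Gauss-Hermite rule
weights /= weights.sum()                  # normalise to a probability measure

# Non-intrusive propagation: simulate the dynamics at the quadrature nodes.
a = a_mean + a_std * nodes
x = np.full_like(nodes, x0)
for _ in range(N):
    x = a * x

# Project onto the Hermite basis: c_i = E[x He_i] / E[He_i^2], E[He_i^2] = i!
coeffs = []
for i in range(deg + 1):
    He_i = hermeval(nodes, [0.0] * i + [1.0])
    coeffs.append(np.sum(weights * x * He_i) / factorial(i))

mean = coeffs[0]                          # PCE mean is the zeroth coefficient
var = sum(c**2 * factorial(i) for i, c in enumerate(coeffs) if i > 0)
print(mean, var)
```

Because x_N is a degree-3 polynomial in xi, the truncated expansion here is exact; the mean matches the analytic value E[(0.8 + 0.05*xi)^3] = 0.8^3 + 3*0.8*0.05^2.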
An Improved Constraint-Tightening Approach for Stochastic MPC
The problem of achieving a good trade-off in Stochastic Model Predictive
Control between the competing goals of improving the average performance and
reducing conservativeness, while still guaranteeing recursive feasibility and
low computational complexity, is addressed. We propose a novel, less
restrictive scheme which is based on considering stability and recursive
feasibility separately. Through an explicit first step constraint we guarantee
recursive feasibility. In particular we guarantee the existence of a feasible
input trajectory at each time instant, but we only require that the input
sequence computed at time k remains feasible at time k+1 for most
disturbances but not necessarily for all, which suffices for stability. To
overcome the computational complexity of probabilistic constraints, we propose
an offline constraint-tightening procedure, which can be efficiently solved via
a sampling approach to the desired accuracy. The online computational
complexity of the resulting Model Predictive Control (MPC) algorithm is similar
to that of a nominal MPC with terminal region. A numerical example, which
provides a comparison with classical, recursively feasible Stochastic MPC and
Robust MPC, shows the efficacy of the proposed approach.
Comment: Paper has been submitted to ACC 201
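The offline sampling-based constraint tightening described above can be sketched for a scalar chance constraint. Everything below is an assumption for illustration: the constraint P(x <= b) >= 1 - eps with x = z + e, where z is the nominal prediction and e is the sampled disturbance-induced error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (all numbers are assumptions): scalar chance constraint
# P(x <= b) >= 1 - eps with x = z + e, where e aggregates the propagated
# disturbance and z is the nominal predicted state.
b, eps, n_samples = 1.0, 0.1, 100_000

# Offline: sample the error and take its (1 - eps) empirical quantile;
# more samples resolve the tightening to higher accuracy.
e = rng.normal(0.0, 0.2, size=n_samples)      # assumed error distribution
q = np.quantile(e, 1.0 - eps)

b_tight = b - q   # online, the nominal MPC simply enforces z <= b_tight

# Sanity check: at z = b_tight, fresh samples violate x <= b with
# frequency close to eps.
z = b_tight
viol = np.mean(z + rng.normal(0.0, 0.2, size=n_samples) > b)
print(b_tight, viol)
```

The online problem then has the same structure as a nominal MPC with tightened bounds, which is the source of the low online complexity the abstract claims.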
Sparse and Constrained Stochastic Predictive Control for Networked Systems
This article presents a novel class of control policies for networked control
of Lyapunov-stable linear systems with bounded inputs. The control channel is
assumed to have i.i.d. Bernoulli packet dropouts and the system is assumed to
be affected by additive stochastic noise. Our proposed class of policies is
affine in the past dropouts and saturated values of the past disturbances. We
further consider a regularization term in a quadratic performance index to
promote sparsity in control. We demonstrate how to augment the underlying
optimization problem with a constant negative drift constraint to ensure
mean-square boundedness of the closed-loop states, yielding a convex quadratic
program to be solved periodically online. The states of the closed-loop plant
under the receding horizon implementation of the proposed class of policies are
mean square bounded for any positive bound on the control and any non-zero
probability of successful transmission.
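The mean-square boundedness mechanism, negative drift from a saturated (bounded) feedback applied across Bernoulli dropouts, can be checked empirically in a minimal simulation. The scalar marginally stable plant and all parameters below are assumptions, not the paper's policy class.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation (parameters are assumptions): marginally stable
# scalar plant x+ = x + nu * sat(u) + w, with an i.i.d. Bernoulli dropout
# nu on the control channel and additive Gaussian noise w.
p_success, u_max, T = 0.7, 1.0, 50_000
x, sq = 0.0, []
for _ in range(T):
    u = np.clip(-0.5 * x, -u_max, u_max)       # bounded (saturated) control
    nu = rng.random() < p_success              # packet delivered?
    w = rng.normal(0.0, 0.3)                   # additive stochastic noise
    x = x + (u if nu else 0.0) + w
    sq.append(x * x)

# Outside a compact set the expected one-step drift is negative, so the
# empirical second moment settles at a finite value.
print(np.mean(sq))
```

Note this only illustrates the boundedness phenomenon; the paper enforces it by construction through a drift constraint inside the convex quadratic program.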
On control of discrete-time state-dependent jump linear systems with probabilistic constraints: A receding horizon approach
In this article, we consider a receding horizon control of discrete-time
state-dependent jump linear systems, a particular class of stochastic switching
systems, subject to possibly unbounded random disturbances and probabilistic
state constraints. Due to the nature of the dynamical system and the constraints,
we consider a one-step receding horizon. Using the inverse cumulative distribution
function, we convert the probabilistic state constraints to deterministic
constraints, and obtain a tractable deterministic receding horizon control
problem. We consider the receding horizon control law to have a linear state-feedback
and an admissible offset term. We ensure mean square boundedness of the state
variable via solving linear matrix inequalities off-line, and solve the
receding horizon control problem on-line with control offset terms. We
illustrate the overall approach on a macroeconomic system.
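The inverse-CDF conversion described above can be sketched for a scalar Gaussian case. The system, noise level, and constraint numbers below are illustrative assumptions: a chance constraint P(x_{k+1} <= b) >= 1 - eps with x_{k+1} = a*x + u + w, w ~ N(0, sigma^2), becomes a deterministic bound on the nominal successor state.

```python
from statistics import NormalDist

# Illustrative numbers (assumptions, not from the paper): convert
#   P(x_{k+1} <= b) >= 1 - eps,  x_{k+1} = a*x + u + w,  w ~ N(0, sigma^2)
# into a deterministic constraint using the noise quantile.
a, sigma, b, eps = 0.9, 0.2, 1.0, 0.05

quantile = NormalDist(0.0, sigma).inv_cdf(1.0 - eps)   # (1 - eps)-quantile of w

# Deterministic reformulation: the chance constraint holds iff
#   a*x + u <= b - quantile.
def feasible(x, u):
    return a * x + u <= b - quantile

print(round(quantile, 4))
```

The resulting constraint is linear in (x, u), which is what makes the receding horizon problem tractable once the probabilistic constraints are rewritten this way.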