Robust Model Predictive Control via Scenario Optimization
This paper discusses a novel probabilistic approach for the design of robust
model predictive control (MPC) laws for discrete-time linear systems affected
by parametric uncertainty and additive disturbances. The proposed technique is
based on solving at each step a finite-horizon optimal control problem (FHOCP)
that takes into account a suitable number of randomly extracted scenarios of
uncertainty and disturbances, followed by a specific command selection rule
implemented in a receding-horizon fashion. The scenario
FHOCP is always convex, even when the uncertain parameters and disturbances
belong to non-convex sets, and irrespective of how the model uncertainty
influences the system's matrices. Moreover, the computational complexity of the
proposed approach does not depend on the uncertainty/disturbance dimensions,
and scales quadratically with the control horizon. The main result in this
paper is related to the analysis of the closed loop system under
receding-horizon implementation of the scenario FHOCP, and essentially states
that the devised control law guarantees constraint satisfaction at each step
with an a priori assigned probability p, while the system's state reaches the
target set either asymptotically or in finite time with probability at least
p. The proposed method may be a valid alternative when other existing
techniques, either deterministic or stochastic, are not directly usable due to
excessive conservatism or to numerical intractability caused by lack of
convexity of the robust or chance-constrained optimization problem.

Comment: This manuscript is a preprint of a paper accepted for publication in
the IEEE Transactions on Automatic Control, with DOI:
10.1109/TAC.2012.2203054, and is subject to IEEE copyright. The copy of
record will be available at http://ieeexplore.ieee.or
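The core scenario idea described above can be sketched in one dimension: a chance constraint is replaced by a finite number of sampled constraints, and the resulting problem stays convex. The toy problem, the uniform distribution, and the sample size below are illustrative assumptions, not the paper's FHOCP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chance-constrained problem: minimize u subject to
# P(a*u >= 1) >= 1 - eps, with uncertain a ~ U[0.5, 1.5].
# Scenario approach: draw M samples of a and enforce the constraint
# for every sample; for this scalar problem the minimizer is the
# largest ratio 1/a_i over the sampled scenarios.
M = 200
a = rng.uniform(0.5, 1.5, size=M)
u_star = np.max(1.0 / a)  # smallest u with a_i * u >= 1 for all scenarios

# Out-of-sample estimate of the violation probability (should be small).
a_test = rng.uniform(0.5, 1.5, size=100_000)
viol = np.mean(a_test * u_star < 1.0)
```

The sampled problem is a convex (here, trivial) optimization regardless of the distribution of `a`, which is the feature the abstract emphasizes.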
Stability for Receding-horizon Stochastic Model Predictive Control
A stochastic model predictive control (SMPC) approach is presented for
discrete-time linear systems with arbitrary time-invariant probabilistic
uncertainties and additive Gaussian process noise. Closed-loop stability of the
SMPC approach is established by appropriate selection of the cost function.
Polynomial chaos is used for uncertainty propagation through system dynamics.
The performance of the SMPC approach is demonstrated using the Van de Vusse
reactions.

Comment: American Control Conference (ACC) 201
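As a minimal illustration of polynomial chaos uncertainty propagation (not the paper's control formulation), the sketch below expands f(ξ) = ξ² of a standard normal input in probabilists' Hermite polynomials and recovers the mean and variance from the expansion coefficients; the function and truncation order are chosen for illustration.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Propagate xi ~ N(0,1) through f(xi) = xi**2 with a probabilists'-Hermite
# polynomial chaos expansion.  Coefficients come from Gauss-HermiteE
# quadrature; moments follow from orthogonality: E[He_k He_j] = k! * delta_kj.
f = lambda x: x**2
order = 4
nodes, weights = He.hermegauss(20)   # quadrature for weight exp(-x^2/2)
weights = weights / sqrt(2 * pi)     # normalize to the N(0,1) density

coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1)
    ek[k] = 1.0                      # coefficient vector selecting He_k
    num = np.sum(weights * f(nodes) * He.hermeval(nodes, ek))
    coeffs.append(num / factorial(k))

mean = coeffs[0]                                   # E[f(xi)]
var = sum(c**2 * factorial(k)                      # Var[f(xi)]
          for k, c in enumerate(coeffs) if k >= 1)
```

For this f, the expansion is exact: the mean is 1 and the variance is 2, matching the chi-squared distribution with one degree of freedom.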
Stochastic model predictive control of LPV systems via scenario optimization
A stochastic receding-horizon control approach for constrained Linear Parameter Varying discrete-time systems is proposed in this paper. It is assumed that the time-varying parameters have stochastic nature and that the system's matrices are bounded but otherwise arbitrary nonlinear functions of these parameters. No specific assumption on the statistics of the parameters is required. By using a randomization approach, a scenario-based finite-horizon optimal control problem is formulated, where only a finite number M of sampled predicted parameter trajectories ('scenarios') are considered. This problem is convex and its solution is a priori guaranteed to be probabilistically robust, up to a user-defined probability level p. The p level is linked to M by an analytic relationship, which establishes a tradeoff between computational complexity and robustness of the solution. Then, a receding-horizon strategy is presented, involving the iterated solution of a scenario-based finite-horizon control problem at each time step. Our key result is to show that the state trajectories of the controlled system reach a terminal positively invariant set in finite time, either deterministically, or with probability no smaller than p. The features of the approach are illustrated by a numerical example.
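The receding-horizon scenario idea can be illustrated on a scalar toy system (an assumption for illustration; the paper treats general LPV dynamics): at each step, sample M parameter scenarios, solve the resulting min-max problem, apply the first input, and repeat. For a scalar one-step horizon the scenario min-max input has a closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy receding-horizon scenario control of x+ = a*x + u, with the
# parameter a random in [0.8, 1.2] and its distribution unknown.
# Each step draws M scenarios a_1..a_M and picks u minimizing the
# worst case |a_i*x + u| over the samples; the minimizer centers the
# sampled interval: u = -x*(max a_i + min a_i)/2.
M, T = 50, 30
x = 5.0
for _ in range(T):
    a_s = rng.uniform(0.8, 1.2, size=M)       # sampled scenarios
    u = -x * (a_s.max() + a_s.min()) / 2.0    # scenario min-max input
    a_true = rng.uniform(0.8, 1.2)            # realized parameter
    x = a_true * x + u
```

Despite the unknown distribution, the sampled worst-case input contracts the state toward the origin at every step, mirroring the finite-time set-reaching behavior claimed in the abstract.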
On the Sample Size of Random Convex Programs with Structured Dependence on the Uncertainty (Extended Version)
The "scenario approach" provides an intuitive method to address chance
constrained problems arising in control design for uncertain systems. It
addresses these problems by replacing the chance constraint with a finite
number of sampled constraints (scenarios). The sample size critically depends
on Helly's dimension, a quantity always upper bounded by the number of decision
variables. However, this standard bound can lead to computationally expensive
programs whose solutions are conservative in terms of cost and violation
probability. We derive improved bounds of Helly's dimension for problems where
the chance constraint has certain structural properties. The improved bounds
lower the number of scenarios required for these problems, leading both to
improved objective value and reduced computational complexity. Our results are
generally applicable to Randomized Model Predictive Control of chance
constrained linear systems with additive uncertainty and affine disturbance
feedback. The efficacy of the proposed bound is demonstrated on an inventory
management example.

Comment: Accepted for publication at Automatic
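To give a rough sense of how Helly's dimension enters the sample-size calculation, the sketch below computes the smallest N satisfying the standard binomial-tail scenario bound. This is a simplification: the function name is hypothetical, the values d = 20 versus d = 5 are illustrative stand-ins for the standard bound (number of decision variables) versus an improved structural bound, and the paper's actual bounds are not reproduced here.

```python
from math import comb

def min_scenarios(d, eps, beta):
    """Smallest N with sum_{i<d} C(N,i) eps^i (1-eps)^(N-i) <= beta,
    i.e. the binomial-tail scenario sample-size condition with Helly's
    dimension bounded by d, violation level eps, confidence 1 - beta."""
    def tail(N):
        return sum(comb(N, i) * eps**i * (1 - eps)**(N - i)
                   for i in range(d))
    N = d
    while tail(N) > beta:
        N += 1
    return N

# A tighter bound on Helly's dimension shrinks the required sample size:
n_full = min_scenarios(d=20, eps=0.05, beta=1e-6)   # standard bound
n_tight = min_scenarios(d=5, eps=0.05, beta=1e-6)   # improved bound
```

Fewer required scenarios means a smaller sampled program, which is the source of both the cost improvement and the reduced computational complexity the abstract describes.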
Learning-based predictive control for linear systems: a unitary approach
A comprehensive approach addressing identification and control for
learning-based Model Predictive Control (MPC) for linear systems is presented.
The design technique yields a data-driven MPC law, based on a dataset collected
from the working plant. The method is indirect, i.e. it relies on a model
learning phase and a model-based control design one, devised in an integrated
manner. In the model learning phase, a twofold outcome is achieved: first,
different optimal p-steps ahead prediction models are obtained, to be used in
the MPC cost function; secondly, a perturbed state-space model is derived, to
be used for robust constraint satisfaction. Resorting to Set Membership
techniques, a characterization of the bounded model uncertainties is obtained,
which is a key feature for a successful application of the robust control
algorithm. In the control design phase, a robust MPC law is proposed, able to
track piecewise-constant reference signals with guaranteed recursive
feasibility and convergence properties. The controller embeds the multistep
predictors in the cost function, ensures robust constraint satisfaction
thanks to the learnt uncertainty model, and can deal with possibly
infeasible reference values. The proposed approach is finally tested in a
numerical example.
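The multistep-predictor idea can be sketched with plain least squares (the paper's Set Membership uncertainty characterization is not reproduced here; the scalar plant, its coefficients, and the noise level are assumptions for illustration): a separate p-step-ahead model is fitted for each prediction step from plant data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Collect data from an (assumed) scalar plant x+ = 0.9 x + 0.5 u + noise,
# then fit, for each p, a direct p-step-ahead predictor
#   x_{t+p} ~ theta_p @ [x_t, u_t, ..., u_{t+p-1}]
# by least squares -- one model per prediction step, as in multistep MPC.
T, horizon = 400, 3
u = rng.normal(size=T)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = 0.9 * x[t] + 0.5 * u[t] + 0.01 * rng.normal()

models = {}
for p in range(1, horizon + 1):
    rows, targets = [], []
    for t in range(T - p):
        rows.append(np.concatenate(([x[t]], u[t:t + p])))
        targets.append(x[t + p])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets),
                                rcond=None)
    models[p] = theta
```

Each fitted `theta_p` approximates the true p-step map (e.g. the state coefficient of `models[2]` is close to 0.9² = 0.81), so the MPC cost can use a dedicated predictor per horizon step instead of iterating a one-step model.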
Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints
This paper presents a stochastic model predictive control approach for
nonlinear systems subject to time-invariant probabilistic uncertainties in
model parameters and initial conditions. The stochastic optimal control problem
entails a cost function in terms of expected values and higher moments of the
states, and chance constraints that ensure probabilistic constraint
satisfaction. The generalized polynomial chaos framework is used to propagate
the time-invariant stochastic uncertainties through the nonlinear system
dynamics, and to efficiently sample from the probability densities of the
states to approximate the satisfaction probability of the chance constraints.
To increase computational efficiency by avoiding excessive sampling, a
statistical analysis is proposed to systematically determine a priori the least
conservative constraint tightening required at a given sample size to guarantee
a desired feasibility probability of the sample-approximated chance-constrained
optimization problem. In addition, a method is presented for sample-based
approximation of the analytic gradients of the chance constraints, which
increases the optimization efficiency significantly. The proposed stochastic
nonlinear model predictive control approach is applicable to a broad class of
nonlinear systems with the sufficient condition that each term is analytic with
respect to the states, and separable with respect to the inputs, states and
parameters. The closed-loop performance of the proposed approach is evaluated
using the Williams-Otto reactor with seven states, and ten uncertain parameters
and initial conditions. The results demonstrate the efficiency of the approach
for real-time stochastic model predictive control and its capability to
systematically account for probabilistic uncertainties, in contrast to nominal
nonlinear model predictive control approaches.

Comment: Submitted to Journal of Process Contro
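The sample-approximation idea behind the chance constraints can be sketched as follows (an illustrative simplification: the constraint, the Gaussian uncertainty, and the sample sizes are assumptions, and the paper's polynomial-chaos sampling and gradient computation are not reproduced): satisfy the constraint on all but a fraction eps of the drawn samples, which amounts to an empirical quantile.

```python
import numpy as np

rng = np.random.default_rng(3)

# Chance constraint P(theta <= u) >= 1 - eps with uncertain
# theta ~ N(1, 0.2^2).  Sample approximation: choose the smallest u
# covering a fraction (1 - eps) of S drawn samples, i.e. the empirical
# (1 - eps)-quantile of theta.
S, eps = 2000, 0.1
theta = rng.normal(1.0, 0.2, size=S)
u = np.quantile(theta, 1 - eps)

# Out-of-sample estimate of the achieved satisfaction probability.
theta_test = rng.normal(1.0, 0.2, size=100_000)
p_hat = np.mean(theta_test <= u)
```

The out-of-sample estimate `p_hat` lands near the 1 - eps target; the statistical analysis described in the abstract addresses how much additional tightening a finite S requires to guarantee the target with high confidence.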