Robust receding horizon control for convex dynamics and bounded disturbances
A novel robust nonlinear model predictive control strategy is proposed for
systems with convex dynamics and convex constraints. Using a sequential convex
approximation approach, the scheme constructs tubes that contain predicted
trajectories, accounting for approximation errors and disturbances, and
guaranteeing constraint satisfaction. An optimal control problem is solved as a
sequence of convex programs, without the need for pre-computed error bounds. We
develop the scheme initially in the absence of external disturbances and show
that the proposed nominal approach is non-conservative, with the solutions of
successive convex programs converging to a locally optimal solution for the
original optimal control problem. We extend the approach to the case of
additive disturbances using a novel strategy for selecting linearization points
and seed trajectories. As a result, we formulate a robust receding horizon
strategy with guarantees of recursive feasibility and stability of the
closed-loop system.
An Improved Constraint-Tightening Approach for Stochastic MPC
The problem of achieving a good trade-off in Stochastic Model Predictive
Control between the competing goals of improving the average performance and
reducing conservativeness, while still guaranteeing recursive feasibility and
low computational complexity, is addressed. We propose a novel, less
restrictive scheme which is based on considering stability and recursive
feasibility separately. Through an explicit first step constraint we guarantee
recursive feasibility. In particular we guarantee the existence of a feasible
input trajectory at each time instant, but we only require that the input
sequence computed at time k remains feasible at time k+1 for most
disturbances, but not necessarily for all, which suffices for stability. To
overcome the computational complexity of probabilistic constraints, we propose
an offline constraint-tightening procedure, which can be efficiently solved via
a sampling approach to the desired accuracy. The online computational
complexity of the resulting Model Predictive Control (MPC) algorithm is similar
to that of a nominal MPC with terminal region. A numerical example, which
provides a comparison with classical, recursively feasible Stochastic MPC and
Robust MPC, shows the efficacy of the proposed approach.Comment: Paper has been submitted to ACC 201
State feedback policies for robust receding horizon control: uniqueness, continuity, and stability
Published version
Dynamic Tube MPC for Nonlinear Systems
Modeling error or external disturbances can severely degrade the performance
of Model Predictive Control (MPC) in real-world scenarios. Robust MPC (RMPC)
addresses this limitation by optimizing over feedback policies but at the
expense of increased computational complexity. Tube MPC is an approximate
solution strategy in which a robust controller, designed offline, keeps the
system in an invariant tube around a desired nominal trajectory, generated
online. Naturally, this decomposition is suboptimal, especially for systems
with changing objectives or operating conditions. In addition, many tube MPC
approaches are unable to capture state-dependent uncertainty due to the
complexity of calculating invariant tubes, resulting in overly conservative
approximations. This work presents the Dynamic Tube MPC (DTMPC) framework for
nonlinear systems where both the tube geometry and open-loop trajectory are
optimized simultaneously. By using boundary layer sliding control, the tube
geometry can be expressed as a simple relation between control parameters and
the uncertainty bound, enabling the tube geometry dynamics to be added to the
nominal MPC optimization with minimal increase in computational complexity. In
addition, DTMPC is able to leverage state-dependent uncertainty to reduce
conservativeness and improve optimization feasibility. DTMPC is demonstrated to
robustly perform obstacle avoidance and modify the tube geometry in response to
obstacle proximity.
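The coupling between tube width and state-dependent uncertainty can be sketched with first-order tube dynamics of the form phi_dot = -lam*phi + Delta(x), a common boundary-layer relation. The trajectory, gain lam, and uncertainty bound Delta below are illustrative assumptions, and the joint optimization with the nominal MPC is omitted.

```python
import numpy as np

# Sketch: tube half-width phi evolves with the nominal state x through a
# first-order boundary-layer relation phi_dot = -lam*phi + Delta(x), where
# Delta is a (hypothetical) state-dependent uncertainty bound.
dt, lam, T = 0.05, 2.0, 200
x = np.linspace(0.0, 5.0, T)            # hypothetical nominal trajectory
Delta = 0.5 + 0.4 * np.exp(-x)          # uncertainty shrinks along the path

phi = np.empty(T)
phi[0] = 1.0                            # initial tube half-width
for k in range(T - 1):                  # forward-Euler tube-width dynamics
    phi[k + 1] = phi[k] + dt * (-lam * phi[k] + Delta[k])

# The tube relaxes toward the quasi-steady width Delta/lam, i.e. it
# tightens wherever the state-dependent uncertainty is small.
print(f"initial width {phi[0]:.3f}, final width {phi[-1]:.3f}")
```

Because these width dynamics are a cheap scalar recursion, appending them to the nominal MPC problem adds little computational cost, which is the point the abstract makes.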