Robust adaptive MPC using control contraction metrics
We present a robust adaptive model predictive control (MPC) framework for
nonlinear continuous-time systems with bounded parametric uncertainty and
additive disturbance. We utilize general control contraction metrics (CCMs) to
parameterize a homothetic tube around a nominal prediction that contains all
uncertain trajectories. Furthermore, we incorporate model adaptation using
set-membership estimation. As a result, the proposed MPC formulation is
applicable to a large class of nonlinear systems, reduces conservatism during
online operation, and guarantees robust constraint satisfaction and convergence
to a neighborhood of the desired setpoint. One of the main technical
contributions is the derivation of corresponding tube dynamics based on CCMs
that account for the state and input dependent nature of the model mismatch.
In addition, we optimize online over the nominal parameter, which enables
general set-membership updates for the parametric uncertainty in the MPC.
Benefits of the proposed homothetic tube MPC and online adaptation are
demonstrated using a numerical example involving a planar quadrotor.
Comment: This is the accepted version of the paper in Automatica, 202
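As a rough illustration of the homothetic-tube idea, the sketch below Euler-integrates a scalar tube radius with linear contraction dynamics. The scalar simplification, the contraction rate `lam`, and the disturbance bound `wbar` are assumptions for illustration, not the paper's actual CCM construction.

```python
def tube_radius(lam, wbar, delta0, T, dt=0.01):
    """Euler-integrate a scalar homothetic tube radius
        d_dot = -lam * d + wbar,
    a simplified scalar stand-in for CCM-based tube dynamics
    (lam: contraction rate, wbar: disturbance bound)."""
    d = delta0
    for _ in range(int(T / dt)):
        d += dt * (-lam * d + wbar)
    return d

# the radius settles near the steady-state value wbar / lam
r = tube_radius(lam=2.0, wbar=0.5, delta0=1.0, T=10.0)
```

In steady state the radius approaches wbar / lam, so a stronger contraction rate yields a smaller tube and less conservative constraint tightening around the nominal prediction.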
Learning an Approximate Model Predictive Controller with Guarantees
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's Inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.
Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letter
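The Hoeffding-based validation step can be sketched as follows; the function name and the one-sided form of the bound are illustrative assumptions, not the paper's exact statement.

```python
import math

def hoeffding_samples(eps, delta):
    """One-sided Hoeffding bound: number of i.i.d. validation samples
    needed so that the empirical rate at which the learned controller
    stays within the robust MPC's input-error bound is within eps of
    the true rate, with confidence at least 1 - delta."""
    return math.ceil(math.log(1.0 / delta) / (2.0 * eps ** 2))

# e.g. certifying the error rate to within 1% at 99% confidence
n = hoeffding_samples(eps=0.01, delta=0.01)
```

The bound is distribution-free, which is what allows a statistical guarantee without assumptions on the learning technique used to fit the controller.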
Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints
This paper presents a stochastic model predictive control approach for
nonlinear systems subject to time-invariant probabilistic uncertainties in
model parameters and initial conditions. The stochastic optimal control problem
entails a cost function in terms of expected values and higher moments of the
states, and chance constraints that ensure probabilistic constraint
satisfaction. The generalized polynomial chaos framework is used to propagate
the time-invariant stochastic uncertainties through the nonlinear system
dynamics, and to efficiently sample from the probability densities of the
states to approximate the satisfaction probability of the chance constraints.
To increase computational efficiency by avoiding excessive sampling, a
statistical analysis is proposed to systematically determine a priori the least
conservative constraint tightening required at a given sample size to guarantee
a desired feasibility probability of the sample-approximated chance constraint
optimization problem. In addition, a method is presented for sample-based
approximation of the analytic gradients of the chance constraints, which
increases the optimization efficiency significantly. The proposed stochastic
nonlinear model predictive control approach is applicable to a broad class of
nonlinear systems, under the sufficient condition that each term of the
dynamics is analytic with respect to the states and separable with respect to the inputs, states and
parameters. The closed-loop performance of the proposed approach is evaluated
using the Williams-Otto reactor with seven states and ten uncertain parameters
and initial conditions. The results demonstrate the efficiency of the approach
for real-time stochastic model predictive control and its capability to
systematically account for probabilistic uncertainties, in contrast to
nonlinear model predictive control approaches.
Comment: Submitted to Journal of Process Contro
From Uncertainty Data to Robust Policies for Temporal Logic Planning
We consider the problem of synthesizing robust disturbance feedback policies
for systems performing complex tasks. We formulate the tasks as linear temporal
logic specifications and encode them into an optimization framework via
mixed-integer constraints. Both the system dynamics and the specifications are
known but affected by uncertainty. The distribution of the uncertainty is
unknown; however, realizations can be obtained. We introduce a data-driven
approach where the constraints are fulfilled for a set of realizations and
provide probabilistic generalization guarantees as a function of the number of
considered realizations. We use separate chance constraints for the
satisfaction of the specification and operational constraints. This allows us
to quantify their violation probabilities independently. We compute disturbance
feedback policies as solutions of mixed-integer linear or quadratic
optimization problems. By using feedback we can exploit information of past
realizations and provide feasibility for a wider range of situations compared
to static input sequences. We demonstrate the proposed method on two robust
motion-planning case studies for autonomous driving.
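The dependence of the guarantee on the number of realizations is typically of the scenario-program form sketched below. This is the generic convex-case scenario bound, shown only for flavor; it is an assumption here, not the paper's exact result, and mixed-integer programs require a different analysis.

```python
import math

def scenario_samples(eps, beta, d):
    """Classic scenario-approach sample bound for convex programs with
    d decision variables: with at least this many i.i.d. uncertainty
    realizations, the scenario solution violates the chance constraint
    with probability at most eps, with confidence at least 1 - beta."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

n = scenario_samples(eps=0.1, beta=0.01, d=10)
```

Note the logarithmic dependence on the confidence parameter beta, which is why high-confidence guarantees are cheap relative to small violation levels eps.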
A Probabilistic Approach to Robust Optimal Experiment Design with Chance Constraints
Accurate estimation of parameters is paramount in developing high-fidelity
models for complex dynamical systems. Model-based optimal experiment design
(OED) approaches enable systematic design of dynamic experiments to generate
input-output data sets with high information content for parameter estimation.
Standard OED approaches however face two challenges: (i) experiment design
under incomplete system information due to unknown true parameters, which
usually requires many iterations of OED; (ii) the inability to systematically
account for the inherent uncertainties of complex systems, which can lead to
diminished effectiveness of the designed optimal excitation signal as well as
violation of system constraints. This paper presents a robust OED approach for
nonlinear systems with arbitrarily-shaped time-invariant probabilistic
uncertainties. Polynomial chaos is used for efficient uncertainty propagation.
The distinct feature of the robust OED approach is the inclusion of chance
constraints to ensure constraint satisfaction in a stochastic setting. The
presented approach is demonstrated by optimal experimental design for the
JAK-STAT5 signaling pathway that regulates various cellular processes in a
biological cell.
Comment: Submitted to ADCHEM 201
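A common scalar criterion underlying OED formulations like this one is D-optimality, the log-determinant of the Fisher information matrix. The sketch below is a generic version assuming unit-variance measurement noise, not the paper's specific robust formulation.

```python
import numpy as np

def d_optimality(S):
    """D-optimal design criterion: log det of the Fisher information
    matrix F = S^T S built from parameter sensitivities S
    (n_measurements x n_parameters), assuming unit-variance noise.
    A robust OED maximizes a statistic of this criterion over the
    admissible inputs; larger values mean more informative data."""
    sign, logdet = np.linalg.slogdet(S.T @ S)
    return logdet

# two orthonormal sensitivity directions give F = I, i.e. log det = 0
val = d_optimality(np.eye(2))
```

In the robust setting described above, the sensitivities themselves depend on the uncertain parameters, so the criterion is evaluated over the polynomial-chaos expansion rather than at a single nominal parameter value.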