Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints
This paper presents a stochastic model predictive control approach for
nonlinear systems subject to time-invariant probabilistic uncertainties in
model parameters and initial conditions. The stochastic optimal control problem
entails a cost function in terms of expected values and higher moments of the
states, and chance constraints that ensure probabilistic constraint
satisfaction. The generalized polynomial chaos framework is used to propagate
the time-invariant stochastic uncertainties through the nonlinear system
dynamics, and to efficiently sample from the probability densities of the
states to approximate the satisfaction probability of the chance constraints.
To increase computational efficiency by avoiding excessive sampling, a
statistical analysis is proposed to systematically determine a-priori the least
conservative constraint tightening required at a given sample size to guarantee
a desired feasibility probability of the sample-approximated chance constraint
optimization problem. In addition, a method is presented for sample-based
approximation of the analytic gradients of the chance constraints, which
increases the optimization efficiency significantly. The proposed stochastic
nonlinear model predictive control approach is applicable to a broad class of
nonlinear systems with the sufficient condition that each term is analytic with
respect to the states, and separable with respect to the inputs, states and
parameters. The closed-loop performance of the proposed approach is evaluated
using the Williams-Otto reactor with seven states, and ten uncertain parameters
and initial conditions. The results demonstrate the efficiency of the approach
for real-time stochastic model predictive control and its capability to
systematically account for probabilistic uncertainties, in contrast to nonlinear model predictive control approaches.
Comment: Submitted to Journal of Process Control
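The sample-based chance-constraint approximation described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's method: the scalar constraint, the quantile-plus-backoff formulation, and the Gaussian samples standing in for states propagated through a polynomial chaos surrogate are all assumptions for the example.

```python
import numpy as np

def chance_constraint_margin(g_samples, eps, backoff):
    """Tightened empirical margin for the chance constraint P[g <= 0] >= 1 - eps.

    The constraint is approximated from samples as
    quantile_{1-eps}(g) + backoff <= 0, where the backoff is the
    a-priori constraint tightening chosen for the given sample size.
    """
    q = np.quantile(g_samples, 1.0 - eps)
    return q + backoff

rng = np.random.default_rng(0)
# Stand-in for samples of a constrained state drawn from the
# polynomial-chaos approximation of the uncertain dynamics.
g_samples = rng.normal(loc=-1.0, scale=0.3, size=2000)

margin = chance_constraint_margin(g_samples, eps=0.05, backoff=0.1)
satisfied = margin <= 0.0  # tightened sample-approximated constraint holds
```

In an SNMPC iteration this margin would enter the optimizer as an inequality constraint, with the backoff precomputed from the statistical analysis so that feasibility of the sampled problem implies feasibility of the true chance constraint with the desired probability.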
Multi-contact Stochastic Predictive Control for Legged Robots with Contact Locations Uncertainty
Trajectory optimization under uncertainties is a challenging problem for
robots in contact with the environment. Such uncertainties are inevitable due
to estimation errors, control imperfections, and model mismatches between
planning models used for control and the real robot dynamics. Such
uncertainties can induce control policies that violate the contact location
constraints by making contact at unintended locations, consequently leading to
unsafe motion plans. This work addresses the problem of robust kino-dynamic whole-body
trajectory optimization using stochastic nonlinear model predictive control
(SNMPC) by considering additive uncertainties on the model dynamics subject to
contact location chance-constraints as a function of the robot's full kinematics.
We demonstrate the benefit of using SNMPC over classic nonlinear MPC (NMPC) for
whole-body trajectory optimization in terms of contact location constraint
satisfaction (safety). We run extensive Monte-Carlo simulations for a quadruped
robot performing agile trotting and bounding motions over small stepping
stones, where contact location satisfaction becomes critical. Our results show
that SNMPC is able to perform all motions safely with a 100% success rate,
while NMPC failed in 48.3% of all motions.
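A Monte-Carlo evaluation in the spirit of the one above can be sketched in a few lines. The stone radius, noise level, and sample count below are made-up values, and the realized contact is modeled simply as the planned contact plus additive Gaussian noise; this is an illustration of the success-rate statistic, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

planned_contact = np.array([0.0, 0.0])  # center of the stepping stone (assumed)
stone_radius = 0.05                     # 5 cm stone radius (assumed)
noise_std = 0.02                        # additive contact-location noise (assumed)

# Sample realized contact locations under the additive uncertainty and
# count how often they land inside the circular stepping stone.
samples = planned_contact + rng.normal(scale=noise_std, size=(10_000, 2))
inside = np.linalg.norm(samples, axis=1) <= stone_radius
success_rate = inside.mean()  # empirical contact-constraint satisfaction
```

A stochastic controller that tightens the contact constraint against this noise pushes the success rate toward one, which is the gap the reported 100% vs. 51.7% comparison quantifies.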
Constrained Model-Free Reinforcement Learning for Process Optimization
Reinforcement learning (RL) is a control approach that can handle nonlinear
stochastic optimal control problems. However, despite the promise exhibited, RL
has yet to see marked translation to industrial practice primarily due to its
inability to satisfy state constraints. In this work we aim to address this
challenge. We propose an 'oracle'-assisted constrained Q-learning algorithm
that guarantees the satisfaction of joint chance constraints with a high
probability, which is crucial for safety critical tasks. To achieve this,
constraint tightenings (backoffs) are introduced and adjusted using Broyden's
method, hence making them self-tuned. This results in a general methodology
that can be imbued into approximate dynamic programming-based algorithms to
ensure constraint satisfaction with high probability. Finally, we present case
studies that analyze the performance of the proposed approach and compare this
algorithm with model predictive control (MPC). The favorable performance of
this algorithm signifies a step toward the incorporation of RL into real world
optimization and control of engineering systems, where constraints are
essential in ensuring safety.
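The backoff self-tuning idea above can be illustrated in one dimension, where Broyden's method reduces to the secant iteration: adjust the tightening b until the satisfaction probability of the tightened constraint hits its target. The Gaussian closed-loop model below is an assumption made for the example, not the paper's setting.

```python
import math

def satisfaction_prob(b, noise_std=0.1):
    """P[x + w <= limit] when the policy keeps x <= limit - b, w ~ N(0, std^2)."""
    return 0.5 * (1.0 + math.erf(b / (noise_std * math.sqrt(2.0))))

def tune_backoff(target=0.95, b0=0.0, b1=0.3, tol=1e-10):
    """Secant (1-D Broyden) iteration on f(b) = satisfaction_prob(b) - target."""
    f0 = satisfaction_prob(b0) - target
    f1 = satisfaction_prob(b1) - target
    for _ in range(50):
        if f1 == f0:  # guard against a degenerate secant step
            break
        b2 = b1 - f1 * (b1 - b0) / (f1 - f0)
        b0, f0 = b1, f1
        b1, f1 = b2, satisfaction_prob(b2) - target
        if abs(f1) < tol:
            break
    return b1

backoff = tune_backoff()  # ~ 1.645 * noise_std for a 95% target
```

In the paper's setting the probability would be estimated from closed-loop rollouts of the learned policy rather than a known Gaussian, but the fixed-point structure of the update is the same.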
Learning an Approximate Model Predictive Controller with Guarantees
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's Inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.
Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letters
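The validation step described above, using Hoeffding's Inequality, amounts to a sample-size calculation. A minimal sketch, with all numbers assumed rather than taken from the paper: with N i.i.d. validation rollouts and empirical failure rate p_hat, the true failure probability exceeds p_hat + t with probability at most exp(-2 N t^2).

```python
import math

def hoeffding_samples(t, delta):
    """Smallest N such that exp(-2 N t^2) <= delta."""
    return math.ceil(math.log(1.0 / delta) / (2.0 * t * t))

def hoeffding_bound(p_hat, n, delta):
    """Upper bound on the true failure probability, valid w.p. >= 1 - delta."""
    return p_hat + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Rollouts needed to certify a 1% tolerance at 99.9% confidence (assumed targets).
n = hoeffding_samples(t=0.01, delta=1e-3)
bound = hoeffding_bound(p_hat=0.0, n=n, delta=1e-3)  # if no failures observed
```

Because the bound is distribution-free, it applies to any supervised approximation of the MPC, which is what turns the offline validation into a closed-loop statistical guarantee.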
A Probabilistic Approach to Robust Optimal Experiment Design with Chance Constraints
Accurate estimation of parameters is paramount in developing high-fidelity
models for complex dynamical systems. Model-based optimal experiment design
(OED) approaches enable systematic design of dynamic experiments to generate
input-output data sets with high information content for parameter estimation.
Standard OED approaches, however, face two challenges: (i) experiment design
under incomplete system information due to unknown true parameters, which
usually requires many iterations of OED; (ii) inability to systematically
account for the inherent uncertainties of complex systems, which can lead to
diminished effectiveness of the designed optimal excitation signal as well as
violation of system constraints. This paper presents a robust OED approach for
nonlinear systems with arbitrarily-shaped time-invariant probabilistic
uncertainties. Polynomial chaos is used for efficient uncertainty propagation.
The distinct feature of the robust OED approach is the inclusion of chance
constraints to ensure constraint satisfaction in a stochastic setting. The
presented approach is demonstrated by optimal experimental design for the
JAK-STAT5 signaling pathway that regulates various cellular processes in a
biological cell.
Comment: Submitted to ADCHEM 201
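The polynomial chaos propagation used above can be sketched non-intrusively for a scalar Gaussian parameter: evaluate the model at Gauss-Hermite quadrature nodes and read off the mean and variance of the expansion. The model response y(theta) = exp(theta) and the numbers below are stand-ins for illustration, not the JAK-STAT5 model.

```python
import numpy as np

mu, sigma = 0.0, 0.2  # assumed Gaussian parameter theta ~ N(mu, sigma^2)

# Gauss-Hermite nodes/weights for weight exp(-x^2); rescale for N(mu, sigma^2).
nodes, weights = np.polynomial.hermite.hermgauss(10)
theta = mu + sigma * np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

y = np.exp(theta)                 # model evaluations at the quadrature nodes
mean = np.dot(w, y)               # zeroth-order PCE coefficient (expected value)
var = np.dot(w, (y - mean) ** 2)  # total variance of the expansion
```

In the OED setting these moments (and sampled realizations of the expansion) feed the information objective and the chance constraints, replacing costly Monte-Carlo runs of the full dynamics.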