Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems
Learning-based control algorithms require data collection with abundant
supervision for training. Safe exploration algorithms ensure the safety of this
data collection process even when only partial knowledge is available. We
present a new approach for optimal motion planning with safe exploration that
integrates chance-constrained stochastic optimal control with dynamics learning
and feedback control. We derive an iterative convex optimization algorithm that
solves an Information-cost Stochastic Nonlinear Optimal Control (Info-SNOC)
problem. The optimization objective encodes both optimal performance and
exploration for learning, and safety is incorporated as distributionally
robust chance constraints. The dynamics are predicted from a robust regression
model that is learned from data. The Info-SNOC algorithm is used to compute a
sub-optimal pool of safe motion plans that aid in exploration for learning
unknown residual dynamics under safety constraints. A stable feedback
controller is used to execute the motion plan and collect data for model
learning. We prove the safety of the rollouts generated by our exploration method
and the reduction in uncertainty over epochs, thereby guaranteeing the consistency of
our learning method. We validate the effectiveness of Info-SNOC by designing
and implementing a pool of safe trajectories for a planar robot. We demonstrate
that our approach has a higher success rate in ensuring safety when compared to a
deterministic trajectory optimization approach.
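Distributionally robust chance constraints of the kind described above are commonly handled by tightening the nominal state constraint with a multiple of the state's standard deviation. The following is a minimal sketch of that standard reformulation, not the Info-SNOC algorithm itself: it assumes a single linear constraint, a known mean/covariance pair from the learned model, and illustrative values for the violation probability and the goal, and it uses cvxpy to solve the resulting convex surrogate.

```python
# Minimal sketch (not the paper's Info-SNOC formulation): a single
# distributionally robust chance constraint Pr(a^T x > b) <= eps is
# tightened to a^T mu + kappa * ||Sigma^{1/2} a|| <= b with
# kappa = sqrt((1 - eps) / eps), which holds for any distribution with
# the given mean and covariance (Cantelli/Chebyshev-style bound).
import numpy as np
import cvxpy as cp

eps = 0.05                      # allowed violation probability (illustrative)
a = np.array([1.0, 0.0])        # half-space normal of the state constraint a^T x <= b
b = 2.0                         # half-space offset (illustrative)
Sigma = np.diag([0.04, 0.04])   # state covariance from the learned model (assumed)
kappa = np.sqrt((1 - eps) / eps)

mu = cp.Variable(2)             # mean state to be planned
goal = np.array([1.5, 1.0])     # illustrative target

# Deterministic surrogate of the chance constraint: shift the boundary
# inward by kappa times the standard deviation along the normal a.
tightening = kappa * np.linalg.norm(np.linalg.cholesky(Sigma).T @ a)
constraints = [a @ mu + tightening <= b]

# Toy quadratic objective standing in for the performance + information cost.
objective = cp.Minimize(cp.sum_squares(mu - goal))
cp.Problem(objective, constraints).solve()
print(mu.value)
```

The planned mean is pushed away from the constraint boundary by an amount that grows with the model's uncertainty, which is the basic mechanism by which the chance constraint keeps exploration safe.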
Safe Q-learning for continuous-time linear systems
Q-learning is a promising method for solving optimal control problems for
uncertain systems without the explicit need for system identification. However,
approaches for continuous-time Q-learning have limited provable safety
guarantees, which restricts their applicability to real-time safety-critical
systems. This paper proposes a safe Q-learning algorithm for partially unknown
linear time-invariant systems to solve the linear quadratic regulator problem
with user-defined state constraints. We frame the safe Q-learning problem as a
constrained optimal control problem using reciprocal control barrier functions
and show that such an extension provides a safety-assured control policy. To
the best of our knowledge, Q-learning for continuous-time systems with state
constraints has not yet been reported in the literature.
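Reciprocal control barrier functions take the form B(x) = 1/h(x), where h(x) >= 0 defines the user-specified safe set, so the barrier grows without bound as the state approaches the constraint boundary. The sketch below illustrates that mechanism on a toy quadratic stage cost; the weights, constraint, and penalty scale are illustrative assumptions, and this is not the paper's safe Q-learning algorithm.

```python
# Minimal sketch, assuming a reciprocal control barrier function
# B(x) = 1 / h(x) for the box constraint h(x) = x_max - x1 >= 0,
# added to an LQR-style stage cost. B blows up as h -> 0, so any
# policy optimizing this cost is driven away from the boundary.
# All gains and limits below are illustrative, not from the paper.
import numpy as np

x_max = 1.0                       # user-defined state constraint x1 <= x_max
Q = np.diag([1.0, 0.1])           # LQR state weight (assumed)
R = np.array([[0.01]])            # LQR input weight (assumed)

def reciprocal_barrier(x):
    h = x_max - x[0]              # distance to the constraint boundary
    if h <= 0:
        return np.inf             # outside the safe set
    return 1.0 / h                # grows without bound as h -> 0

def stage_cost(x, u, mu=1e-3):
    # Quadratic LQR cost augmented with the reciprocal barrier term.
    return x @ Q @ x + u @ R @ u + mu * reciprocal_barrier(x)

x = np.array([0.9, 0.0])          # state close to the boundary
u = np.array([0.5])
print(stage_cost(x, u))           # barrier term dominates near x1 = x_max
```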