Safe Planning in Dynamic Environments using Conformal Prediction
We propose a framework for planning in unknown dynamic environments with
probabilistic safety guarantees using conformal prediction. Particularly, we
design a model predictive controller (MPC) that uses i) trajectory predictions
of the dynamic environment, and ii) prediction regions quantifying the
uncertainty of the predictions. To obtain prediction regions, we use conformal
prediction, a statistical tool for uncertainty quantification that requires the
availability of offline trajectory data, a reasonable assumption in many
applications such as autonomous driving. The prediction regions are valid,
i.e., they hold with a user-defined probability, so that the MPC is provably
safe. We illustrate the results in the self-driving car simulator CARLA at a
pedestrian-filled intersection. The strength of our approach is its compatibility
with state-of-the-art trajectory predictors, e.g., RNNs and LSTMs, while making
no assumptions on the underlying trajectory-generating distribution. To the
best of our knowledge, these are the first results that provide valid safety
guarantees in such a setting.
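The core statistical step described above can be sketched with split conformal prediction: calibrate a radius on held-out prediction errors so that future errors stay within it with a user-defined probability. This is a minimal illustrative sketch, not the paper's implementation; the function names and the calibration data are hypothetical.

```python
import numpy as np

def conformal_radius(errors, alpha=0.1):
    """Split conformal prediction: given nonconformity scores (e.g. prediction
    errors on a held-out calibration set), return a radius R such that a new
    error exceeds R with probability at most alpha, assuming exchangeability."""
    n = len(errors)
    # Finite-sample corrected quantile level ceil((n+1)(1-alpha))/n
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(np.asarray(errors), min(q, 1.0), method="higher")

# Hypothetical calibration data: errors between predicted and observed
# pedestrian positions (the distribution is illustrative only)
rng = np.random.default_rng(0)
cal_errors = np.abs(rng.normal(0.0, 0.5, size=200))
R = conformal_radius(cal_errors, alpha=0.1)
# A ball of radius R around each predicted position is then a valid 90%
# prediction region, which an MPC can treat as an obstacle to avoid.
```

The key point is that no distributional assumption is needed beyond exchangeability of the calibration and test errors, which is why the approach composes with arbitrary learned predictors.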
Game-Theoretic Safety Assurance for Human-Centered Robotic Systems
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must have the ability to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like our homes, cities, and roads makes it unviable to rely on common design-time assumptions, since these may be violated once the system is deployed. Instead, the next generation of robotic technologies will need to reason about safety online, constructing high-confidence assurances informed by ongoing observations of the environment and other agents, in spite of models of them being necessarily fallible.

This dissertation aims to lay down the necessary foundations to enable autonomous systems to ensure their own safety in complex, changing, and uncertain environments, by explicitly reasoning about the gap between their models and the real world. It first introduces a suite of novel robust optimal control formulations and algorithmic tools that permit tractable safety analysis in time-varying, multi-agent systems, as well as safe real-time robotic navigation in partially unknown environments; these approaches are demonstrated on large-scale unmanned air traffic simulation and physical quadrotor platforms. After this, it draws on Bayesian machine learning methods to translate model-based guarantees into high-confidence assurances, monitoring the reliability of predictive models in light of changing evidence about the physical system and surrounding agents. This principle is first applied to a general safety framework allowing the use of learning-based control (e.g. reinforcement learning) for safety-critical robotic systems such as drones, and then combined with insights from cognitive science and dynamic game theory to enable safe human-centered navigation and interaction; these techniques are showcased on physical quadrotors (flying in unmodeled wind and among human pedestrians) and simulated highway driving.

The dissertation ends with a discussion of challenges and opportunities ahead, including the bridging of safety analysis and reinforcement learning and the need to "close the loop" around learning and adaptation in order to deploy increasingly advanced autonomous systems with confidence.
Vision-Based Uncertainty-Aware Motion Planning based on Probabilistic Semantic Segmentation
For safe operation, a robot must be able to avoid collisions in uncertain
environments. Existing approaches for motion planning under uncertainties often
assume parametric obstacle representations and Gaussian uncertainty, which can
be inaccurate. While visual perception can deliver a more accurate
representation of the environment, its use for safe motion planning is limited
by the inherent miscalibration of neural networks and the challenge of
obtaining adequate datasets. To address these limitations, we propose to employ
ensembles of deep semantic segmentation networks trained with massively
augmented datasets to ensure reliable probabilistic occupancy information. To
avoid conservatism during motion planning, we directly employ the probabilistic
perception in a scenario-based path planning approach. A velocity scheduling
scheme is applied to the path to ensure a safe motion despite tracking
inaccuracies. We demonstrate the effectiveness of the massive data augmentation
in combination with deep ensembles and the proposed scenario-based planning
approach in comparison with state-of-the-art methods, and validate our framework
in an experiment with a human hand as an obstacle.
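The combination of deep ensembles and scenario-based planning described above can be sketched as follows: average per-cell obstacle probabilities across ensemble members, then sample occupancy scenarios from those probabilities to score candidate paths. This is an illustrative sketch under simplifying assumptions (a discrete occupancy grid, Bernoulli-independent cells); the names and data are hypothetical, not from the paper.

```python
import numpy as np

def ensemble_occupancy(member_probs):
    """member_probs: array (M, H, W), each member's per-cell P(obstacle).
    Averaging across ensemble members typically improves calibration."""
    return member_probs.mean(axis=0)

def sample_scenarios(occ_prob, n_scenarios, rng):
    """Draw Bernoulli occupancy scenarios (n_scenarios, H, W) for
    scenario-based path evaluation."""
    return rng.random((n_scenarios,) + occ_prob.shape) < occ_prob

# Hypothetical 3-member ensemble over a 4x4 occupancy grid
rng = np.random.default_rng(1)
member_probs = rng.random((3, 4, 4))
occ = ensemble_occupancy(member_probs)
scenarios = sample_scenarios(occ, n_scenarios=100, rng=rng)

# Score a candidate path: fraction of scenarios in which it is collision-free
path_cells = [(0, 0), (1, 1)]
free_frac = np.mean([not any(s[c] for c in path_cells) for s in scenarios])
```

Using sampled scenarios rather than a hard occupancy threshold is what lets the planner consume the probabilistic perception directly instead of inflating obstacles conservatively.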