Multi-Agent Reachability Calibration with Conformal Prediction
We investigate methods to provide safety assurances for autonomous agents
that incorporate predictions of other, uncontrolled agents' behavior into their
own trajectory planning. Given a learning-based forecasting model that predicts
agents' trajectories, we introduce a method for providing probabilistic
assurances on the model's prediction error with calibrated confidence
intervals. Through quantile regression, conformal prediction, and reachability
analysis, our method generates probabilistically safe and dynamically feasible
prediction sets. We showcase their utility in certifying the safety of planning
algorithms, both in simulations using actual autonomous driving data and in an
experiment with Boeing vehicles.
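As a rough illustration of the conformal calibration step such a method relies on, the following Python sketch computes a per-horizon-step error radius from a held-out calibration set. It shows only plain split conformal prediction; the paper's quantile regression and reachability components are not reproduced, and the function name, array shapes, and Euclidean nonconformity score are illustrative assumptions.

```python
import numpy as np

def conformal_error_radius(pred_calib, true_calib, alpha=0.1):
    """Split conformal calibration of a trajectory forecaster's error.

    pred_calib, true_calib: arrays of shape (n_calib, horizon, 2) holding
    predicted and observed (x, y) positions on a held-out calibration set.
    Returns one radius per horizon step such that, marginally over the
    calibration draw, the true position lies within that radius of the
    prediction with probability at least 1 - alpha for a fresh trajectory.
    """
    n = pred_calib.shape[0]
    # Nonconformity score: Euclidean prediction error at each horizon step.
    scores = np.linalg.norm(pred_calib - true_calib, axis=-1)  # (n, horizon)
    # Conformal quantile level with the finite-sample correction
    # ceil((n + 1) * (1 - alpha)) / n, taken as an order statistic.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, axis=0, method="higher")  # (horizon,)
```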
Conformal Policy Learning for Sensorimotor Control Under Distribution Shifts
This paper focuses on the problem of detecting and reacting to changes in the
distribution of a sensorimotor controller's observables. The key idea is the
design of switching policies that take conformal quantiles as input, an
approach we call conformal policy learning, which allows robots to detect
distribution shifts with formal statistical guarantees. We show how to design
such policies by using conformal quantiles to switch between base policies with
different characteristics, e.g., safety or speed, or by directly augmenting a
policy's observation with a quantile and training it with reinforcement learning.
Theoretically, we show that such policies achieve formal convergence
guarantees in finite time. In addition, we thoroughly evaluate their advantages
and limitations on two compelling use cases: simulated autonomous driving and
active perception with a physical quadruped. Empirical results demonstrate that
our approach outperforms five baselines. It is also the simplest of the
evaluated strategies, aside from one ablation. Being easy to use, flexible, and backed by
formal guarantees, our work demonstrates how conformal prediction can be an
effective tool for sensorimotor learning under uncertainty.
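The following sketch illustrates one of the two designs described above, switching between base policies when a running conformal quantile of a nonconformity score becomes large. It is a minimal sketch under stated assumptions: the base policies, the score function, the sliding calibration window, and the scalar threshold are all hypothetical placeholders rather than the paper's implementation.

```python
import numpy as np
from collections import deque

class ConformalSwitchingPolicy:
    """Switch from a fast base policy to a cautious one when the running
    conformal quantile of a nonconformity score exceeds a threshold.

    `fast_policy`, `safe_policy`, and `score_fn` are hypothetical callables;
    `score_fn(obs)` should return a scalar nonconformity score, e.g. a
    distance of the current observation from the training distribution.
    """

    def __init__(self, fast_policy, safe_policy, score_fn,
                 alpha=0.1, window=200, threshold=1.0):
        self.fast, self.safe, self.score_fn = fast_policy, safe_policy, score_fn
        self.alpha, self.threshold = alpha, threshold
        self.scores = deque(maxlen=window)  # sliding calibration window

    def act(self, obs):
        self.scores.append(self.score_fn(obs))
        n = len(self.scores)
        # Conformal (1 - alpha) quantile over the recent scores.
        q_level = min(1.0, np.ceil((n + 1) * (1 - self.alpha)) / n)
        quantile = np.quantile(np.array(self.scores), q_level)
        # A large quantile suggests a distribution shift: use the cautious policy.
        policy = self.safe if quantile > self.threshold else self.fast
        return policy(obs)
```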
How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review
Context: Machine Learning (ML) has been at the heart of many innovations over
the past years. However, including it in so-called 'safety-critical' systems
such as automotive or aeronautic ones has proven to be very challenging, since
the paradigm shift that ML brings completely changes traditional certification
approaches.
Objective: This paper aims to elucidate challenges related to the
certification of ML-based safety-critical systems, as well as the solutions
that are proposed in the literature to tackle them, answering the question 'How
to Certify Machine Learning Based Safety-critical Systems?'.
Method: We conduct a Systematic Literature Review (SLR) of research papers
published between 2015 and 2020, covering topics related to the certification of
ML systems. In total, we identified 217 papers covering topics considered to be
the main pillars of ML certification: Robustness, Uncertainty, Explainability,
Verification, Safe Reinforcement Learning, and Direct Certification. We
analyzed the main trends and problems of each sub-field and provided summaries
of the papers extracted.
Results: The SLR results highlighted the enthusiasm of the community for this
subject, as well as the lack of diversity in terms of datasets and types of
models. It also emphasized the need to further develop connections between
academia and industry to deepen the study of the domain. Finally, it also
illustrated the need to build connections between the above-mentioned pillars,
which are for now mainly studied separately.
Conclusion: We highlight current efforts deployed to enable the
certification of ML-based software systems and discuss some future research
directions.
Safe Planning in Dynamic Environments using Conformal Prediction
We propose a framework for planning in unknown dynamic environments with
probabilistic safety guarantees using conformal prediction. In particular, we
design a model predictive controller (MPC) that uses i) trajectory predictions
of the dynamic environment, and ii) prediction regions quantifying the
uncertainty of the predictions. To obtain prediction regions, we use conformal
prediction, a statistical tool for uncertainty quantification that requires the
availability of offline trajectory data, a reasonable assumption in many
applications such as autonomous driving. The prediction regions are valid,
i.e., they hold with a user-defined probability, so that the MPC is provably
safe. We illustrate the results in the self-driving car simulator CARLA at a
pedestrian-filled intersection. The strength of our approach is its compatibility
with state-of-the-art trajectory predictors, e.g., RNNs and LSTMs, while making
no assumptions on the underlying trajectory-generating distribution. To the
best of our knowledge, these are the first results that provide valid safety
guarantees in such a setting.
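To make the role of the prediction regions concrete, here is a small sketch of the collision check an MPC planner could apply to a candidate trajectory, treating each predicted pedestrian position as an obstacle inflated by its conformal radius. The function and its arguments are hypothetical, and the paper's full MPC formulation is not reproduced.

```python
import numpy as np

def collision_free(robot_traj, pedestrian_pred, conformal_radii, robot_radius=0.5):
    """Check an MPC candidate trajectory against conformal prediction regions.

    robot_traj: (horizon, 2) planned robot positions.
    pedestrian_pred: (horizon, 2) predicted pedestrian positions, e.g. from an LSTM.
    conformal_radii: (horizon,) per-step radii from conformal calibration, so that
    the true pedestrian position lies inside the ball with user-defined probability.
    The planner treats each prediction as an inflated obstacle: the constraint
    ||p_robot - p_pred|| >= robot_radius + r_t must hold at every step t.
    """
    dists = np.linalg.norm(robot_traj - pedestrian_pred, axis=-1)
    return bool(np.all(dists >= robot_radius + conformal_radii))
```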
GAS: Generating Fast and Accurate Surrogate Models for Autonomous Vehicle Systems
Modern autonomous vehicle systems use complex perception and control
components. These components can rapidly change during development of such
systems, requiring constant re-testing. Unfortunately, high-fidelity
simulations of these complex systems for evaluating vehicle safety are costly.
The complexity also hinders the creation of less computationally intensive
surrogate models.
We present GAS, the first approach for creating surrogate models of complete
(perception, control, and dynamics) autonomous vehicle systems containing
complex perception and/or control components. GAS's two-stage approach first
replaces complex perception components with a perception model. Then, GAS
constructs a polynomial surrogate model of the complete vehicle system using
Generalized Polynomial Chaos (GPC). We demonstrate the use of these surrogate
models in two applications. First, we estimate the probability that the vehicle
will enter an unsafe state over time. Second, we perform global sensitivity
analysis of the vehicle system with respect to its state in a previous time
step. GAS's approach also allows for reuse of the perception model when vehicle
control and dynamics characteristics are altered during vehicle development,
saving significant time.
We consider five scenarios concerning crop management vehicles that must not
crash into adjacent crops, self-driving cars that must stay within their lane,
and unmanned aircraft that must avoid collision. Each of the systems in these
scenarios contains a complex perception or control component. Using GAS, we
generate surrogate models for these systems, and evaluate the generated models
in the applications described above. GAS's surrogate models provide large
average speedups for safe state probability estimation and for sensitivity
analysis, while still maintaining high accuracy.
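A simplified picture of the surrogate-model idea, using ordinary polynomial regression in place of GAS's Generalized Polynomial Chaos and a made-up one-step vehicle function, might look as follows: fit a cheap polynomial to a modest number of expensive evaluations, then estimate the probability of an unsafe state by Monte Carlo on the surrogate. Every name and number below is an illustrative assumption.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical expensive step function: maps an uncertain state (lateral offset,
# heading error) to the lateral deviation after one control cycle.
def expensive_vehicle_step(x):
    return 0.8 * x[:, 0] + 0.3 * np.sin(x[:, 1]) + 0.05 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(0)

# Small design of experiments over the uncertain state.
x_train = rng.uniform(-1.0, 1.0, size=(200, 2))
y_train = expensive_vehicle_step(x_train)

# Degree-3 polynomial surrogate fit by least squares (a simplified stand-in for
# the Generalized Polynomial Chaos expansion used by GAS).
surrogate = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
surrogate.fit(x_train, y_train)

# Cheap Monte Carlo on the surrogate: probability the vehicle leaves its lane.
x_mc = rng.uniform(-1.0, 1.0, size=(100_000, 2))
lane_half_width = 0.9
p_unsafe = np.mean(np.abs(surrogate.predict(x_mc)) > lane_half_width)
print(f"estimated probability of unsafe state: {p_unsafe:.4f}")
```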
Bayesian Learning-Based Adaptive Control for Safety Critical Systems
Deep learning has enjoyed much recent success, and applying state-of-the-art
model learning methods to controls is an exciting prospect. However, there is a
strong reluctance to use these methods on safety-critical systems, which have
constraints on safety, stability, and real-time performance. We propose a
framework which satisfies these constraints while allowing the use of deep
neural networks for learning model uncertainties. Central to our method is the
use of Bayesian model learning, which provides an avenue for maintaining
appropriate degrees of caution in the face of the unknown. In the proposed
approach, we develop an adaptive control framework leveraging the theory of
stochastic CLFs (Control Lyapunov Functions) and stochastic CBFs (Control
Barrier Functions) along with tractable Bayesian model learning via Gaussian
Processes or Bayesian neural networks. Under reasonable assumptions, we
guarantee stability and safety while adapting to unknown dynamics with
probability 1. We demonstrate this architecture for high-speed terrestrial
mobility, targeting potential applications in safety-critical high-speed Mars
rover missions.
Comment: Corrected an error in Section II, where the problem was previously
introduced in a non-stochastic setting and the solution to an ODE with
Gaussian-distributed parametric uncertainty was wrongly assumed to be
equivalent to an SDE with a learned diffusion term. See Lew, T. et al., "On the
Problem of Reformulating Systems with Uncertain Dynamics as a Stochastic
Differential Equation".
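For intuition, the sketch below shows the deterministic control barrier function (CBF) safety-filter step that such adaptive frameworks build on: the smallest correction to a nominal control that keeps the CBF condition satisfied. The stochastic CLF/CBF machinery and the Bayesian (GP or BNN) uncertainty margin from the paper are omitted, and all symbols are generic placeholders.

```python
import numpy as np

def cbf_safety_filter(u_nom, grad_h, f, g, h, alpha=1.0):
    """Minimal deterministic CBF safety filter sketch.

    Solves  min_u ||u - u_nom||^2  s.t.  dh/dt + alpha * h >= 0  for
    control-affine dynamics  x_dot = f(x) + g(x) u,  where h(x) >= 0 defines
    the safe set. With a single affine constraint the QP has a closed form.
    A learned model would supply f and g plus an uncertainty margin; that
    margin is omitted in this deterministic sketch.
    """
    a = grad_h @ g                        # constraint direction in control space
    b = -(grad_h @ f) - alpha * h         # constraint:  a @ u >= b
    slack = b - a @ u_nom
    if slack <= 0.0:                      # nominal control already satisfies the CBF condition
        return u_nom
    # Otherwise apply the smallest correction that restores a @ u >= b.
    return u_nom + (slack / (a @ a + 1e-12)) * a

# Example: keep velocity below v_max for state x = [p, v] with x_dot = [v, u].
x = np.array([0.0, 1.8])
v_max = 2.0
u_safe = cbf_safety_filter(
    u_nom=np.array([3.0]),
    grad_h=np.array([0.0, -1.0]),         # h(x) = v_max - v
    f=np.array([x[1], 0.0]),
    g=np.array([[0.0], [1.0]]),
    h=v_max - x[1],
    alpha=2.0,
)
# u_safe caps the command at alpha * (v_max - v) = 0.4 here.
```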
Efficient Uncertainty Quantification and Reduction for Over-Parameterized Neural Networks
Uncertainty quantification (UQ) is important for reliability assessment and
enhancement of machine learning models. In deep learning, uncertainties arise
not only from data, but also from the training procedure, which often injects
substantial noise and bias. These hinder the attainment of statistical
guarantees and, moreover, impose computational challenges on UQ due to the need
for repeated network retraining. Building upon the recent neural tangent kernel
theory, we create statistically guaranteed schemes to principally
quantify, and remove, the procedural uncertainty of
over-parameterized neural networks with very low computation effort. In
particular, our approach, based on what we call a procedural-noise-correcting
(PNC) predictor, removes the procedural uncertainty by using only one
auxiliary network that is trained on a suitably labeled data set, instead of
many retrained networks employed in deep ensembles. Moreover, by combining our
PNC predictor with suitable light-computation resampling methods, we build
several approaches to construct asymptotically exact-coverage confidence
intervals using as few as four trained networks without additional overheads
- …