Reachability-Based Confidence-Aware Probabilistic Collision Detection in Highway Driving
Risk assessment is a crucial component of collision warning and avoidance
systems in intelligent vehicles. To accurately detect potential vehicle
collisions, reachability-based formal approaches have been developed to ensure
driving safety, but suffer from over-conservatism, potentially leading to
false-positive risk events in complicated real-world applications. In this
work, we combine two reachability analysis techniques, i.e., backward reachable
set (BRS) and stochastic forward reachable set (FRS), and propose an integrated
probabilistic collision detection framework in highway driving. Within the
framework, a BRS is first used to formally verify whether a two-vehicle
interaction is safe; if it is not, a prediction-based stochastic FRS is
employed to estimate the collision probability at each future time step. In doing so, the
framework can not only identify non-risky events with guaranteed safety, but
also provide accurate collision risk estimation in safety-critical events. To
construct the stochastic FRS, we develop a neural network-based acceleration
model for surrounding vehicles, and further incorporate confidence-aware
dynamic belief to improve the prediction accuracy. Extensive experiments are
conducted to validate the performance of the acceleration prediction model
based on naturalistic highway driving data, and the efficiency and
effectiveness of the framework with the infused confidence belief are tested
both in naturalistic and simulated highway scenarios. The proposed risk
assessment framework is promising in real-world applications.
Comment: Under review at Engineering. arXiv admin note: text overlap with
arXiv:2205.0135
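The two-stage logic the abstract describes can be illustrated with a toy 1-D car-following sketch. All numbers, the braking-distance BRS check, and the fixed Gaussian acceleration model are illustrative assumptions (the paper learns accelerations with a confidence-aware neural model); this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def provably_safe(gap, closing_speed, max_brake=8.0):
    # Hypothetical BRS-style check for 1-D car following: the interaction
    # is formally safe if the follower can stop within the current gap
    # under maximum braking.
    if closing_speed <= 0.0:
        return True
    return closing_speed**2 / (2.0 * max_brake) < gap

def stochastic_frs_risk(gap, closing_speed, horizon=3.0, dt=0.1, n=2000):
    # Monte-Carlo stand-in for a stochastic FRS: sample relative
    # accelerations and count trajectories whose gap closes to zero
    # within the horizon.
    accel = rng.normal(0.0, 1.5, size=n)
    g = np.full(n, float(gap))
    v = np.full(n, float(closing_speed))
    hit = np.zeros(n, dtype=bool)
    for _ in range(int(horizon / dt)):
        v = v + accel * dt
        g = g - v * dt
        hit |= g <= 0.0
    return float(hit.mean())

def assess(gap, closing_speed):
    # Stage 1: formal BRS check (risk 0 if provably safe);
    # Stage 2: probabilistic collision estimate from the stochastic FRS.
    if provably_safe(gap, closing_speed):
        return 0.0
    return stochastic_frs_risk(gap, closing_speed)
```

A large gap with slow closing is certified risk-free by the formal check alone; a small gap with fast closing falls through to the probabilistic estimate.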
Recovery Policies for Safe Exploration of Lunar Permanently Shadowed Regions by a Solar-Powered Rover
The success of a multi-kilometre drive by a solar-powered rover at the lunar
south pole depends upon careful planning in space and time due to highly
dynamic solar illumination conditions. An additional challenge is that the
rover may be subject to random faults that can temporarily delay long-range
traverses. The majority of existing global spatiotemporal planners assume a
deterministic rover-environment model and do not account for random faults. In
this paper, we consider a random fault profile with a known, average spatial
fault rate. We introduce a methodology to compute recovery policies that
maximize the probability of survival of a solar-powered rover from different
start states. A recovery policy defines a set of recourse actions to reach a
safe location with sufficient battery energy remaining, given the local solar
illumination conditions. We solve a stochastic reach-avoid problem using
dynamic programming to find an optimal recovery policy. Our focus, in part, is
on the implications of state space discretization, which is required in
practical implementations. We propose a modified dynamic programming algorithm
that conservatively accounts for approximation errors. To demonstrate the
benefits of our approach, we compare against existing methods in scenarios
where a solar-powered rover seeks to safely exit from permanently shadowed
regions in the Cabeus area at the lunar south pole. We also highlight the
relevance of our methodology for mission formulation and trade safety analysis
by comparing different rover mobility models in simulated recovery drives from
the LCROSS impact region.
Comment: In Acta Astronautica, vol. 213, pp. 708-724, Dec. 202
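The stochastic reach-avoid computation can be sketched with a toy backward induction. The grid size, fault probability, and action set below are invented for illustration and are far simpler than the paper's rover-environment model and conservative discretization scheme.

```python
import numpy as np

# Toy stochastic reach-avoid DP for a recovery drive: the rover at cell x
# with battery b either drives toward the safe cell or waits; each drive
# step suffers a fault with probability P_FAULT, spending one battery unit
# without making progress. V[x, b] is the maximum probability of reaching
# the safe cell before the battery is exhausted.
N_CELLS, N_BATT, P_FAULT = 10, 15, 0.2

V = np.zeros((N_CELLS, N_BATT + 1))
V[N_CELLS - 1, :] = 1.0              # already at the safe location

# Battery strictly decreases each step, so a single sweep ordered by
# battery level is an exact backward induction.
for b in range(1, N_BATT + 1):
    for x in range(N_CELLS - 2, -1, -1):
        wait = V[x, b - 1]           # hold position, spend keep-alive power
        drive = (1 - P_FAULT) * V[x + 1, b - 1] + P_FAULT * V[x, b - 1]
        V[x, b] = max(drive, wait)   # optimal recourse action at (x, b)
```

The resulting table doubles as the recovery policy: at each state, the action attaining the maximum is the recourse action to take.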
ISAACS: Iterative Soft Adversarial Actor-Critic for Safety
The deployment of robots in uncontrolled environments requires them to
operate robustly under previously unseen scenarios, like irregular terrain and
wind conditions. Unfortunately, while rigorous safety frameworks from robust
optimal control theory scale poorly to high-dimensional nonlinear dynamics,
control policies computed by more tractable "deep" methods lack guarantees and
tend to exhibit little robustness to uncertain operating conditions. This work
introduces a novel approach enabling scalable synthesis of robust
safety-preserving controllers for robotic systems with general nonlinear
dynamics subject to bounded modeling error by combining game-theoretic safety
analysis with adversarial reinforcement learning in simulation. Following a
soft actor-critic scheme, a safety-seeking fallback policy is co-trained with
an adversarial "disturbance" agent that aims to invoke the worst-case
realization of model error and training-to-deployment discrepancy allowed by
the designer's uncertainty. While the learned control policy does not
intrinsically guarantee safety, it is used to construct a real-time safety
filter (or shield) with robust safety guarantees based on forward reachability
rollouts. This shield can be used in conjunction with a safety-agnostic control
policy, precluding any task-driven actions that could result in loss of safety.
We evaluate our learning-based safety approach in a 5D race car simulator,
compare the learned safety policy to the numerically obtained optimal solution,
and empirically validate the robust safety guarantee of our proposed safety
shield against worst-case model discrepancy.
Comment: Accepted in 5th Annual Learning for Dynamics & Control Conference
(L4DC), University of Pennsylvani
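The rollout-based safety filter can be sketched on a 1-D double integrator approaching a wall at x = 0. The dynamics, bounds, and horizon are illustrative assumptions rather than the paper's 5-D race car, and the fallback here is analytic braking in place of the learned safety policy.

```python
DT, A_MAX, D_MAX, T = 0.05, 3.0, 0.5, 200   # step size, control/disturbance bounds

def step(x, v, a, d=-D_MAX):
    # Euler step; the worst-case disturbance d pushes the car toward the wall.
    return x + v * DT, v + (a + d) * DT

def fallback_safe(x, v):
    # Forward-rollout check: apply the braking fallback (a = +A_MAX) under
    # the worst-case disturbance and see whether x ever crosses the wall.
    for _ in range(T):
        if x <= 0.0:
            return False
        if v >= 0.0:          # moving away under net positive acceleration
            return True
        x, v = step(x, v, A_MAX)
    return x > 0.0

def shield(x, v, task_accel):
    # Least-restrictive filtering: keep the task-driven action only if the
    # fallback can still certify safety from the next worst-case state;
    # otherwise override with the fallback (brake).
    nx, nv = step(x, v, task_accel)
    return task_accel if fallback_safe(nx, nv) else A_MAX
```

Far from the wall the task policy acts freely; near the boundary of the safe set the shield overrides it, which is the "precluding any task-driven actions that could result in loss of safety" behavior the abstract describes.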
Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis
Exploring data requires a fast feedback loop from the analyst to the system,
with a latency below about 10 seconds because of human cognitive limitations.
When data becomes large or analysis becomes complex, sequential computations
can no longer be completed in a few seconds and data exploration is severely
hampered. This article describes a novel computation paradigm called
Progressive Computation for Data Analysis, or more concisely Progressive
Analytics, which provides a low-latency guarantee at the programming-language
level by performing computations in a progressive fashion. Moving progressive
computation to the language level relieves programmers of exploratory data
analysis systems from implementing the whole analytics pipeline progressively
from scratch, streamlining the implementation of scalable exploratory data
analysis systems. This article describes the new
paradigm through a prototype implementation called ProgressiVis, and explains
the requirements it implies through examples.
Comment: 10 page
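The paradigm can be sketched with a generator that computes in time-bounded chunks. The function name and structure are ours for illustration, not the ProgressiVis API.

```python
import time

def progressive_mean(stream, quantum=0.05):
    # Progressive-computation sketch: consume the input in time-bounded
    # chunks and yield a (partial_mean, done) pair after each quantum, so
    # the analyst's feedback loop never blocks longer than ~quantum seconds.
    total, count = 0.0, 0
    it = iter(stream)
    done = False
    while not done:
        deadline = time.monotonic() + quantum
        while time.monotonic() < deadline:
            try:
                x = next(it)
            except StopIteration:
                done = True
                break
            total += x
            count += 1
        if count:
            yield total / count, done
```

A visualization layer can redraw after every yielded partial result, keeping the feedback loop well under the ~10-second cognitive limit regardless of data size.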
Feature-Guided Black-Box Safety Testing of Deep Neural Networks
Despite the improved accuracy of deep neural networks, the discovery of
adversarial examples has raised serious safety concerns. Most existing
approaches for crafting adversarial examples necessitate some knowledge
(architecture, parameters, etc.) of the network at hand. In this paper, we
focus on image classifiers and propose a feature-guided black-box approach to
test the safety of deep neural networks that requires no such knowledge. Our
algorithm employs object detection techniques such as SIFT (Scale Invariant
Feature Transform) to extract features from an image. These features are
converted into a mutable saliency distribution, where high probability is
assigned to pixels that affect the composition of the image with respect to the
human visual system. We formulate the crafting of adversarial examples as a
two-player turn-based stochastic game, where the first player's objective is to
minimise the distance to an adversarial example by manipulating the features,
and the second player can be cooperative, adversarial, or random. We show that,
theoretically, the two-player game can converge to the optimal strategy, and
that the optimal strategy represents a globally minimal adversarial image. For
Lipschitz networks, we also identify conditions that provide safety guarantees
that no adversarial examples exist. Using Monte Carlo tree search we gradually
explore the game state space to search for adversarial examples. Our
experiments show that, despite the black-box setting, manipulations guided by a
perception-based saliency distribution are competitive with state-of-the-art
methods that rely on white-box saliency matrices or sophisticated optimization
procedures. Finally, we show how our method can be used to evaluate robustness
of neural networks in safety-critical applications such as traffic sign
recognition in self-driving cars.
Comment: 35 pages, 5 tables, 23 figure
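The saliency-guided manipulation step can be sketched as follows. The (row, col, response) keypoint triples below are invented stand-ins for real SIFT output, and only player 1's move is shown; player 2 and the Monte Carlo tree search are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def saliency_distribution(keypoints, shape, sigma=2.0):
    # Turn feature keypoints into a pixel-level probability distribution by
    # placing a response-weighted Gaussian bump at each keypoint, so pixels
    # that matter to the image's composition get high probability.
    h, w = shape
    rr, cc = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape)
    for r, c, resp in keypoints:
        sal += resp * np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma**2))
    return sal / sal.sum()

def player_one_move(image, dist, n_pixels=5, delta=0.3):
    # Player 1's move in the two-player game: perturb a few pixels drawn
    # from the saliency distribution, so feature-relevant pixels are
    # manipulated more often.
    idx = rng.choice(dist.size, size=n_pixels, replace=False, p=dist.ravel())
    out = image.copy().ravel()
    out[idx] = np.clip(out[idx] + delta, 0.0, 1.0)
    return out.reshape(image.shape)
```

Repeatedly applying such moves while querying the black-box classifier, with the search guided by MCTS, drives the image toward a minimally distant adversarial example without any knowledge of the network's internals.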