From Specifications to Behavior: Maneuver Verification in a Semantic State Space
To realize a market entry of autonomous vehicles in the foreseeable future,
the behavior planning system will need to abide by the same rules that humans
follow. Product liability cannot be enforced without a proper solution to the
approval trap. In this paper, we define a semantic abstraction of the
continuous space and formalize traffic rules in linear temporal logic (LTL).
Sequences in the semantic state space represent maneuvers a high-level planner
could choose to execute. We check these maneuvers against the formalized
traffic rules using runtime verification. By using the standard model checker
NuSMV, we demonstrate the effectiveness of our approach and provide runtime
properties for the maneuver verification. We show that high-level behavior can
be verified in a semantic state space to fulfill a set of formalized rules,
which could serve as a step towards safety of the intended functionality.
Comment: Published at IEEE Intelligent Vehicles Symposium (IV), 201
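The abstract above checks maneuvers, given as sequences of semantic states, against LTL-formalized traffic rules. A minimal sketch of the idea, assuming a made-up semantic state (the state fields and the specific rule are illustrative, not the paper's formalization, and a plain finite-trace evaluator stands in for NuSMV):

```python
# Hypothetical sketch: check a maneuver (a sequence of semantic states)
# against a traffic rule of the LTL shape G(p -> X q), here "globally,
# whenever the light is red, the vehicle is stopped in the next state".
# Finite-trace semantics; NuSMV would check this symbolically instead.

def holds_g_implies_next(trace, p, q):
    """G(p -> X q) over a finite trace: whenever p holds at step i,
    q must hold at step i+1 (vacuously true at the final step)."""
    for i in range(len(trace) - 1):
        if p(trace[i]) and not q(trace[i + 1]):
            return False
    return True

# Semantic states: coarse abstractions of the continuous space.
maneuver = [
    {"light": "green", "stopped": False},
    {"light": "red",   "stopped": False},  # red light observed...
    {"light": "red",   "stopped": True},   # ...vehicle stops next step
]

ok = holds_g_implies_next(
    maneuver,
    p=lambda s: s["light"] == "red",
    q=lambda s: s["stopped"],
)
print(ok)
```

A maneuver that keeps driving through the red light would make the same check return False, which is exactly the signal a high-level planner needs to discard that maneuver.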
Falsification-Based Robust Adversarial Reinforcement Learning
Reinforcement learning (RL) has achieved tremendous progress in solving
various sequential decision-making problems, e.g., control tasks in robotics.
However, RL methods often fail to generalize to safety-critical scenarios since
policies are overfitted to training environments. Previously, robust
adversarial reinforcement learning (RARL) was proposed to train an adversarial
network that applies disturbances to a system, which improves robustness in
test scenarios. A drawback of neural-network-based adversaries is that
integrating system requirements without handcrafting sophisticated reward
signals is difficult. Safety falsification methods allow one to find a set of
initial conditions as well as an input sequence, such that the system violates
a given property formulated in temporal logic. In this paper, we propose
falsification-based RARL (FRARL), the first generic framework for integrating
temporal-logic falsification in adversarial learning to improve policy
robustness. With the falsification method, we do not need to construct an extra
reward function for the adversary. We evaluate our approach on a braking
assistance system and an adaptive cruise control system of autonomous vehicles.
Experiments show that policies trained with a falsification-based adversary
generalize better and show less violation of the safety specification in test
scenarios than the ones trained without an adversary or with an adversarial
network.
Comment: 11 pages, 3 figures
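The core ingredient above is safety falsification: searching for an initial condition and a disturbance sequence under which the system violates a temporal-logic property. A toy sketch of that search, assuming an invented one-dimensional braking model and random-search falsification (the dynamics, ranges, and thresholds are illustrative, not the paper's benchmarks):

```python
import random

# Illustrative falsification sketch: look for an initial condition
# (gap, speed) and a disturbance sequence under which a simple braking
# controller violates the safety property "the distance to the obstacle
# stays positive at every step".

def simulate(d0, v0, disturbances, dt=0.1, brake=8.0):
    """Roll out the braking system; return True iff the property held."""
    d, v = d0, v0
    for w in disturbances:                 # w: adversarial loss of braking
        a = -brake + w
        v = max(0.0, v + a * dt)
        d -= v * dt
        if d <= 0.0:                       # property violated: collision
            return False
    return True

def falsify(trials=2000, horizon=40, seed=0):
    """Random-search falsification over initial states and disturbances."""
    rng = random.Random(seed)
    for _ in range(trials):
        d0 = rng.uniform(5.0, 15.0)        # initial gap [m]
        v0 = rng.uniform(8.0, 15.0)        # initial speed [m/s]
        ws = [rng.uniform(0.0, 4.0) for _ in range(horizon)]
        if not simulate(d0, v0, ws):
            return d0, v0, ws              # counterexample found
    return None

counterexample = falsify()
print(counterexample is not None)
```

In the FRARL setting, such counterexample traces take the place of a handcrafted adversary reward: the disturbances that falsify the property are exactly the ones the policy is then trained against.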
Search-based optimal motion planning for automated driving
This paper presents a framework for fast and robust motion planning designed
to facilitate automated driving. The framework allows for real-time computation
even for horizons of several hundred meters, thereby enabling automated driving
in urban conditions. This is achieved through several features. Firstly, a
convenient geometrical representation of both the search space and driving
constraints enables the use of classical path planning approaches. Thus, a wide
variety of constraints can be tackled simultaneously (other vehicles, traffic
lights, etc.). Secondly, an exact cost-to-go map, obtained by solving a relaxed
problem, is then used by an A*-based algorithm with a model-predictive flavour in
order to compute the optimal motion trajectory. The algorithm takes into
account both distance and time horizons. The approach is validated within a
simulation study with realistic traffic scenarios. We demonstrate the
capability of the algorithm to devise plans both in fast and slow driving
conditions, even when a full stop is required.
Comment: Preprint accepted to 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018). A supplementary video is available at https://youtu.be/D5XJ5ncSuq
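The key mechanism above is using the exact solution of a relaxed problem as the cost-to-go heuristic for A* on the constrained problem. A minimal sketch, with a grid world standing in for the paper's geometric search space (the grid, obstacles, and unit step costs are assumptions for illustration):

```python
import heapq

# Sketch: solve a relaxed problem (shortest paths on the obstacle-free
# grid, via Dijkstra from the goal) to get an exact cost-to-go map, then
# use that map as the A* heuristic on the constrained problem.

def neighbors(cell, w, h):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < w and 0 <= y + dy < h:
            yield (x + dx, y + dy)

def cost_to_go(goal, w, h):
    """Dijkstra from the goal over the relaxed (obstacle-free) grid."""
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, c = heapq.heappop(pq)
        if d > dist.get(c, float("inf")):
            continue
        for n in neighbors(c, w, h):
            if d + 1 < dist.get(n, float("inf")):
                dist[n] = d + 1
                heapq.heappush(pq, (d + 1, n))
    return dist

def a_star(start, goal, obstacles, w, h):
    h_map = cost_to_go(goal, w, h)  # admissible: relaxation only drops constraints
    g = {start: 0}
    pq = [(h_map[start], start)]
    while pq:
        _, c = heapq.heappop(pq)
        if c == goal:
            return g[c]
        for n in neighbors(c, w, h):
            if n in obstacles:
                continue
            if g[c] + 1 < g.get(n, float("inf")):
                g[n] = g[c] + 1
                heapq.heappush(pq, (g[n] + h_map[n], n))
    return None

# A wall with a single gap at (2, 3) forces a detour.
obstacles = {(2, 0), (2, 1), (2, 2)}
print(a_star((0, 0), (4, 0), obstacles, 5, 4))
```

Because the relaxation only removes constraints, the cost-to-go map never overestimates the true cost, so the heuristic is admissible and the A* search stays optimal while being guided far more tightly than a straight-line heuristic would allow.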
No Driver, No Regulation? -- Online Legal Driving Behavior Monitoring for Self-driving Vehicles
Defined traffic laws must be respected by all vehicles. However, it is
essential to know which behaviors violate the current laws, especially when a
responsibility issue is involved in an accident. This raises the challenges of
digitizing human-driver-oriented traffic laws and continuously monitoring
vehicles' behavior. To address these challenges, this paper aims to
digitize traffic law comprehensively and provide an application for online
monitoring of legal driving behavior for autonomous vehicles. This paper
introduces a layered trigger domain-based traffic law digitization architecture
with digitization-classified discussions and detailed atomic propositions for
online monitoring. The principal laws on a highway and at an intersection are
taken as examples, and the corresponding logic and atomic propositions are
introduced in detail. Finally, the digitized traffic laws are verified on the
Chinese highway and intersection datasets, and defined thresholds are further
discussed according to the driving behaviors in the considered dataset. This
study can help manufacturers and governments define specifications and
laws, and can also serve as a reference for traffic-law compliance
decision-making. Source code is available at
https://github.com/SOTIF-AVLab/DOTL.
Comment: 22 pages, 11 figures
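The monitoring layer above is built from atomic propositions evaluated online against the vehicle state, with a digitized law expressed as a Boolean combination of them. A hedged sketch of that structure (the signal names, thresholds, and the specific rule are illustrative assumptions, not the paper's digitized Chinese traffic law):

```python
# Sketch: atomic propositions over the vehicle state, combined into one
# digitized highway rule ("keep a lawful speed and a safe time headway"),
# evaluated at every monitoring cycle.

SPEED_LIMIT_KPH = 120.0   # assumed highway speed limit
MIN_HEADWAY_S = 2.0       # assumed safe time-headway threshold

def ap_speed_legal(state):
    """Atomic proposition: vehicle speed is within the limit."""
    return state["speed_kph"] <= SPEED_LIMIT_KPH

def ap_headway_safe(state):
    """Atomic proposition: time headway to the lead vehicle is safe."""
    v_ms = state["speed_kph"] / 3.6
    return v_ms == 0.0 or state["gap_m"] / v_ms >= MIN_HEADWAY_S

def monitor(trace):
    """Return the indices of cycles in which the rule is violated."""
    rule = lambda s: ap_speed_legal(s) and ap_headway_safe(s)
    return [i for i, s in enumerate(trace) if not rule(s)]

trace = [
    {"speed_kph": 100.0, "gap_m": 80.0},  # legal
    {"speed_kph": 130.0, "gap_m": 80.0},  # speeding
    {"speed_kph": 100.0, "gap_m": 30.0},  # headway ~1.1 s: too close
]
print(monitor(trace))
```

Keeping the propositions atomic is what makes the layered architecture workable: each law is digitized once as a formula over reusable propositions, and the online monitor only re-evaluates the propositions each cycle.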
A Deontic Logic Analysis of Autonomous Systems' Safety
We consider the pressing question of how to model, verify, and ensure that
autonomous systems meet certain \textit{obligations} (like the obligation to
respect traffic laws), and refrain from impermissible behavior (like recklessly
changing lanes). Temporal logics are heavily used in autonomous system design;
however, as we illustrate here, temporal (alethic) logics alone are
inappropriate for reasoning about obligations of autonomous systems. This paper
proposes the use of Dominance Act Utilitarianism (DAU), a deontic logic of
agency, to encode and reason about obligations of autonomous systems. We use
DAU to analyze Intel's Responsibility-Sensitive Safety (RSS) proposal as a
real-world case study. We demonstrate that DAU can express well-posed RSS
rules, formally derive undesirable consequences of these rules, illustrate how
DAU could help design systems with specific obligations, and show how
DAU obligations can be model-checked.
Comment: 11 pages, 4 figures, in 23rd ACM International Conference on Hybrid Systems: Computation and Control
End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners
For human drivers, having rear and side-view mirrors is vital for safe
driving. They deliver a more complete view of what is happening around the car.
Human drivers also heavily exploit their mental map for navigation.
Nonetheless, several methods have been published that learn driving models with
only a front-facing camera and without a route planner. This lack of
information renders the self-driving task quite intractable. We investigate the
problem in a more realistic setting, which consists of a surround-view camera
system with eight cameras, a route planner, and a CAN bus reader. In
particular, we develop a sensor setup that provides data for a 360-degree view
of the area surrounding the vehicle, the driving route to the destination, and
low-level driving maneuvers (e.g. steering angle and speed) by human drivers.
With such a sensor setup we collect a new driving dataset, covering diverse
driving scenarios and varying weather/illumination conditions. Finally, we
learn a novel driving model by integrating information from the surround-view
cameras and the route planner. Two route-planner representations are exploited:
1) the planned routes on OpenStreetMap, represented as a stack of GPS
coordinates, and 2) the planned routes rendered on TomTom Go Mobile, with the
progression recorded as a video. Our experiments show that: 1) 360-degree
surround-view cameras help avoid failures made with a single front-view camera,
in particular for city driving and intersection scenarios; and 2) route
planners help the driving task significantly, especially for steering angle
prediction.
Comment: to be published at ECCV 201