Tackling Occlusions & Limited Sensor Range with Set-based Safety Verification
Provable safety is one of the most critical challenges in automated driving.
The behavior of numerous traffic participants in a scene cannot be predicted
reliably due to complex interdependencies and the unpredictable behavior of
humans. Additionally, we face high uncertainty and incomplete environment
knowledge. Recent approaches minimize risk with probabilistic and
machine learning methods - even under occlusions. These generate comfortable
behavior with good traffic flow, but cannot guarantee the safety of their
maneuvers.
Therefore, we contribute a safety verification method for trajectories under
occlusions. The field-of-view of the ego vehicle and a map are used to identify
critical sensing field edges, each representing a potentially hidden obstacle.
The state of occluded obstacles is unknown, but can be over-approximated by
intervals over all possible states.
Then set-based methods are extended to provide occupancy predictions for
obstacles with state intervals. The proposed method can verify the safety of
given trajectories (e.g. if they ensure collision-free fail-safe maneuver
options) w.r.t. arbitrary safe-state formulations. The potential for provably
safe trajectory planning is shown in three evaluative scenarios.
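The core idea above - over-approximating an occluded obstacle by an interval over all its possible states and propagating that interval forward in time - can be sketched as follows. This is a minimal 1D illustration, not the paper's method: the lane geometry, occluded-region length, and the speed and acceleration bounds are all illustrative assumptions.

```python
# Hedged sketch: set-based safety check against one hidden obstacle.
# A sensing-field edge at position edge_x bounds the visible region; an
# obstacle may lurk anywhere in the occluded stretch behind it. Its state
# is unknown, so we over-approximate it with intervals and verify that an
# ego trajectory never intersects the obstacle's reachable occupancy.

def reachable_interval(x_lo, x_hi, v_lo, v_hi, a_max, t):
    """Over-approximate the positions reachable by time t along a lane.

    The obstacle may start anywhere in [x_lo, x_hi] with speed in
    [v_lo, v_hi] and accelerate by at most a_max (no reversing).
    """
    lo = x_lo + v_lo * t                        # slowest possible evolution
    hi = x_hi + v_hi * t + 0.5 * a_max * t ** 2  # fastest possible evolution
    return lo, hi

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def trajectory_safe(ego_traj, edge_x, v_max, a_max,
                    occluded_len=50.0, ego_len=4.5):
    """Verify an ego trajectory [(t, x), ...] against a hidden obstacle
    known only to lie in the occluded region behind edge_x."""
    for t, x in ego_traj:
        occ = reachable_interval(edge_x - occluded_len, edge_x,
                                 0.0, v_max, a_max, t)
        ego = (x - ego_len / 2, x + ego_len / 2)
        if intervals_overlap(occ, ego):
            return False  # a hidden obstacle could reach the ego here
    return True
```

Because every possible hidden state is contained in the interval, a trajectory reported safe is collision-free for any obstacle consistent with the occlusion; the price of the over-approximation is conservatism.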
Socially-Compatible Behavior Design of Autonomous Vehicles with Verification on Real Human Data
As more and more autonomous vehicles (AVs) are being deployed on public
roads, designing socially compatible behaviors for them is becoming
increasingly important. In order to generate safe and efficient actions, AVs
need to not only predict the future behaviors of other traffic participants,
but also be aware of the uncertainties associated with such behavior
prediction. In this paper, we propose an uncertainty-aware integrated prediction
and planning (UAPP) framework. It allows the AVs to infer the characteristics
of other road users online and generate behaviors optimizing not only their own
rewards, but also their courtesy to others, and their confidence regarding the
prediction uncertainties. We first propose the definitions for courtesy and
confidence. Based on that, their influences on the behaviors of AVs in
interactive driving scenarios are explored. Moreover, we evaluate the proposed
algorithm on naturalistic human driving data by comparing the generated
behavior against ground truth. Results show that the online inference can
significantly improve the human-likeness of the generated behaviors.
Furthermore, we find that human drivers show great courtesy to others, even for
those without right-of-way. We also find that such driving preferences vary
significantly in different cultures.
Comment: Accepted by IEEE Robotics and Automation Letters. January 202
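The trade-off described above - optimizing the ego vehicle's own reward together with its courtesy to other road users - can be illustrated with a small sketch. This is not the UAPP formulation itself: the courtesy definition, reward values, and weight below are illustrative assumptions chosen to show the mechanism.

```python
# Hedged sketch of a courtesy-augmented reward, in the spirit of the
# UAPP framework. Courtesy is modeled as the reward loss the ego's
# action inflicts on another agent, relative to a hypothetical world
# in which the ego were absent. All numbers are made up for illustration.

def courtesy(other_reward_with_ego, other_reward_without_ego):
    """Penalty for the harm the ego causes the other agent (<= 0)."""
    return -max(0.0, other_reward_without_ego - other_reward_with_ego)

def total_reward(ego_reward, other_with, other_without, w_courtesy=0.5):
    """Ego reward plus a weighted courtesy term."""
    return ego_reward + w_courtesy * courtesy(other_with, other_without)

# Two candidate ego actions in an interactive scenario: an aggressive
# one that gains ego reward at the other's expense, and a polite one.
aggressive = total_reward(ego_reward=1.0, other_with=-0.8, other_without=0.2)
polite = total_reward(ego_reward=0.6, other_with=0.1, other_without=0.2)
```

With a sufficiently large courtesy weight, the polite action scores higher overall even though its raw ego reward is lower, which is how such a term can reproduce the courteous behavior observed in human driving data.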