Tackling Occlusions & Limited Sensor Range with Set-based Safety Verification
Provable safety is one of the most critical challenges in automated driving.
The behavior of numerous traffic participants in a scene cannot be predicted
reliably due to complex interdependencies and the indiscriminate behavior of
humans. Additionally, we face high uncertainties and incomplete
environment knowledge. Recent approaches minimize risk with probabilistic and
machine learning methods - even under occlusions. These generate comfortable
behavior with good traffic flow, but cannot guarantee the safety of their
maneuvers.
Therefore, we contribute a safety verification method for trajectories under
occlusions. The field-of-view of the ego vehicle and a map are used to identify
critical sensing field edges, each representing a potentially hidden obstacle.
The state of occluded obstacles is unknown, but can be over-approximated by
intervals over all possible states.
Then set-based methods are extended to provide occupancy predictions for
obstacles with state intervals. The proposed method can verify the safety of
given trajectories (e.g. if they ensure collision-free fail-safe maneuver
options) w.r.t. arbitrary safe-state formulations. The potential for provably
safe trajectory planning is shown in three evaluative scenarios.
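The core idea of over-approximating a hidden obstacle by an interval over all possible states can be illustrated as follows. This is a minimal 1D longitudinal sketch, not the paper's implementation: the sensing-field edge position, velocity and acceleration bounds, and the safety margin are all illustrative assumptions.

```python
# Hedged sketch: set-based occupancy prediction for an obstacle hidden
# behind a sensing-field edge, reduced to 1D motion along a lane.
# v_max, a_max, and safety_margin are illustrative assumptions.

def occluded_occupancy(x_edge, t, v_max=13.9, a_max=3.0):
    """Over-approximate the positions a hidden obstacle could occupy at time t.

    The obstacle may start anywhere behind the sensing-field edge x_edge
    with any speed in [0, v_max]; in the worst case it accelerates at
    a_max until it reaches v_max.
    """
    # Time the worst-case obstacle needs to reach v_max from standstill.
    t_acc = min(t, v_max / a_max)
    travel = 0.5 * a_max * t_acc**2 + v_max * (t - t_acc)
    # Only the advancing front bound matters for verification ahead of
    # the edge; the rear extends arbitrarily far behind it.
    return (float("-inf"), x_edge + travel)

def trajectory_safe(ego_positions, dt, x_edge, safety_margin=5.0):
    """Verify that the ego trajectory stays clear of the occluded occupancy."""
    for k, x_ego in enumerate(ego_positions):
        _, front = occluded_occupancy(x_edge, k * dt)
        if x_ego - safety_margin <= front:
            return False
    return True
```

Because the occupancy strictly over-approximates every admissible hidden obstacle, a trajectory verified this way is safe for the real, unknown obstacle as well.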
Minimizing Safety Interference for Safe and Comfortable Automated Driving with Distributional Reinforcement Learning
Despite recent advances in reinforcement learning (RL), its application in
safety critical domains like autonomous vehicles is still challenging. Although
punishing RL agents for risky situations can help to learn safe policies, it
may also lead to highly conservative behavior. In this paper, we propose a
distributional RL framework in order to learn adaptive policies that can tune
their level of conservativity at run-time based on the desired comfort and
utility. Using a proactive safety verification approach, the proposed framework
can guarantee that actions generated from RL are fail-safe according to the
worst-case assumptions. Concurrently, the policy is encouraged to minimize
safety interference and generate more comfortable behavior. We trained and
evaluated the proposed approach and baseline policies using a high level
simulator with a variety of randomized scenarios, including several corner
cases that rarely occur in reality but are crucial for safety. In light of our
experiments, the behavior of policies learned using distributional RL can be
adaptive at run-time and robust to the environment uncertainty. Quantitatively,
the learned distributional RL agent drives on average 8 seconds faster than the
normal DQN policy and requires 83% less safety interference than the
rule-based policy, while only slightly increasing the average crossing time. We
also study the sensitivity of the learned policy in environments with higher
perception noise and show that our algorithm learns policies that can still
drive reliably when the perception noise is twice as high as in the training
configuration, for automated merging and crossing at occluded intersections.
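The run-time tunable conservativity described above can be sketched with quantile-based distributional Q-values: acting on the mean of all quantiles is risk-neutral, while acting on the lowest quantiles (a CVaR criterion) is conservative. The array shapes and numbers below are illustrative placeholders; the actual framework additionally couples action selection with a safety verification layer.

```python
import numpy as np

# Hedged sketch: adjustable conservativity with per-action return
# quantiles (QR-DQN-style). All values are illustrative assumptions.

def select_action(quantiles, risk_level=1.0):
    """Pick an action from per-action return quantiles.

    quantiles: array of shape (n_actions, n_quantiles), sorted per action.
    risk_level in (0, 1]: 1.0 averages all quantiles (risk-neutral);
    smaller values average only the lowest fraction (CVaR, conservative).
    """
    n_actions, n_quantiles = quantiles.shape
    k = max(1, int(round(risk_level * n_quantiles)))
    # CVaR: expectation over the worst k quantiles of each action.
    cvar = quantiles[:, :k].mean(axis=1)
    return int(np.argmax(cvar))

# Action 0 has the higher mean return but a heavy downside tail;
# action 1 is mediocre but consistent.
q = np.array([[-10.0, 2.0, 8.0, 9.0],
              [  1.0, 1.5, 2.0, 2.5]])
aggressive = select_action(q, risk_level=1.0)   # risk-neutral choice
cautious = select_action(q, risk_level=0.25)    # conservative choice
```

Because `risk_level` is an input rather than a training-time constant, the same learned distribution supports different comfort/utility trade-offs at run-time.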
Decision-theoretic MPC: Motion Planning with Weighted Maneuver Preferences Under Uncertainty
Continuous optimization based motion planners require deciding on a maneuver
homotopy before optimizing the trajectory. Under uncertainty, maneuver
intentions of other participants can be unclear, and the vehicle might not be
able to decide on the most suitable maneuver. This work introduces a method
that incorporates multiple maneuver preferences in planning. It optimizes the
trajectory by considering weighted maneuver preferences together with
uncertainties ranging from perception to prediction while ensuring the
feasibility of a chance-constrained fallback option. Evaluations in both
driving experiments and simulation studies show enhanced interaction
capabilities and comfort levels compared to conventional planners, which
consider only a single maneuver.
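The weighted-preference idea can be sketched as scoring candidate trajectories against several maneuver hypotheses at once, weighted by their estimated probabilities. The cost functions, weights, and feasibility check below are illustrative assumptions; the actual method optimizes a single continuous trajectory over all weighted preferences and enforces a chance-constrained fallback.

```python
# Hedged sketch: trajectory selection under weighted maneuver preferences.
# Candidates are abstracted to scalar decision variables for illustration.

def expected_cost(trajectory, maneuvers):
    """maneuvers: list of (weight, cost_fn) pairs, weights summing to 1."""
    return sum(w * cost_fn(trajectory) for w, cost_fn in maneuvers)

def best_trajectory(candidates, maneuvers, fallback_feasible):
    """Lowest expected-cost candidate that still admits a fallback option."""
    feasible = [t for t in candidates if fallback_feasible(t)]
    return min(feasible, key=lambda t: expected_cost(t, maneuvers))

# Two maneuver hypotheses with illustrative target offsets +1 and -1.
maneuvers = [(0.7, lambda t: abs(t - 1.0)),   # likely maneuver
             (0.3, lambda t: abs(t + 1.0))]   # less likely alternative
plan = best_trajectory([-1.0, 0.0, 1.0], maneuvers, lambda t: True)
```

Weighting the preferences rather than committing to one homotopy lets the planned trajectory hedge between maneuvers until the other participants' intentions become clear.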
Decision-Making for Automated Vehicles Using a Hierarchical Behavior-Based Arbitration Scheme
Behavior planning and decision-making are some of the biggest challenges for
highly automated systems. A fully automated vehicle (AV) is confronted with
numerous tactical and strategical choices. Most state-of-the-art AV platforms
implement tactical and strategical behavior generation using finite state
machines. However, these usually result in poor explainability, maintainability
and scalability. Research in robotics has produced many architectures to mitigate
these problems, most interestingly behavior-based systems and hybrid
derivatives. Inspired by these approaches, we propose a hierarchical
behavior-based architecture for tactical and strategical behavior generation in
automated driving. It is a generalizing and scalable decision-making framework,
utilizing modular behavior blocks to compose more complex behaviors in a
bottom-up approach. The system is capable of combining a variety of scenario-
and methodology-specific solutions, like POMDPs, RRT* or learning-based
behavior, into one understandable and traceable architecture. We extend the
hierarchical behavior-based arbitration concept to address scenarios where
multiple behavior options are applicable but have no clear priority against
each other. Then, we formulate the behavior generation stack for automated
driving in urban and highway environments, incorporating parking and emergency
behaviors as well. Finally, we illustrate our design in an explanatory
evaluation.
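The bottom-up composition of modular behavior blocks can be sketched as follows. Behavior names and the situation representation are illustrative; the actual architecture also supports arbitration schemes other than strict priority (e.g. cost-based) for options without a clear priority ordering.

```python
# Hedged sketch: a priority-based arbitrator over modular behavior blocks.
# An arbitrator is itself a behavior, so hierarchies compose bottom-up.

class Behavior:
    def __init__(self, name, applicable, command):
        self.name = name
        self._applicable = applicable  # predicate over the situation
        self._command = command        # command issued when selected

    def applicable(self, situation):
        return self._applicable(situation)

    def get_command(self, situation):
        return self._command

class PriorityArbitrator(Behavior):
    """Selects the first applicable option from a priority-ordered list."""
    def __init__(self, name, options):
        self.name = name
        self.options = options  # highest priority first

    def applicable(self, situation):
        return any(o.applicable(situation) for o in self.options)

    def get_command(self, situation):
        for option in self.options:
            if option.applicable(situation):
                return option.get_command(situation)
        raise RuntimeError("no applicable behavior")

# Illustrative stack: emergency behavior takes precedence over nominal driving.
emergency = Behavior("EmergencyStop", lambda s: s["hazard"], "full_brake")
urban = Behavior("UrbanDriving", lambda s: True, "follow_route")
root = PriorityArbitrator("Root", [emergency, urban])
```

Because each block exposes only an applicability check and a command, individual behaviors (a POMDP policy, an RRT* planner, a learned policy) can be swapped without touching the rest of the hierarchy.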
Reducing Safety Interventions in Provably Safe Reinforcement Learning
Deep Reinforcement Learning (RL) has shown promise in addressing complex
robotic challenges. In real-world applications, RL is often accompanied by
failsafe controllers as a last resort to avoid catastrophic events. While
necessary for safety, these interventions can result in undesirable behaviors,
such as abrupt braking or aggressive steering. This paper proposes two safety
intervention reduction methods: proactive replacement and proactive projection,
which change the action of the agent if it leads to a potential failsafe
intervention. These approaches are compared to state-of-the-art constrained RL
on the OpenAI safety gym benchmark and a human-robot collaboration task. Our
study demonstrates that the combination of our method with provably safe RL
leads to high-performing policies with zero safety violations and a low number
of failsafe interventions. Our versatile method can be applied to a wide range
of real-world robotic tasks, while effectively improving safety without
sacrificing task performance.
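The two proposed mechanisms can be sketched in spirit on a scalar action space: before execution, check whether the agent's action would trigger the failsafe layer; if so, either swap in the best verified-safe alternative (proactive replacement) or map it to the nearest safe action (proactive projection). The safety predicate, action set, and value function below are illustrative placeholders, not the paper's implementation.

```python
# Hedged sketch: the two safety intervention reduction ideas on a
# scalar action space. All names here are illustrative assumptions.

def proactive_replacement(action, safe_actions, is_safe, q_value):
    """Swap a would-be-unsafe action for the highest-value safe one."""
    if is_safe(action):
        return action
    return max((a for a in safe_actions if is_safe(a)), key=q_value)

def proactive_projection(action, safe_actions, is_safe):
    """Map a would-be-unsafe action to the nearest safe action."""
    if is_safe(action):
        return action
    return min((a for a in safe_actions if is_safe(a)),
               key=lambda a: abs(a - action))
```

Replacement optimizes value among safe actions, while projection stays as close as possible to the agent's intent; both avoid the abrupt last-resort intervention of the failsafe controller itself.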
Probabilistic Motion Planning for Automated Vehicles
This thesis targets the problem of motion planning for automated vehicles. As a prerequisite for their on-road deployment, automated vehicles must show an appropriate and reliable driving behavior in mixed traffic, i.e. alongside human drivers. Besides the uncertainties resulting from imperfect perception, occlusions and limited sensor range, also the uncertainties in the behavior of other traffic participants have to be considered.
Related approaches for motion planning in mixed traffic often employ a deterministic problem formulation. The solution of such formulations is restricted to a single trajectory. Deviations from the prediction of other traffic participants are accounted for during replanning, while large uncertainties lead to conservative and over-cautious behavior. As a result of the shortcomings of these formulations in cooperative scenarios and scenarios with severe uncertainties, probabilistic approaches are pursued. Due to the need for real-time capability, however, a holistic uncertainty treatment often induces a strong limitation of the action space of automated vehicles. Moreover, safety and traffic rule compliance are often not considered.
Thus, in this work, three motion planning approaches and a scenario-based safety approach are presented. The safety approach is based on an existing concept, which targets the guarantee that automated vehicles will never cause accidents. This concept is enhanced by the consideration of traffic rules for crossing and merging traffic, occlusions, limited sensor range and lane changes. The three presented motion planning approaches are targeted towards the different predominant uncertainties in different scenarios, while operating in a continuous action space.
For non-interactive scenarios with clear precedence, a probabilistic approach is presented. The problem is modeled as a partially observable Markov decision process (POMDP). In contrast to existing approaches, the underlying assumption is that the prediction of the future progression of the uncertainty in the behavior of other traffic participants can be performed independently of the automated vehicle's motion plan. In addition to this prediction of currently visible traffic participants, the influence of occlusions and limited sensor range is considered. Despite its thorough uncertainty consideration, the presented approach facilitates planning in a continuous action space.
Two further approaches are targeted towards the predominant uncertainties in interactive scenarios. In order to facilitate lane changes in dense traffic, a rule-based approach is proposed. The latter seeks to actively reduce the uncertainty in whether other vehicles willingly make room for a lane change. The generated trajectories are safe and traffic rule compliant with respect to the presented safety approach. To facilitate cooperation in scenarios without clear precedence, a multi-agent approach is presented. The globally optimal solution to the multi-agent problem is first analyzed regarding its ambiguity. If an unambiguous, cooperative solution is found, it is pursued. Still, the compliance of other vehicles with the presumed cooperation model is checked, and a conservative fallback trajectory is pursued in case of non-compliance.
The performance of the presented approaches is shown in various scenarios with intersecting lanes, partly with limited visibility, as well as lane changes and a narrowing without predefined right of way.
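The compliance check in the multi-agent approach can be sketched as follows: the cooperative plan is pursued while the other vehicles' observed behavior matches the presumed cooperation model, and the conservative fallback trajectory is pursued otherwise. The deviation metric and tolerance are illustrative assumptions.

```python
# Hedged sketch: monitoring compliance with a presumed cooperation model
# and switching to a conservative fallback on violation.

def choose_plan(cooperative_plan, fallback_plan, observed, predicted,
                tolerance=1.0):
    """Return the cooperative plan while observations match the prediction."""
    deviation = max(abs(o - p) for o, p in zip(observed, predicted))
    return cooperative_plan if deviation <= tolerance else fallback_plan
```

Keeping the fallback trajectory feasible at all times is what allows the vehicle to commit to the cooperative solution without sacrificing the safety guarantee.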