Learning to drive via Apprenticeship Learning and Deep Reinforcement Learning
With the implementation of reinforcement learning (RL) algorithms, current
state-of-the-art autonomous vehicle technology has the potential to move closer
to full automation. However, most applications have been limited to game
domains or discrete action spaces, which are far from real-world driving.
Moreover, tuning the parameters of the reward mechanism is difficult because
driving styles vary widely among users. For instance, an aggressive driver may
prefer driving with high acceleration, whereas a conservative driver prefers a
safer driving style. Therefore, we propose an approach combining apprenticeship
learning with deep reinforcement learning that allows the agent to learn
driving and stopping behaviors with continuous actions. We use the gradient
inverse reinforcement learning (GIRL) algorithm to recover the unknown reward
function, and employ REINFORCE as well as the Deep Deterministic Policy
Gradient (DDPG) algorithm to learn the optimal policy. The performance of our
method is evaluated in a simulation-based scenario, and the results demonstrate
that, after training, the agent drives in a human-like manner and even better
in some aspects.
Comment: 7 pages, 11 figures, conference
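The abstract names REINFORCE as one of the policy-gradient methods used for continuous actions. As a minimal illustration of that idea (not the paper's code), the sketch below runs REINFORCE with a Gaussian policy over a one-dimensional continuous action, e.g. an acceleration command; the linear state features, fixed standard deviation, and toy reward are all assumptions made for the example.

```python
import numpy as np

# Illustrative REINFORCE sketch (not the paper's implementation):
# Gaussian policy pi(a|s) = N(theta @ s, sigma^2) over one continuous action.

rng = np.random.default_rng(0)

def reinforce_step(theta, states, actions, returns, sigma=0.5, lr=1e-2):
    """One REINFORCE update: theta += lr * sum_t G_t * grad log pi(a_t|s_t)."""
    grad = np.zeros_like(theta)
    for s, a, G in zip(states, actions, returns):
        mean = theta @ s
        # gradient of log N(a; mean, sigma^2) with respect to theta
        grad += G * (a - mean) / sigma**2 * s
    return theta + lr * grad

# Toy task (hypothetical reward, for illustration only): the "expert"
# action is a = 2*s, and reward penalizes squared deviation from it.
theta = np.zeros(1)
for _ in range(500):
    states = rng.uniform(-1, 1, size=(10, 1))
    actions = states @ theta + rng.normal(0, 0.5, size=10)
    returns = -(actions - 2 * states[:, 0]) ** 2
    theta = reinforce_step(theta, states, actions, returns)

print(float(theta[0]))  # should approach 2.0
```

DDPG follows the same continuous-action motivation but learns a deterministic actor plus a critic; the stochastic REINFORCE update above is only the simpler of the two methods the abstract mentions.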
Modeling driver's evasive behavior during safety-critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning
Lane changes are complex driving behaviors and frequently involve
safety-critical situations. This study aims to develop a lane-change-related
evasive behavior model, which can facilitate the development of safety-aware
traffic simulations and predictive collision avoidance systems. Large-scale
connected vehicle data from the Safety Pilot Model Deployment (SPMD) program
were used for this study. A new surrogate safety measure, two-dimensional
time-to-collision (2D-TTC), was proposed to identify the safety-critical
situations during lane changes. The validity of 2D-TTC was confirmed by showing
a high correlation between the detected conflict risks and the archived
crashes. A deep deterministic policy gradient (DDPG) algorithm, which could
learn the sequential decision-making process over continuous action spaces, was
used to model the evasive behaviors in the identified safety-critical
situations. The results showed the superiority of the proposed model in
replicating both longitudinal and lateral evasive behaviors.
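The abstract introduces 2D-TTC as a surrogate safety measure but does not spell out its formula. As a hedged illustration of the general idea, the sketch below computes a simple two-dimensional time-to-collision for two vehicles approximated as circles moving at constant velocity; the circle approximation and the `combined_radius` parameter are assumptions for the example, not the paper's exact formulation.

```python
import math

def ttc_2d(p1, v1, p2, v2, combined_radius=2.0):
    """Smallest non-negative time at which the gap between two
    constant-velocity point vehicles closes to `combined_radius`,
    or math.inf if no collision is predicted."""
    px, py = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    # Solve |p + v*t| = r  =>  (v.v) t^2 + 2 (p.v) t + (p.p - r^2) = 0
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - combined_radius ** 2
    if a == 0.0:
        return 0.0 if c <= 0.0 else math.inf   # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return math.inf                        # paths never come within r
    t1 = (-b - math.sqrt(disc)) / (2.0 * a)
    t2 = (-b + math.sqrt(disc)) / (2.0 * a)
    if t2 < 0.0:
        return math.inf                        # closest approach is in the past
    return max(t1, 0.0)

# Head-on example: a 20 m gap closing at 10 m/s with a 2 m combined radius
print(ttc_2d((0, 0), (10, 0), (20, 0), (0, 0)))  # 1.8
```

Unlike classical one-dimensional TTC, a formulation of this kind captures lateral as well as longitudinal closing motion, which is what makes it usable for lane-change conflicts.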
The Application of Driver Models in the Safety Assessment of Autonomous Vehicles: A Survey
Driver models play a vital role in developing and verifying autonomous
vehicles (AVs). Previously, they were mainly applied in traffic flow simulation
to model realistic driver behavior. With the development of AVs, driver models
have attracted renewed attention due to their potential contributions to AV
certification. Simulation-based testing is considered an effective measure to
accelerate AV testing because it is safe and efficient. Nonetheless, realistic
driver models are prerequisites for valid simulation results. Additionally, an
AV is expected to be at least as safe as a careful and competent driver, so
driver models are indispensable for AV safety assessment. However, despite
their necessity for the release of AVs, no comparison or discussion of driver
models with regard to their utility for AVs has appeared in the last five
years. This motivates us to present a comprehensive survey of driver models in
this paper and to compare their applicability. Requirements for driver models
in terms of their application to AV safety assessment are discussed. A summary
of driver models for simulation-based testing and AV certification is provided.
Evaluation metrics are defined to compare their strengths and weaknesses.
Finally, an architecture for a careful and competent driver model is proposed.
Challenges and future work are elaborated. This study gives related
researchers, especially regulators, an overview and helps them define
appropriate driver models for AVs.