A New Approach for Tactical Decision Making in Lane Changing: Sample Efficient Deep Q Learning with a Safety Feedback Reward

By M. Ugur Yavas, N. Kemal Ure and Tufan Kumbasar

Abstract

Automated lane changing is one of the most challenging tasks for highly automated vehicles due to its safety-critical, uncertain, and multi-agent nature. This paper presents a novel deployment of a state-of-the-art Q-learning method, Rainbow DQN, that uses a new safety-driven rewarding scheme to tackle these issues in a dynamic and uncertain simulation environment. We present comparative results showing that our novel approach of taking reward feedback from the safety layer dramatically increases both the agent's performance and its sample efficiency. Furthermore, through the deployment of Rainbow DQN, we show that additional intuition about the agent's actions can be extracted by examining the distributions of the generated Q-values. The proposed algorithm outperforms the baseline in challenging scenarios with only 200,000 training steps (equivalent to 55 hours of driving).
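The abstract describes shaping the reward with feedback from a safety layer. A minimal sketch of that idea is shown below; the wrapper interface, the `safety_override` flag, and the penalty value are illustrative assumptions, not the authors' implementation:

```python
class SafetyFeedbackReward:
    """Sketch of reward shaping with feedback from a safety layer.

    Assumption: the safety layer reports whether it had to override
    the agent's proposed lane-change action; the penalty magnitude
    here is a hypothetical choice, not taken from the paper.
    """

    def __init__(self, base_reward_fn, safety_penalty=-1.0):
        self.base_reward_fn = base_reward_fn
        self.safety_penalty = safety_penalty

    def __call__(self, state, action, safety_override):
        # Start from the ordinary driving reward (e.g. speed tracking).
        reward = self.base_reward_fn(state, action)
        # If the safety layer vetoed the agent's action, add a penalty
        # so the policy learns to avoid proposing unsafe lane changes.
        if safety_override:
            reward += self.safety_penalty
        return reward


# Usage: a dummy base reward that favors keeping up speed (normalized).
base = lambda state, action: state.get("speed", 0.0) / 30.0
shaped = SafetyFeedbackReward(base, safety_penalty=-1.0)

r_safe = shaped({"speed": 25.0}, action=1, safety_override=False)
r_unsafe = shaped({"speed": 25.0}, action=1, safety_override=True)
```

Because the penalty arrives on every intervention rather than only after a collision, the learner gets a dense training signal, which is one plausible reading of the sample-efficiency gain reported in the abstract.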

Topics: Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Robotics
Year: 2020
OAI identifier: oai:arXiv.org:2009.11905
