Realtime Vehicle Route Optimisation via DQN for Sustainable and Resilient Urban Transportation Network

Abstract

Traffic congestion has become one of the most serious issues for contemporary urban transportation networks, as it leads to unnecessarily high energy consumption, air pollution, and additional travel time. Over the past decade, many optimisation algorithms have been designed to make optimal use of existing roadway capacity in cities and alleviate the problem. However, it remains a challenging task for vehicles to interact with the complex city environment in real time. In this thesis, we propose a deep reinforcement learning (DRL) method to build a real-time intelligent vehicle navigation system for a sustainable and resilient urban transportation network. We design two reward schemes, a travel-time-based scheme and a vehicle emissions impact (VEI) based scheme, which aim respectively to reduce travel time for emergency vehicles (resilience) and to reduce vehicle emissions for general vehicles (sustainability). In the experiments, several realistic traffic scenarios are simulated with SUMO to test the proposed navigation method. The experimental results demonstrate efficient convergence of the vehicle navigation agents and their effectiveness in making optimal decisions under volatile traffic conditions. The travel-time-based reward scheme performs better at reducing travel time, whereas the VEI-based scheme achieves better results in reducing vehicle emissions. Furthermore, the results show that the proposed method has considerable potential to provide a better navigation solution compared with the benchmark routing optimisation algorithms.
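The abstract does not give the exact reward formulations. As a minimal illustrative sketch only, the two schemes might be expressed as per-step penalties along the lines below; the function names, emission terms, and weights are assumptions for illustration, not the thesis's actual definitions. (SUMO's TraCI interface does expose per-vehicle readings such as traci.vehicle.getCO2Emission, which could supply the emission inputs.)

```python
# Illustrative sketch only: hypothetical per-step rewards for the two reward
# schemes described in the abstract. Names and weights are assumptions.

def travel_time_reward(step_travel_time_s: float) -> float:
    """Penalise elapsed travel time so the agent prefers faster routes
    (resilience objective for emergency vehicles)."""
    return -step_travel_time_s


def vei_reward(co2_mg: float, nox_mg: float, pmx_mg: float,
               w_co2: float = 1.0, w_nox: float = 1.0, w_pmx: float = 1.0) -> float:
    """Penalise a weighted sum of per-step emissions
    (sustainability objective for general vehicles)."""
    return -(w_co2 * co2_mg + w_nox * nox_mg + w_pmx * pmx_mg)
```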
