7 research outputs found

    Adaptive Coordination Offsets for Signalized Arterial Intersections using Deep Reinforcement Learning

    One of the most critical components of an urban transportation system is the coordination of intersections in arterial networks. With the advent of data-driven approaches to traffic control, deep reinforcement learning (RL) has gained significant traction in traffic control research. Most proposed deep RL solutions are designed to directly modify either phase order or phase timings; such approaches can lead to unfair situations, such as bypassing low-volume links for several cycles, in the name of optimizing traffic flow. To address these fairness and feasibility concerns, we propose a deep RL framework that dynamically adjusts signal offsets based on traffic states while preserving the planned phase timings and order derived from model-based methods. This framework improves arterial coordination while preserving fairness for the competing streams of traffic at each intersection. Using a validated and calibrated traffic model, we trained the policy of a deep RL agent to reduce travel delays in the network. We evaluated the resulting policy by comparing its performance against the phase offsets obtained from a state-of-the-practice baseline, SYNCHRO. The resulting policy dynamically readjusts phase offsets in response to changes in traffic demand. Simulation results show that the proposed deep RL agent outperformed SYNCHRO on average, reducing delay time by 13.21% in the AM scenario, 2.42% in the noon scenario, and 6.2% in the PM scenario. Finally, we also demonstrate the robustness of our agent to extreme traffic conditions, such as demand surges and localized traffic incidents.
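
    The abstract describes an agent that shifts coordination offsets while keeping the model-based phase plan fixed. The paper's code is not included on this page; the sketch below is a hypothetical illustration of that idea in PyTorch, with the state dimension, cycle length, number of intersections, and maximum per-step shift all assumed for the example rather than taken from the paper.

```python
# Hypothetical sketch, not the paper's implementation: a small policy network that
# maps an observed traffic state to bounded offset adjustments (in seconds) while
# leaving the planned phase order and split timings untouched.
import torch
import torch.nn as nn

CYCLE_LENGTH_S = 120.0   # assumed common cycle length along the arterial
N_INTERSECTIONS = 4      # assumed number of coordinated intersections
STATE_DIM = 16           # assumed traffic-state features (queues, occupancies, ...)

class OffsetPolicy(nn.Module):
    """Outputs a per-intersection offset shift bounded by max_shift_s."""
    def __init__(self, state_dim: int, n_intersections: int, max_shift_s: float = 10.0):
        super().__init__()
        self.max_shift_s = max_shift_s
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_intersections), nn.Tanh(),  # squashes output to [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Small bounded shifts keep coordination changes gradual between decisions.
        return self.max_shift_s * self.net(state)

def apply_offsets(offsets: torch.Tensor, shifts: torch.Tensor) -> torch.Tensor:
    # Offsets wrap around the fixed cycle; the phase plan itself is never modified.
    return torch.remainder(offsets + shifts, CYCLE_LENGTH_S)

policy = OffsetPolicy(STATE_DIM, N_INTERSECTIONS)
state = torch.randn(STATE_DIM)                    # placeholder traffic observation
offsets = torch.tensor([0.0, 30.0, 55.0, 90.0])   # e.g. initial offsets from a fixed plan
print(apply_offsets(offsets, policy(state)))
```

    In a full system, a policy of this shape would presumably be trained with a standard deep RL algorithm against a calibrated traffic simulator, using reduced network delay as the reward, as the abstract outlines.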

    Improving Traffic Efficiency in a Road Network by Adopting Decentralised Multi-Agent Reinforcement Learning and Smart Navigation

    In the future, mixed traffic flow will consist of human-driven vehicles (HDVs) and connected autonomous vehicles (CAVs). Effective traffic management is a global challenge, especially in urban areas with many intersections, and much research has focused on improving the performance of intersection networks. Reinforcement learning (RL) is a newer approach to optimising traffic signals that overcomes the disadvantages of traditional methods. In this paper, we propose an integrated approach that combines the multi-agent advantage actor-critic (MA-A2C) algorithm with smart navigation (SN) to relieve congestion in a road network under mixed traffic conditions. The A2C algorithm combines the strengths of value-based and policy-based methods, stabilising training by reducing variance, and it overcomes the limitations of both centralised and independent MARL. In addition, the SN technique reroutes traffic onto alternate paths to avoid congestion at intersections. To evaluate the robustness of our approach, we compare our model against independent A2C (I-A2C) and max pressure (MP) baselines. The results show that the proposed approach performs more efficiently than these baselines in terms of average waiting time, speed, and queue length. The simulation results also suggest that the model is effective when the CAV penetration rate exceeds 20%.
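
    As a rough illustration of why the advantage term stabilises training, the sketch below shows a minimal single-agent A2C loss in PyTorch. The loss form, discount factor, and dummy tensors are assumptions made for this example, not the MA-A2C implementation, which (per the abstract) runs decentralised agents, presumably one per intersection.

```python
# Hypothetical sketch, not the authors' code: the variance-reduction idea behind A2C.
# The critic's value estimate V(s) acts as a baseline, so the actor is updated with the
# advantage A = r + gamma * V(s') - V(s) rather than the raw return.
import torch

def a2c_losses(log_prob, value, next_value, reward, gamma=0.99):
    """Actor and critic losses for a batch of one-step transitions."""
    td_target = reward + gamma * next_value.detach()
    advantage = td_target - value                          # TD error used as the advantage
    actor_loss = -(log_prob * advantage.detach()).mean()   # policy gradient with a baseline
    critic_loss = advantage.pow(2).mean()                  # regress V(s) toward the TD target
    return actor_loss, critic_loss

# Dummy batch of 8 transitions, just to show the shapes involved.
log_prob = torch.randn(8, requires_grad=True)   # log pi(a|s) from the actor
value = torch.randn(8, requires_grad=True)      # V(s) from the critic
next_value, reward = torch.randn(8), torch.randn(8)
actor_loss, critic_loss = a2c_losses(log_prob, value, next_value, reward)
(actor_loss + 0.5 * critic_loss).backward()
```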

    Cooperative Deep Q-Learning With Q-Value Transfer for Multi-Intersection Signal Control

    No full text