
    LibSignal: An Open Library for Traffic Signal Control

    This paper introduces a library for cross-simulator comparison of reinforcement learning models in traffic signal control tasks. The library implements recent state-of-the-art reinforcement learning models with extensible interfaces and unified cross-simulator evaluation metrics. It supports the simulators commonly used in traffic signal control, including Simulation of Urban MObility (SUMO) and CityFlow, along with multiple benchmark datasets for fair comparison. We conducted experiments to validate our implementations of the models and to calibrate the simulators so that results from one simulator can serve as a reference for the other. Based on the validated models and calibrated environments, this paper compares and reports the performance of current state-of-the-art RL algorithms across different datasets and simulators. This is the first time these methods have been compared fairly on the same datasets across different simulators.
    Comment: 11 pages + 6 pages appendix. Accepted by NeurIPS 2022 Workshop: Reinforcement Learning for Real Life. Website: https://darl-libsignal.github.io
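
    The core design point described above, a single environment interface so that identical agents and metrics can run against both SUMO and CityFlow, can be sketched as follows. This is an illustrative Python outline only, not LibSignal's actual API; the class, method, and metric names (TrafficSignalEnv, agent.act, info["delay"]) are assumptions made for the sketch.

    from abc import ABC, abstractmethod

    class TrafficSignalEnv(ABC):
        """Hypothetical unified interface a simulator backend (e.g. SUMO or
        CityFlow) would implement so agents and metrics stay simulator-agnostic."""

        @abstractmethod
        def reset(self):
            """Start an episode and return the initial observation."""

        @abstractmethod
        def step(self, action):
            """Advance one control interval; return (obs, reward, done, info)."""

    def evaluate(agent, env, episodes=10):
        """Score an agent with the same delay metric regardless of backend."""
        total_delay = 0.0
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                obs, reward, done, info = env.step(agent.act(obs))
                total_delay += info["delay"]  # hypothetical common metric key
        return total_delay / episodes

    Because both backends expose the same reset/step contract and report the same metric key, a score from one simulator is directly comparable to a score from the other, which is the kind of cross-simulator fairness the paper targets.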

    Adaptive Coordination Offsets for Signalized Arterial Intersections using Deep Reinforcement Learning

    One of the most critical components of an urban transportation system is the coordination of intersections in arterial networks. With the advent of data-driven approaches to traffic control systems, deep reinforcement learning (RL) has gained significant traction in traffic control research. Most proposed deep RL solutions for traffic control directly modify either the phase order or the phase timings; such approaches can lead to unfair situations, such as bypassing low-volume links for several cycles, in the name of optimizing traffic flow. To address these issues with current approaches, we propose a deep RL framework that dynamically adjusts coordination offsets based on traffic states while preserving the planned phase timings and order derived from model-based methods. This framework allows us to improve arterial coordination while preserving fairness for competing streams of traffic at an intersection. Using a validated and calibrated traffic model, we trained the policy of a deep RL agent that aims to reduce travel delays in the network. We evaluated the resulting policy by comparing its performance against the phase offsets obtained from a state-of-the-practice baseline, SYNCHRO. The resulting policy dynamically readjusts phase offsets in response to changes in traffic demand. Simulation results show that the proposed deep RL agent outperformed SYNCHRO on average, reducing delay time by 13.21% in the AM scenario, 2.42% in the noon scenario, and 6.2% in the PM scenario. Finally, we also show the robustness of our agent to extreme traffic conditions, such as demand surges and localized traffic incidents.
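
    The key constraint in this framework is that the agent acts only on coordination offsets: the phase order and green splits from the model-based plan stay fixed, which is what preserves fairness for competing streams. A minimal Python sketch of that offset-only action space follows; the names, step size, and signal plan values are hypothetical illustrations, not the paper's code.

    import numpy as np

    # Fixed phase plan from a model-based method: order and timings (seconds)
    # are never touched by the agent, only the offsets between intersections.
    PHASE_PLAN = [("NS_green", 40), ("EW_green", 35)]
    CYCLE = sum(duration for _, duration in PHASE_PLAN)

    def apply_action(offsets, action, step=2, cycle=CYCLE):
        """Shift each intersection's coordination offset by {-step, 0, +step}
        seconds, wrapping modulo the cycle length; the phase plan is untouched."""
        deltas = (np.asarray(action) - 1) * step  # discrete actions in {0, 1, 2}
        return (np.asarray(offsets) + deltas) % cycle

    # Three coordinated intersections: advance the first, delay the second,
    # hold the third.
    offsets = apply_action([0, 10, 20], action=[2, 0, 1])  # -> [2, 8, 20]

    Restricting the action space this way keeps the policy within the envelope of the planned signal timing, so even a poorly trained agent cannot starve a low-volume approach the way a policy with full control over phases could.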