Multi-Agent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC)

Abstract

Traffic congestion in the Greater Toronto Area costs Canada $6 billion/year and is expected to grow to $15 billion/year in the next few decades. Adaptive Traffic Signal Control (ATSC) is a promising technique to alleviate traffic congestion. For medium to large transportation networks, coordinated ATSC becomes a challenging problem because the number of system states and actions grows exponentially with the number of networked intersections. Efficient and robust controllers can be designed using a multi-agent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the control of traffic lights around a single traffic junction. This paper presents a novel, decentralized, and coordinated adaptive real-time traffic signal control system, Multi-Agent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC), that aims to minimize the total vehicle delay in the traffic network. The system is tested using microscopic traffic simulation software (PARAMICS) on a network of five signalized intersections in Downtown Toronto. The performance of MARLIN-ATSC is compared against two approaches: conventional pre-timed signal control (B1) and independent RL-based control agents (B2), i.e., with no coordination. The results show that network-wide average delay savings range from 32% to 63% relative to B1 and from 7% to 12% relative to B2 under different demand levels and arrival profiles.
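To illustrate the kind of per-intersection learner the abstract contrasts against (the uncoordinated B2 baseline), the following is a minimal, hypothetical Q-learning sketch in Python. The state and reward encodings (discretized queue lengths, negative change in delay), class and method names, and hyperparameter values are illustrative assumptions, not the paper's implementation; the coordinated MARLIN mode would additionally condition each agent on its neighbours' policies, which is not shown here.

```python
import random
from collections import defaultdict


class IndependentSignalAgent:
    """Hypothetical Q-learning agent for one intersection (B2-style, no coordination).

    Assumed encodings (not the paper's exact design):
      state  -- tuple of discretized queue lengths on each approach
      action -- index of the green phase to activate for the next interval
      reward -- negative change in cumulative vehicle delay at this intersection
    """

    def __init__(self, n_phases, alpha=0.1, gamma=0.95, epsilon=0.05):
        self.n_phases = n_phases   # number of candidate green phases
        self.alpha = alpha         # learning rate
        self.gamma = gamma         # discount factor
        self.epsilon = epsilon     # exploration rate
        # Q-table: state -> list of action values, initialized lazily to zeros
        self.q = defaultdict(lambda: [0.0] * n_phases)

    def choose_phase(self, state):
        # Epsilon-greedy phase selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n_phases)
        values = self.q[state]
        return max(range(self.n_phases), key=values.__getitem__)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```

In a simulation loop, each signalized intersection would own one such agent: observe the local state, call `choose_phase`, apply the phase for one control interval, then call `update` with the observed delay-based reward.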
