
    Traffic Optimization Through Waiting Prediction and Evolutive Algorithms

    Traffic control systems require optimization procedures that tune traffic light timing settings in order to improve pedestrian and vehicle mobility. Traffic simulators yield accurate estimates of traffic behavior under different timing configurations, but they require considerable computational time for validation tests. For this reason, this project proposes traffic optimization based on estimating vehicle waiting times with different prediction techniques, and then applying evolutionary algorithms that use this estimation to carry out the optimization. Combining these two techniques considerably reduces computation time, which makes it possible to apply the system at runtime. The tests were carried out on a real traffic junction under different traffic volumes to analyze the performance of the system.
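The surrogate-plus-evolution idea from this abstract could be sketched roughly as follows. Everything concrete here is an illustrative assumption: the `predicted_waiting_time` function stands in for the paper's trained waiting-time predictor (modeled as a simple quadratic penalty around an assumed ideal plan `IDEAL`), and the phase bounds and mutation step are arbitrary.

```python
import random

# Hypothetical surrogate: predicts total vehicle waiting time for a list of
# green-phase durations (seconds). A real system would train this predictor on
# simulator runs; here it is a stand-in quadratic penalty around an assumed
# "ideal" timing plan.
IDEAL = [30, 45, 30, 45]

def predicted_waiting_time(timings):
    return sum((t - i) ** 2 for t, i in zip(timings, IDEAL))

def mutate(timings, step=5):
    # Perturb one randomly chosen phase duration, clamped to [10, 90] seconds.
    i = random.randrange(len(timings))
    child = list(timings)
    child[i] = min(90, max(10, child[i] + random.choice([-step, step])))
    return child

def evolve(pop_size=20, generations=50, n_phases=4, seed=0):
    random.seed(seed)
    population = [[random.randint(10, 90) for _ in range(n_phases)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the best half, refill with mutants.
        population.sort(key=predicted_waiting_time)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=predicted_waiting_time)

best = evolve()
```

Because the surrogate is cheap to evaluate, many candidate timing plans can be scored in the time a single simulator run would take, which is what makes runtime use plausible.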

    Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning

    Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this paper we build two kinds of reinforcement learning agents, deep policy-gradient and value-function based, which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, whereas the value-function based agent first estimates values for all legal control signals and then selects the control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.
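The value-function agent's selection step admits a minimal sketch. The linear Q-estimator, the phase names, and the queue-length state below are assumptions for illustration only; the paper uses a deep network over simulator snapshots.

```python
# Assumed set of legal control signals for one intersection.
LEGAL_PHASES = ["NS_green", "EW_green", "NS_left", "EW_left"]

def q_values(state, weights):
    # Hypothetical linear value estimator: one weight vector per phase,
    # dotted with the state (e.g. queue lengths per approach).
    return {p: sum(w * s for w, s in zip(weights[p], state))
            for p in LEGAL_PHASES}

def select_action(state, weights):
    # Value-function agent: score every legal control signal, then pick
    # the one with the highest estimated value.
    q = q_values(state, weights)
    return max(q, key=q.get)
```

With one-hot weight vectors, the agent simply serves the approach with the longest queue; a learned estimator generalizes this greedy rule.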

    CoLight: Learning Network-level Cooperation for Traffic Signal Control

    Cooperation among traffic signals enables vehicles to move through intersections more quickly. Conventional transportation approaches implement cooperation by pre-calculating the offsets between two intersections; such pre-calculated offsets are not suited to dynamic traffic environments. To enable cooperation among traffic signals, this paper proposes CoLight, a model that uses graph attentional networks to facilitate communication. Specifically, for a target intersection in a network, CoLight can not only incorporate the temporal and spatial influences of neighboring intersections on the target intersection, but also build index-free modeling of neighboring intersections. To the best of our knowledge, we are the first to use graph attentional networks in the setting of reinforcement learning for traffic signal control and to conduct experiments on a large-scale road network with hundreds of traffic signals. In experiments, we demonstrate that by learning the communication, the proposed model achieves superior performance against state-of-the-art methods. Comment: 10 pages. Proceedings of the 28th ACM International Conference on Information and Knowledge Management. ACM, 201
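The attention-based aggregation over neighboring intersections can be illustrated with a single-head, dot-product variant in plain Python. The embeddings and scoring function here are simplified stand-ins for CoLight's learned attention layers, not the paper's actual architecture.

```python
import math

def attention_pool(target, neighbors):
    # Score each neighbor embedding by its dot product with the target
    # intersection's embedding, softmax-normalize the scores, and return
    # the attention-weighted sum (single-head dot-product attention).
    scores = [sum(t * n for t, n in zip(target, nb)) for nb in neighbors]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(target)
    return [sum(w * nb[d] for w, nb in zip(weights, neighbors))
            for d in range(dim)]
```

Because the pooling is a weighted sum over whichever neighbors exist, the model needs no fixed indexing of neighbors, which is the "index-free" property the abstract mentions.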

    An Edge Based Multi-Agent Auto Communication Method for Traffic Light Control

    As smart city infrastructures grow, the Internet of Things (IoT) has been widely used in intelligent transportation systems (ITS). Traditional adaptive traffic signal control based on reinforcement learning (RL) has expanded from one intersection to multiple intersections. In this paper, we propose a multi-agent auto communication (MAAC) algorithm, an innovative adaptive global traffic light control method based on multi-agent reinforcement learning (MARL) and an auto communication protocol in an edge computing architecture. The MAAC algorithm combines a multi-agent auto communication protocol with MARL, allowing an agent to communicate its learned strategies with others to achieve global optimization in traffic signal control. In addition, we present a practicable edge computing architecture for industrial IoT deployment that accounts for limited network transmission bandwidth. We demonstrate that our algorithm outperforms other methods by more than 17% in experiments in a realistic traffic simulation environment.
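The communication idea, in which agents share compact summaries of local state with their neighbors before acting, might be sketched as follows. The normalized queue-pressure encoding is a hypothetical choice made for illustration; it is not the learned protocol from the paper, and keeping messages small reflects the bandwidth constraint the abstract raises.

```python
def encode_message(queues):
    # Compress an intersection's per-approach queue lengths into a small,
    # bandwidth-friendly message: each entry is that approach's share of
    # the total queued traffic (all zeros when the intersection is empty).
    total = sum(queues)
    return [q / total if total else 0.0 for q in queues]

def augmented_observation(own_queues, neighbor_messages):
    # Each agent appends the messages received from neighboring agents to
    # its own local observation before selecting a signal phase, so local
    # decisions can account for upstream pressure.
    obs = list(own_queues)
    for msg in neighbor_messages:
        obs.extend(msg)
    return obs
```

At the edge, each intersection's agent would run locally and exchange only these short vectors with adjacent intersections, rather than streaming raw state to a central controller.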