
    Optimization and simulation of fixed-time traffic signal control in real-world applications

    This paper addresses the question of how to optimize fixed-time traffic signal coordination for real-world applications. To this end, two models are combined: an analytical model that optimizes fixed-time plans based on a cyclically time-expanded network formulation, and a coevolutionary transport simulation that can evaluate the optimized fixed-time plans in large-scale, realistic traffic situations. The coupling of the two models is discussed and applied to a real-world scenario, and the steps required to align the models and improve the results are presented. The optimized fixed-time signals are compared with other signal control approaches in the application; it is found that they also help to improve the performance of actuated signal control.
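
    The abstract describes an iterative coupling between an optimizer and a simulator. Below is a minimal sketch of such an optimize-then-simulate loop; the functions `optimize_fixed_time_plan` and `simulate`, the 90 s cycle, and the toy delay score are illustrative placeholders, not the analytical model or the coevolutionary simulation used in the paper. The sketch only shows the data that flows between the two stages.
```python
# Hedged sketch of an optimize-then-simulate coupling loop (placeholders only).
import random

def optimize_fixed_time_plan(demand):
    """Placeholder for the fixed-time optimization: assign green time
    proportionally to the estimated demand on each approach."""
    total = sum(demand.values()) or 1.0
    return {approach: 90.0 * d / total for approach, d in demand.items()}  # 90 s cycle (assumed)

def simulate(plan, demand):
    """Placeholder for the transport simulation: return updated demand
    estimates and a crude total-delay score for the given plan."""
    delay = sum(d / max(plan[a], 1.0) for a, d in demand.items())
    observed = {a: d * random.uniform(0.9, 1.1) for a, d in demand.items()}
    return observed, delay

demand = {"north": 400.0, "east": 250.0, "south": 380.0, "west": 120.0}  # synthetic veh/h
for it in range(5):  # iterate until optimizer and simulator agree well enough
    plan = optimize_fixed_time_plan(demand)
    demand, delay = simulate(plan, demand)
    print(f"iteration {it}: delay score {delay:.2f}")
```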

    Data-driven linear decision rule approach for distributionally robust optimization of on-line signal control

    We propose a two-stage, on-line signal control strategy for dynamic networks using a linear decision rule (LDR) approach and a distributionally robust optimization (DRO) technique. The first (off-line) stage formulates an LDR that maps real-time traffic data to optimal signal control policies; a DRO problem is solved to optimize the on-line performance of the LDR in the presence of uncertainty in the observed traffic states and ambiguity in their underlying distribution functions. The uncertainty set is calibrated in a data-driven way from historical traffic data. The second (on-line) stage implements a highly efficient linear decision rule whose performance is guaranteed by the off-line computation. We test the proposed signal control procedure in a simulation environment informed by actual traffic data obtained in Glasgow, and demonstrate its potential for on-line operation and deployability on realistic networks, as well as its effectiveness in improving traffic conditions.
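
    To illustrate the two-stage structure, the sketch below fixes the coefficients of an affine decision rule off-line and applies it on-line with a single matrix-vector product. The off-line fit uses ordinary least squares on synthetic "historical" data as a stand-in for the distributionally robust optimization; the dimensions, green-split bounds, and data are assumptions for illustration, not the paper's formulation.
```python
# Hedged sketch of a linear decision rule: off-line fit, cheap on-line evaluation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: observed traffic states x (e.g. queue lengths) and "good" green splits u.
X_hist = rng.uniform(0, 50, size=(200, 4))                        # 4 detector measurements (assumed)
U_hist = 0.2 + 0.6 * X_hist / X_hist.sum(axis=1, keepdims=True)   # toy target splits

# Off-line stage (stand-in for DRO): fit u ≈ u0 + K x by least squares.
A = np.hstack([np.ones((len(X_hist), 1)), X_hist])
coef, *_ = np.linalg.lstsq(A, U_hist, rcond=None)
u0, K = coef[0], coef[1:]

def online_control(x):
    """On-line stage: evaluate the linear decision rule and project onto feasible splits."""
    u = np.clip(u0 + x @ K, 0.1, 0.9)   # assumed minimum/maximum green fractions
    return u / u.sum()                   # normalize to a valid phase split

print(online_control(np.array([30.0, 5.0, 20.0, 10.0])))
```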

    Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning

    Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising results on complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this paper we build two kinds of reinforcement learning agents, deep policy-gradient and value-function based, that predict the best possible traffic signal for an intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, whereas the value-function based agent first estimates values for all legal control signals and then selects the control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.
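
    The value-function based selection rule can be sketched as follows: estimate a value for every legal signal phase from the current state snapshot and pick the phase with the highest value, with occasional exploration during training. The linear value model, the phase and state dimensions, and the random snapshot below are assumptions for illustration; the paper trains deep networks against the SUMO simulator.
```python
# Hedged sketch of value-based phase selection (epsilon-greedy over estimated values).
import numpy as np

rng = np.random.default_rng(1)

N_PHASES = 4     # legal control signals (assumed, e.g. NS-green, EW-green, two protected lefts)
STATE_DIM = 16   # flattened intersection snapshot: queues, current phase, ... (assumed)

W = rng.normal(size=(N_PHASES, STATE_DIM))   # stand-in for a trained value network

def select_phase(state, epsilon=0.05):
    """Estimate a value per legal phase and select greedily, exploring with prob. epsilon."""
    if rng.random() < epsilon:        # occasional exploration during training
        return int(rng.integers(N_PHASES))
    values = W @ state                # one value estimate per legal control signal
    return int(np.argmax(values))     # greedy choice: highest estimated value

state = rng.uniform(0, 1, size=STATE_DIM)     # synthetic snapshot
print("selected phase:", select_phase(state))
```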