251 research outputs found

    Improving Traffic Efficiency in a Road Network by Adopting Decentralised Multi-Agent Reinforcement Learning and Smart Navigation

    In the future, mixed traffic flow will consist of human-driven vehicles (HDVs) and connected autonomous vehicles (CAVs). Effective traffic management is a global challenge, especially in urban areas with many intersections. Much research has focused on solving this problem to increase intersection network performance. Reinforcement learning (RL) is a new approach to optimising traffic signal lights that overcomes the disadvantages of traditional methods. In this paper, we propose an integrated approach that combines the multi-agent advantage actor-critic (MA-A2C) and smart navigation (SN) to solve the congestion problem in a road network under mixed traffic conditions. The A2C algorithm combines the advantages of value-based and policy-based methods to stabilise training by reducing variance, and it overcomes the limitations of centralised and independent MARL. In addition, the SN technique reroutes traffic load to alternate paths to avoid congestion at intersections. To evaluate the robustness of our approach, we compare our model against independent-A2C (I-A2C) and max pressure (MP). The results show that our proposed approach outperforms the others in terms of average waiting time, speed, and queue length. The simulation results also suggest that the model is effective when the CAV penetration rate is greater than 20%.
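    The variance reduction that actor-critic methods such as A2C rely on comes from subtracting the critic's value baseline from the bootstrapped return. A minimal sketch of that advantage signal (function and parameter names are illustrative, not the paper's code):

```python
def advantage(reward, value_s, value_next, gamma=0.99, done=False):
    """One-step advantage estimate: A(s, a) = r + gamma * V(s') - V(s).

    Subtracting the critic's baseline V(s) from the bootstrapped return
    lowers the variance of the policy gradient without biasing it.
    """
    bootstrap = 0.0 if done else gamma * value_next
    return reward + bootstrap - value_s
```

    In a multi-agent setting, each intersection agent would compute this signal from its own critic while sharing experience during training.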

    RLPG: Reinforcement Learning Approach for Dynamic Intra-Platoon Gap Adaptation for Highway On-Ramp Merging

    A platoon refers to a group of vehicles traveling together in very close proximity using automated driving technology. Owing to its immense capacity to improve fuel efficiency, driving safety, and driver comfort, platooning technology has garnered substantial attention from the autonomous vehicle research community. Although highly advantageous, recent research has uncovered that an excessively small intra-platoon gap can impede traffic flow during highway on-ramp merging. While existing control-based methods allow for adaptation of the intra-platoon gap to improve traffic flow, making an optimal control decision under the complex dynamics of traffic conditions remains a challenge due to the massive computational complexity. In this paper, we present the design, implementation, and evaluation of a novel reinforcement learning framework that adaptively adjusts the intra-platoon gap of an individual platoon member to maximize traffic flow in response to dynamically changing, complex traffic conditions for highway on-ramp merging. The framework's state space has been meticulously designed in consultation with the transportation literature to take into account critical traffic parameters that bear direct relevance to merging efficiency. An intra-platoon gap decision-making method based on the deep deterministic policy gradient algorithm is created to incorporate the continuous action space and ensure precise, continuous adaptation of the intra-platoon gap. An extensive simulation study demonstrates the effectiveness of the reinforcement learning-based approach for significantly improving traffic flow in various highway on-ramp merging scenarios.
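    A continuous-action algorithm like DDPG typically emits a tanh-squashed value in [-1, 1], which must be mapped to a physical gap. One plausible mapping, with illustrative gap bounds not taken from the paper:

```python
def scale_gap(action, gap_min=2.0, gap_max=15.0):
    """Map a tanh-squashed policy output in [-1, 1] to an intra-platoon
    gap in metres. Clamping guards against out-of-range network outputs."""
    a = max(-1.0, min(1.0, action))
    return gap_min + (a + 1.0) * 0.5 * (gap_max - gap_min)
```

    The continuous action space is what motivates DDPG here: a discrete action set would quantise the gap and lose the "precise and continuous adaptation" the abstract describes.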

    Deep Reinforcement Learning Approach for Lagrangian Control: Improving Freeway Bottleneck Throughput Via Variable Speed Limit

    Connected vehicles (CVs) will enable new applications to improve traffic flow. The focus of this dissertation is to investigate how reinforcement learning (RL) control of the variable speed limit (VSL) through CVs can be generalized to improve traffic flow at different freeway bottlenecks. Three bottlenecks are investigated: a sag curve, where the gradient changes from negative to positive, reducing roadway capacity and causing congestion; a lane reduction, where three lanes merge into two and cause congestion; and an on-ramp, where an increase in demand on a multilane freeway causes a capacity drop. An RL algorithm is developed and implemented in a simulation environment to control a VSL upstream of the bottleneck, manipulating the inflow of vehicles to minimize delays and increase throughput. CVs are assumed to receive VSL messages through Infrastructure-to-Vehicle (I2V) communications technologies. Asynchronous Advantage Actor-Critic (A3C) algorithms are developed for each bottleneck to determine optimal VSL policies. Through these RL control algorithms, the speed of CVs is manipulated upstream of the bottleneck to avoid or minimize congestion. Various market penetration rates for CVs are considered in the simulations. It is demonstrated that the RL algorithm is able to adapt to stochastic arrivals of CVs, achieve significant improvements even at low market penetration rates, and find solutions for all three bottlenecks. The results also show that the RL-based solutions outperform feedback-control-based solutions.
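    The feedback-control baseline the RL policies are compared against can be sketched as an ALINEA-style integral controller: lower the limit when bottleneck occupancy exceeds a target, raise it otherwise. The gain, target, and bounds below are illustrative assumptions, not the dissertation's values:

```python
def feedback_vsl(current_limit, occupancy, target=0.25, gain=100.0,
                 v_min=30.0, v_max=100.0):
    """Integral-style feedback VSL update (km/h).

    Moves the posted limit proportionally to the occupancy error at the
    bottleneck, clamped to a legal speed range.
    """
    new_limit = current_limit + gain * (target - occupancy)
    return max(v_min, min(v_max, new_limit))
```

    An RL controller replaces this fixed rule with a learned policy over the same observation, which is how it can exploit CV arrival patterns the feedback law cannot.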

    Towards Robust Deep Reinforcement Learning for Traffic Signal Control: Demand Surges, Incidents and Sensor Failures

    Reinforcement learning (RL) constitutes a promising solution for alleviating the problem of traffic congestion. In particular, deep RL algorithms have been shown to produce adaptive traffic signal controllers that outperform conventional systems. However, to be reliable in highly dynamic urban areas, such controllers need to be robust with respect to a series of exogenous sources of uncertainty. In this paper, we develop an open-source callback-based framework for the flexible evaluation of different deep RL configurations in a traffic simulation environment. With this framework, we investigate how deep RL-based adaptive traffic controllers perform under different scenarios, namely demand surges caused by special events, capacity reductions from incidents, and sensor failures. We extract several key insights for the development of robust deep RL algorithms for traffic control and propose concrete designs to mitigate the impact of the considered exogenous uncertainties.
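    A callback-based evaluation framework of this kind typically exposes hooks that fire at each simulation step, so scenario checks (surge detection, sensor dropout, incident injection) can be composed without touching the training loop. A minimal sketch under assumed names, not the paper's actual API:

```python
class EvalHarness:
    """Toy callback-based evaluation loop: hooks observe per-step metrics."""

    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        """Attach a hook called as callback(step, metrics) after every step."""
        self._callbacks.append(callback)

    def run(self, metric_stream):
        """Feed a sequence of per-step metric dicts to every registered hook."""
        for step, metrics in enumerate(metric_stream):
            for callback in self._callbacks:
                callback(step, metrics)


def make_surge_detector(threshold, out):
    """Example hook: record steps where queue length exceeds a threshold,
    e.g. during a special-event demand surge."""
    def hook(step, metrics):
        if metrics["queue"] > threshold:
            out.append(step)
    return hook
```

    Because hooks are independent, a sensor-failure scenario can be evaluated by registering a callback that masks detector readings, with no change to the controller under test.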

    Routing in optical transport networks with deep reinforcement learning

    Deep reinforcement learning (DRL) has recently revolutionized the resolution of decision-making and automated control problems. In the context of networking, there is a growing trend in the research community to apply DRL algorithms to optimization problems such as routing. However, existing proposals fail to achieve good results, often under-performing traditional routing techniques. We argue that the reason behind this poor performance is that they use straightforward representations of networks. In this paper, we propose a DRL-based solution for routing in optical transport networks (OTNs). Contrary to previous works, we propose a more elaborate representation of the network state that reduces the level of knowledge abstraction required for DRL agents and easily captures the singularities of network topologies. Our evaluation results show that, using our novel representation, DRL agents achieve better performance and learn how to route traffic in OTNs significantly faster compared to state-of-the-art representations. Additionally, we reverse-engineered the routing strategy learned by our DRL agent and, as a result, found a routing algorithm that outperforms well-known traditional routing heuristics.
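    One simple way to expose link-level state to a routing agent, rather than a raw topology, is a normalised residual-capacity feature per link; a reverse-engineered policy can then look like a bottleneck-aware path heuristic. Both functions below are illustrative assumptions, not the paper's exact representation or learned strategy:

```python
def residual_features(capacity, load):
    """Per-link normalised residual capacity in [0, 1]: one candidate
    state representation for a DRL routing agent."""
    return {e: (capacity[e] - load[e]) / capacity[e] for e in capacity}


def greedy_route(paths, features):
    """Pick the candidate path whose bottleneck (minimum-residual) link
    has the most spare capacity -- the kind of widest-path rule a
    trained agent can be reverse-engineered into."""
    return max(paths, key=lambda p: min(features[e] for e in p))
```

    On this representation, the agent's action reduces to choosing among a few candidate paths, which is what lets it learn faster than from an abstract topology encoding.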