A Survey on Traffic Signal Control Methods
Traffic signal control is an important and challenging real-world problem,
which aims to minimize the travel time of vehicles by coordinating their
movements at the road intersections. Current traffic signal control systems in
use still rely heavily on oversimplified information and rule-based methods,
although we now have richer data, more computing power and advanced methods to
drive the development of intelligent transportation. With the growing interest
in intelligent transportation using machine learning methods like reinforcement
learning, this survey covers the widely acknowledged transportation approaches
and a comprehensive list of recent literature on reinforcement learning for
traffic signal control. We hope this survey can foster interdisciplinary
research on this important topic.
Comment: 32 pages
Self-Organization in Traffic Lights: Evolution of Signal Control with Advances in Sensors and Communications
Traffic signals are ubiquitous devices that first appeared in 1868. Recent
advances in information and communications technology (ICT) have led to
unprecedented improvements in such areas as mobile handheld devices (i.e.,
smartphones), the electric power industry (i.e., smart grids), transportation
infrastructure, and vehicle area networks. Given the trend towards
interconnectivity, it is only a matter of time before vehicles communicate with
one another and with infrastructure. In fact, several pilots of such
vehicle-to-vehicle and vehicle-to-infrastructure (e.g. traffic lights and
parking spaces) communication systems are already operational. This survey of
autonomous and self-organized traffic signaling control has been undertaken
with these potential developments in mind. Our research results indicate that,
while many sophisticated techniques have attempted to improve the scheduling of
traffic signal control, either real-time sensing of traffic patterns or a
priori knowledge of traffic flow is required to optimize traffic. Once this is
achieved, communication between traffic signals will serve to vastly improve
overall traffic efficiency.
Flow: A Modular Learning Framework for Autonomy in Traffic
The rapid development of autonomous vehicles (AVs) holds vast potential for
transportation systems through improved safety, efficiency, and access to
mobility. However, due to numerous technical, political, and human factors
challenges, new methodologies are needed to design vehicles and transportation
systems for these positive outcomes. This article tackles technical challenges
arising from the partial adoption of autonomy: partial control, partial
observation, complex multi-vehicle interactions, and the sheer variety of
traffic settings represented by real-world networks. The article presents a
modular learning framework which leverages deep reinforcement learning methods
to address complex traffic dynamics. Modules are composed to capture common
traffic phenomena (traffic jams, lane changing, intersections). Learned control
laws are found to exceed human driving performance by at least 40% with only
5-10% adoption of AVs. In partially-observed single-lane traffic, a small
neural network control law can eliminate stop-and-go traffic -- surpassing all
known model-based controllers, achieving near-optimal performance, and
generalizing to out-of-distribution traffic densities.
Comment: 14 pages, 8 figures; new experiments and analysis
Distributed traffic light control at uncoupled intersections with real-world topology by deep reinforcement learning
This work examines the implications of uncoupled intersections with local
real-world topology and sensor setup on traffic light control approaches.
Control approaches are evaluated with respect to: Traffic flow, fuel
consumption and noise emission at intersections.
The real-world road network of Friedrichshafen is depicted, preprocessed and
the present traffic light controlled intersections are modeled with respect to
state space and action space.
Different strategies, comprising fixed-time, gap-based, and time-based control
approaches as well as our deep reinforcement learning (DRL) based control
approach, are implemented and assessed. Our novel DRL approach allows modeling
the traffic light control (TLC) action space with respect to phase selection as
well as the selection of transition timings. It was found that real-world
topologies, and thus irregularly arranged intersections, influence the
performance of traffic light control approaches. This is observed even within
the same intersection types (n-arm, m-phase). Moreover, we showed that these
influences can be dealt with efficiently by our DRL-based control approach.
Comment: 32nd Conference on Neural Information Processing Systems, Workshop on
Machine Learning for Intelligent Transportation Systems
Intelligent Traffic Light Control Using Distributed Multi-agent Q Learning
The combination of Artificial Intelligence (AI) and Internet-of-Things (IoT),
which is denoted as AI-powered Internet-of-Things (AIoT), is capable of
processing the huge amounts of data generated by a large number of devices and
handling complex problems in social infrastructures. As AI and IoT technologies
are becoming mature, in this paper we propose to apply AIoT technologies to
traffic light control, an essential component of intelligent transportation
systems, to improve the efficiency of a smart city's road system.
Specifically, various sensors such as surveillance cameras provide real-time
information for intelligent traffic light control system to observe the states
of both motorized traffic and non-motorized traffic. In this paper, we propose
an intelligent traffic light control solution by using distributed multi-agent
Q learning, considering the traffic information at the neighboring
intersections as well as local motorized and non-motorized traffic, to improve
the overall performance of the entire control system. By using the proposed
multi-agent Q learning algorithm, our solution aims to optimize both the
motorized and non-motorized traffic. In addition, we considered many real-world
constraints/rules for traffic light control and integrated these constraints
into the learning algorithm, which facilitates deploying the proposed solution
in real operational scenarios. We conducted numerical
simulations for a real-world map with real-world traffic data. The simulation
results show that our proposed solution outperforms existing solutions in terms
of vehicle and pedestrian queue lengths, waiting time at intersections, and
many other key performance metrics.
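The abstract above does not specify the agents' update rule; as a rough, hedged sketch, the tabular Q-learning core that a single intersection agent in such a distributed scheme builds on could look like the following. The state and action encodings, the hyperparameter values, and the class name are illustrative assumptions, not the paper's actual design.

```python
# Minimal tabular Q-learning core for one traffic-light agent.
# State/action encodings and hyperparameters are hypothetical
# illustrations, not the paper's actual design.
from collections import defaultdict
import random

class QAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions        # e.g. available signal phases
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection: explore with prob. epsilon.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update toward the TD target.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In the distributed setting described by the paper, each agent's state would additionally fold in information from neighboring intersections and non-motorized traffic.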
Diagnosing Reinforcement Learning for Traffic Signal Control
With the increasing availability of traffic data and advance of deep
reinforcement learning techniques, there is an emerging trend of employing
reinforcement learning (RL) for traffic signal control. A key question for
applying RL to traffic signal control is how to define the reward and state.
The ultimate objective in traffic signal control is to minimize the travel
time, which is difficult to reach directly. Hence, existing studies often
define reward as an ad-hoc weighted linear combination of several traffic
measures. However, there is no guarantee that the travel time will be optimized
with the reward. In addition, recent RL approaches use more complicated state
(e.g., image) in order to describe the full traffic situation. However, none of
the existing studies has discussed whether such a complex state representation
is necessary. This extra complexity may lead to significantly slower learning
process but may not necessarily bring significant performance gain.
In this paper, we propose to re-examine the RL approaches through the lens of
classic transportation theory. We ask the following questions: (1) How should
we design the reward so that one can guarantee to minimize the travel time? (2)
How to design a state representation which is concise yet sufficient to obtain
the optimal solution? Our proposed method LIT is theoretically supported by the
classic traffic signal control methods in transportation field. LIT has a very
simple state and reward design, thus can serve as a building block for future
RL approaches to traffic signal control. Extensive experiments on both
synthetic and real datasets show that our method significantly outperforms the
state-of-the-art traffic signal control methods.
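The abstract argues that a concise state and reward can suffice. As a hedged sketch of what such a minimal design can look like, using per-lane vehicle counts plus the active phase as state and negative total queue length as reward; these particular choices are illustrative assumptions, not necessarily LIT's exact formulation:

```python
# Sketch of a deliberately simple state/reward design for
# signal-control RL. The exact quantities used here are an
# assumption for illustration, not necessarily LIT's definitions.

def make_state(lane_counts, current_phase):
    """Concise state: vehicle count per incoming lane plus active phase."""
    return tuple(lane_counts) + (current_phase,)

def reward(queue_lengths):
    """Reward: negative sum of queued vehicles across incoming lanes."""
    return -sum(queue_lengths)
```

The appeal of a design this small is exactly what the abstract claims: a low-dimensional state speeds up learning, and a single queue-based reward avoids the ad-hoc weighted combinations the authors criticize.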
Learning Phase Competition for Traffic Signal Control
Increasingly available city data and advanced learning techniques have
empowered people to improve the efficiency of our city functions. Among them,
improving the urban transportation efficiency is one of the most prominent
topics. Recent studies have proposed to use reinforcement learning (RL) for
traffic signal control. Different from traditional transportation approaches
which rely heavily on prior knowledge, RL can learn directly from the feedback.
On the other hand, without a careful model design, existing RL methods
typically take a long time to converge and the learned models may not be able
to adapt to new scenarios. For example, a model that is trained well for
morning traffic may not work for the afternoon traffic because the traffic flow
could be reversed, resulting in a very different state representation. In this
paper, we propose a novel design called FRAP, which is based on the intuitive
principle of phase competition in traffic signal control: when two traffic
signals conflict, priority should be given to the one with the larger traffic
(i.e., higher demand). Through the phase competition modeling, our model
achieves invariance to symmetrical cases such as flipping and rotation in
traffic flow. By conducting comprehensive experiments, we demonstrate that our
model finds better solutions than existing RL methods in the complicated
all-phase selection problem, converges much faster during training, and
achieves superior generalizability for different road structures and traffic
conditions.
Multi-Agent Deep Reinforcement Learning for Large-scale Traffic Signal Control
Reinforcement learning (RL) is a promising data-driven approach for adaptive
traffic signal control (ATSC) in complex urban traffic networks, and deep
neural networks further enhance its learning power. However, centralized RL is
infeasible for large-scale ATSC due to the extremely high dimension of the
joint action space. Multi-agent RL (MARL) overcomes the scalability issue by
distributing the global control to each local RL agent, but it introduces new
challenges: now the environment becomes partially observable from the viewpoint
of each local agent due to limited communication among agents. Most existing
studies in MARL focus on designing efficient communication and coordination
among traditional Q-learning agents. This paper presents, for the first time, a
fully scalable and decentralized MARL algorithm for the state-of-the-art deep
RL agent: advantage actor critic (A2C), within the context of ATSC. In
particular, two methods are proposed to stabilize the learning procedure, by
improving the observability and reducing the learning difficulty of each local
agent. The proposed multi-agent A2C is compared against independent A2C and
independent Q-learning algorithms, in both a large synthetic traffic grid and a
large real-world traffic network of Monaco city, under simulated peak-hour
traffic dynamics. Results demonstrate its optimality, robustness, and sample
efficiency over other state-of-the-art decentralized MARL algorithms.
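At the heart of the A2C agents described above is the advantage signal that weights policy-gradient updates. As a rough illustration, a generic one-step advantage estimate, not the paper's multi-agent variant, can be written as:

```python
# One-step advantage estimate used in A2C: A = r + gamma * V(s') - V(s).
# This is the generic textbook form, not the paper's stabilized
# multi-agent variant; gamma and the value inputs are placeholders.

def advantage(reward, value_s, value_next, gamma=0.99, done=False):
    # No bootstrapping past a terminal state.
    bootstrap = 0.0 if done else gamma * value_next
    return reward + bootstrap - value_s
```

The paper's contribution lies in how each local agent's observation and reward are augmented so that this signal stays informative despite limited communication between agents.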
Optimal Control Theory in Intelligent Transportation Systems Research - A Review
Continuous motorization and urbanization around the globe lead to an
expansion of population in major cities. Therefore, ever-growing pressure
imposed on the existing mass transit systems calls for a better technology,
Intelligent Transportation Systems (ITS), to solve many new and demanding
management issues. Many studies in the extant ITS literature attempted to
address these issues within which various research methodologies were adopted.
However, few papers have summarized what optimal control theory (OCT), one of
the sharpest tools for tackling management issues in engineering, does in
solving these issues. It is both important and interesting to
answer the following two questions. (1) How does OCT contribute to ITS research
objectives? (2) What are the research gaps and possible future research
directions? We searched 11 top transportation and control journals and reviewed
41 research articles in ITS area in which OCT was used as the main research
methodology. We categorized the articles by four different ways to address our
research questions. We can conclude from the review that OCT is widely used to
address various aspects of management issues in ITS within which a large
portion of the studies aimed to reduce traffic congestion. We also critically
discussed these studies and pointed out some possible future research
directions in which OCT can be applied.
Internet of Smart-Cameras for Traffic Lights Optimization in Smart Cities
Smart and decentralized control systems have recently been proposed to handle
the growing traffic congestion in urban cities. Proposed smart traffic light
solutions based on Wireless Sensor Network and Vehicular Ad-hoc NETwork are
either unreliable and inflexible or complex and costly. Furthermore, the
handling of special vehicles such as emergency vehicles is still not viable, especially
during busy hours. Inspired by the emergence of distributed smart cameras, we
present a novel approach to traffic control at intersections. Our approach uses
smart cameras at intersections along with image understanding for real-time
traffic monitoring and assessment. Besides understanding the traffic flow, the
cameras can detect and track special vehicles and help prioritize emergency
cases. Traffic violations can be identified as well and traffic statistics
collected. In this paper, we introduce a flexible, adaptive and distributed
control algorithm that uses the information provided by distributed smart
cameras to efficiently control traffic signals. Experimental results show that
our collision-free approach outperforms the state of the art in terms of the
average user's waiting time in the queue and improves the routing of emergency
vehicles in a congested intersection area.
Comment: 12 pages