Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments
Traffic waves are phenomena that emerge when the vehicular density exceeds a
critical threshold. Considering the presence of increasingly automated vehicles
in the traffic stream, a number of research activities have focused on the
influence of automated vehicles on the bulk traffic flow. In the present
article, we demonstrate experimentally that intelligent control of an
autonomous vehicle is able to dampen stop-and-go waves that can arise even in
the absence of geometric or lane-changing triggers. Specifically, our experiments
on a circular track with more than 20 vehicles show that traffic waves emerge
consistently, and that they can be dampened by controlling the velocity of a
single vehicle in the flow. We compare metrics for velocity, braking events,
and fuel economy across experiments. These experimental findings suggest a
paradigm shift in traffic management: flow control will be possible via a few
mobile actuators (less than 5%) long before a majority of vehicles have
autonomous capabilities.
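The dampening idea can be sketched minimally: rather than reacting to every fluctuation of the vehicle ahead, the controlled vehicle tracks a running average speed. This is an illustrative toy (the function name and window size are assumptions), not the controllers actually used in the field experiments.

```python
def smoothed_target_speed(recent_speeds, window=10):
    """Target speed for the controlled vehicle: the running average of
    recently observed speeds. Driving at the local average, instead of
    amplifying every brake tap, is the core wave-dampening intuition."""
    window = min(window, len(recent_speeds))
    return sum(recent_speeds[-window:]) / window

# An oscillating stop-and-go profile averages out to a steady target:
waves = [5.0, 15.0] * 5               # speeds swinging between 5 and 15 m/s
print(smoothed_target_speed(waves))   # -> 10.0
```

Because the controlled vehicle holds a steady speed, followers no longer have to brake in response to its oscillations, which is how a single actuator can attenuate a wave propagating around the ring.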
Multi-agent Reinforcement Learning for Cooperative Lane Changing of Connected and Autonomous Vehicles in Mixed Traffic
Autonomous driving has attracted significant research interests in the past
two decades as it offers many potential benefits, including releasing drivers
from exhausting driving and mitigating traffic congestion, among others.
Despite promising progress, lane-changing remains a great challenge for
autonomous vehicles (AV), especially in mixed and dynamic traffic scenarios.
Recently, reinforcement learning (RL), a powerful data-driven control method,
has been widely explored for lane-changing decision making in AVs, with
encouraging results demonstrated. However, the majority of those studies are
focused on a single-vehicle setting, and lane-changing in the context of
multiple AVs coexisting with human-driven vehicles (HDVs) has received scarce
attention. In this paper, we formulate the lane-changing decision making of
multiple AVs in a mixed-traffic highway environment as a multi-agent
reinforcement learning (MARL) problem, where each AV makes lane-changing
decisions based on the motions of both neighboring AVs and HDVs. Specifically,
a multi-agent advantage actor-critic network (MA2C) is developed with a novel
local reward design and a parameter sharing scheme. In particular, a
multi-objective reward function is proposed to incorporate fuel efficiency,
driving comfort, and safety of autonomous driving. Comprehensive experiments,
conducted under three different traffic densities and various levels of human
driver aggressiveness, show that our proposed MARL framework
consistently outperforms several state-of-the-art benchmarks in terms of
efficiency, safety and driver comfort.
Comment: This paper was published in Autonomous Intelligent Systems (Volume 2,
article number 5, 2022).
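A weighted multi-objective reward of the kind described (fuel efficiency, comfort, safety) can be sketched as follows. The weights, the penalty shapes, and the 1-second headway threshold are illustrative assumptions, not the paper's exact design.

```python
def lane_change_reward(fuel_rate, jerk, min_headway, w=(0.3, 0.3, 0.4)):
    """Per-step reward for one AV agent, combining three objectives.

    fuel_rate:   instantaneous fuel consumption (lower is better)
    jerk:        rate of change of acceleration (comfort proxy)
    min_headway: smallest time gap to any surrounding vehicle, in seconds
    w:           objective weights (fuel, comfort, safety) -- assumptions
    """
    r_fuel = -fuel_rate                             # reward fuel efficiency
    r_comfort = -abs(jerk)                          # penalize harsh motion
    r_safety = -1.0 if min_headway < 1.0 else 0.0   # penalize unsafe gaps
    return w[0] * r_fuel + w[1] * r_comfort + w[2] * r_safety
```

With a local reward like this, each agent is scored on its own neighborhood rather than on a single global signal, which is what makes the parameter-sharing actor-critic scheme scalable to multiple AVs.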
AutonoVi: Autonomous Vehicle Planning with Dynamic Maneuvers and Traffic Constraints
We present AutonoVi, a novel algorithm for autonomous vehicle navigation
that supports dynamic maneuvers and satisfies traffic constraints and norms.
Our approach is based on optimization-based maneuver planning that supports
dynamic lane-changes, swerving, and braking in all traffic scenarios and guides
the vehicle to its goal position. We take into account various traffic
constraints, including collision avoidance with other vehicles, pedestrians,
and cyclists using control velocity obstacles. We use a data-driven approach to
model the vehicle dynamics for control and collision avoidance. Furthermore,
our trajectory computation algorithm takes into account traffic rules and
behaviors, such as stopping at intersections and stoplights, based on an
arc-spline representation. We have evaluated our algorithm in a simulated
environment and tested its interactive performance in urban and highway driving
scenarios with tens of vehicles, pedestrians, and cyclists. These scenarios
include jaywalking pedestrians, sudden stops from high speeds, safely passing
cyclists, a vehicle suddenly swerving into the roadway, and high-density
traffic where the vehicle must change lanes to progress more effectively.
Comment: 9 pages, 6 figures
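The collision-avoidance test behind a velocity obstacle can be sketched geometrically: a candidate relative velocity is unsafe if it points inside the collision cone subtended by the obstacle. This is a simplified 2-D velocity-obstacle check; the paper's control velocity obstacles additionally account for the vehicle's dynamics.

```python
import math

def in_velocity_obstacle(rel_pos, rel_vel, combined_radius):
    """Return True if the relative velocity lies inside the collision cone.

    rel_pos: obstacle position relative to the vehicle, (x, y)
    rel_vel: vehicle velocity relative to the obstacle, (vx, vy)
    combined_radius: sum of the two agents' bounding radii
    """
    dist = math.hypot(*rel_pos)
    if dist <= combined_radius:
        return True                      # already overlapping
    # Half-angle of the collision cone as seen from the vehicle.
    half_angle = math.asin(combined_radius / dist)
    speed = math.hypot(*rel_vel)
    if speed == 0.0:
        return False                     # not moving relative to obstacle
    # Angle between the relative velocity and the line to the obstacle.
    cos_theta = (rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / (dist * speed)
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return math.acos(cos_theta) < half_angle
```

A planner then keeps only candidate maneuvers whose relative velocities fall outside every such cone, for each nearby vehicle, pedestrian, and cyclist.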
Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving
Tactical decision making for autonomous driving is challenging due to the
diversity of environments, the uncertainty in the sensor information, and the
complex interaction with other road users. This paper introduces a general
framework for tactical decision making, which combines the concepts of planning
and learning, in the form of Monte Carlo tree search and deep reinforcement
learning. The method is based on the AlphaGo Zero algorithm, which is extended
to a domain with a continuous state space where self-play cannot be used. The
framework is applied to two different highway driving cases in a simulated
environment and it is shown to perform better than a commonly used baseline
method. The strength of combining planning and learning is also illustrated by
a comparison to using the Monte Carlo tree search or the neural network policy
separately.
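The planning-plus-learning combination rests on the PUCT selection rule from AlphaGo Zero: the tree search explores actions by balancing their estimated value against a neural-network prior. A minimal sketch (the field names and exploration constant are assumptions):

```python
import math

def puct_select(children, c_puct=1.5):
    """Return the index of the child maximizing the PUCT score.

    Each child carries a network prior 'p', a visit count 'n', and an
    accumulated value 'w' (field names are illustrative).
    """
    total_n = sum(ch["n"] for ch in children)

    def score(ch):
        q = ch["w"] / ch["n"] if ch["n"] > 0 else 0.0    # mean value so far
        u = c_puct * ch["p"] * math.sqrt(total_n) / (1 + ch["n"])  # exploration
        return q + u

    return max(range(len(children)), key=lambda i: score(children[i]))
```

An unvisited action with a strong prior is tried before a mediocre but well-visited one, which is how the learned policy network steers the Monte Carlo tree search toward promising tactical decisions.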
An Agent-based Modelling Framework for Driving Policy Learning in Connected and Autonomous Vehicles
Due to the complexity of the natural world, a programmer cannot foresee all
possible situations a connected and autonomous vehicle (CAV) will face during
its operation; hence, CAVs will need to learn to make decisions autonomously.
Through sensing of its surroundings and information exchanged
with other vehicles and road infrastructure, a CAV will have access to large
amounts of useful data. While different control algorithms have been proposed
for CAVs, the benefits brought about by the connectedness of autonomous
vehicles to other vehicles and to the infrastructure, and their implications
for policy learning, have not been investigated in the literature. This paper
investigates a data-driven driving policy learning framework through an
agent-based modelling approach. The contributions of the paper are two-fold. A
dynamic programming
framework is proposed for in-vehicle policy learning with and without
connectivity to neighboring vehicles. The simulation results indicate that
while a CAV can learn to make autonomous decisions, vehicle-to-vehicle (V2V)
communication of information improves this capability. Furthermore, to overcome
the limitations of sensing in a CAV, the paper proposes a novel concept for
infrastructure-led policy learning and communication with autonomous vehicles.
In infrastructure-led policy learning, road-side infrastructure senses and
captures successful vehicle maneuvers and learns an optimal policy from those
temporal sequences, and when a vehicle approaches the road-side unit, the
policy is communicated to the CAV. Deep-imitation learning methodology is
proposed to develop such an infrastructure-led policy learning framework.
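The dynamic-programming step for in-vehicle policy learning can be sketched with tabular value iteration on a toy driving MDP. The state space, actions, and rewards below are illustrative assumptions, not the paper's model.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Tabular value iteration: repeatedly back up the best one-step return.

    transition(s, a) -> next state (deterministic here, for simplicity)
    reward(s, a)     -> immediate reward
    """
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * v[transition(s, a)]
                       for a in actions)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v

# Tiny illustrative MDP: three road cells, the goal is to reach cell 2.
states = [0, 1, 2]
actions = ["stay", "advance"]

def step(s, a):
    return min(s + 1, 2) if a == "advance" else s

def gain(s, a):
    return 1.0 if step(s, a) == 2 else 0.0

v = value_iteration(states, actions, step, gain)
```

In the connected setting, V2V messages effectively enlarge the state each vehicle can condition on; in the infrastructure-led variant, the road-side unit would learn such a value function (or an imitation policy) from observed maneuvers and broadcast it to approaching CAVs.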