Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
In this paper, we present a hierarchical path planning framework called SG-RL
(subgoal graphs-reinforcement learning), to plan rational paths for agents
maneuvering in continuous and uncertain environments. By "rational", we mean
(1) efficient path planning that eliminates first-move lags, and (2)
collision-free, smooth motion that satisfies the agents' kinematic
constraints. SG-RL works in a
two-level manner. At the first level, SG-RL uses a geometric path-planning
method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract
paths, also called subgoal sequences. At the second level, SG-RL uses an RL
method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal
motion-planning policies which can generate kinematically feasible and
collision-free trajectories between adjacent subgoals. The first advantage of
the proposed method is that SSG mitigates the sparse-reward and
local-minima problems that hinder RL agents; thus, LSPI can be used to
generate paths in
complex environments. The second advantage is that, when the environment
changes slightly (e.g., when unexpected obstacles appear), SG-RL does not need to
reconstruct subgoal graphs and replan subgoal sequences using SSG, since LSPI
can deal with uncertainties by exploiting its generalization ability to handle
changes in environments. Simulation experiments in representative scenarios
demonstrate that, compared with existing methods, SG-RL can work well on
large-scale maps with relatively low action-switching frequencies and shorter
path lengths, and SG-RL can deal with small changes in environments. We further
demonstrate that the design of reward functions and the types of training
environments are important factors for learning feasible policies.
Comment: 20 pages
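The two-level scheme can be sketched as a toy example; this is illustrative only, not the paper's implementation. BFS on a small grid stands in for Simple Subgoal Graphs, and a straight-line interpolation stub stands in for the learned LSPI motion policy (all function names here are hypothetical):

```python
from collections import deque

def subgoal_sequence(grid, start, goal):
    """Level 1 (stand-in for SSG): BFS over free cells returns a
    coarse subgoal sequence from start to goal."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    if goal not in prev:
        return []
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = prev[cell]
    return path[::-1]

def refine_leg(a, b, steps=4):
    """Level 2 stub for the learned LSPI policy: linear interpolation
    between adjacent subgoals (a real policy would respect kinematics)."""
    return [(a[0] + (b[0] - a[0]) * t / steps,
             a[1] + (b[1] - a[1]) * t / steps)
            for t in range(steps + 1)]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]  # 1 = obstacle
subgoals = subgoal_sequence(grid, (0, 0), (2, 0))
trajectory = [p for i in range(len(subgoals) - 1)
              for p in refine_leg(subgoals[i], subgoals[i + 1])]
```

The split mirrors the abstract's point: the geometric level supplies dense intermediate goals, so the learned level never faces the sparse-reward problem of reaching a distant goal in one shot.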
Automatic Intersection Management in Mixed Traffic Using Reinforcement Learning and Graph Neural Networks
Connected automated driving has the potential to significantly improve urban
traffic efficiency, e.g., by alleviating issues due to occlusion. Cooperative
behavior planning can be employed to jointly optimize the motion of multiple
vehicles. Most existing approaches to automatic intersection management,
however, only consider fully automated traffic. In practice, mixed traffic,
i.e., the simultaneous road usage by automated and human-driven vehicles, will
be prevalent. The present work proposes to leverage reinforcement learning and
a graph-based scene representation for cooperative multi-agent planning. We
build upon our previous works that showed the applicability of such machine
learning methods to fully automated traffic. The scene representation is
extended for mixed traffic and considers uncertainty in the human drivers'
intentions. In the simulation-based evaluation, we model measurement
uncertainties through noise processes that are tuned using real-world data. The
paper evaluates the proposed method against an enhanced first-in-first-out
scheme, our baseline for mixed-traffic management. With an increasing share of
automated vehicles, the learned planner significantly increases the vehicle
throughput and reduces the delay due to interaction. Non-automated vehicles
benefit virtually alike.
Comment: 8 pages, 7 figures, 34th IEEE Intelligent Vehicles Symposium (IV),
updated to accepted version
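A minimal sketch of the graph-based scene representation idea, assuming a toy mean-aggregation message-passing step in plain Python; the node features, edge choices, and absence of learned weights are all simplifications, not the paper's architecture:

```python
# Vehicles at an intersection as graph nodes; one round of mean
# aggregation over neighbours. A flag distinguishes automated from
# human-driven vehicles. All values are placeholders, not learned.

def message_passing(features, edges):
    """One round: each node concatenates its own features with the
    mean of its neighbours' features."""
    n = len(features)
    dim = len(features[0])
    neighbours = {i: [] for i in range(n)}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    out = []
    for i in range(n):
        msgs = [features[j] for j in neighbours[i]] or [[0.0] * dim]
        mean = [sum(col) / len(msgs) for col in zip(*msgs)]
        out.append(features[i] + mean)
    return out

# hypothetical node features: [distance to conflict point, speed, is_automated]
features = [[12.0, 8.0, 1.0],   # automated vehicle
            [20.0, 6.0, 0.0],   # human-driven vehicle
            [5.0, 0.0, 1.0]]    # automated vehicle, waiting
edges = [(0, 1), (0, 2)]        # pairs with intersecting paths
embeddings = message_passing(features, edges)
```

The graph structure is what lets one policy handle scenes with any number of vehicles and any mix of automated and human-driven traffic, since the same aggregation applies regardless of node count.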
Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning
Robots that navigate among pedestrians use collision avoidance algorithms to
enable safe and efficient operation. Recent works present deep reinforcement
learning as a framework to model the complex interactions and cooperation among agents.
However, they are implemented using key assumptions about other agents'
behavior that deviate from reality as the number of agents in the environment
increases. This work extends our previous approach to develop an algorithm that
learns collision avoidance among a variety of types of dynamic agents without
assuming they follow any particular behavior rules. This work also introduces a
strategy using an LSTM that enables the algorithm to use observations of an
arbitrary number of other agents, unlike previous methods, which require a
fixed observation size. The proposed algorithm outperforms our previous
approach in
simulation as the number of agents increases, and the algorithm is demonstrated
on a fully autonomous robotic vehicle traveling at human walking speed, without
the use of a 3D Lidar.
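The fixed-size-state idea behind the LSTM strategy can be sketched with a toy recurrent cell; the weights are placeholders and the function names hypothetical, not the paper's trained network:

```python
import math

def recurrent_step(state, obs, w_s=0.5, w_o=0.3):
    """Toy RNN cell: new_state = tanh(w_s*state + w_o*obs), elementwise.
    A real LSTM adds gating, but the folding pattern is the same."""
    return [math.tanh(w_s * s + w_o * o) for s, o in zip(state, obs)]

def encode_agents(observations, state_dim=3):
    """Fold an arbitrary-length list of agent observations into one
    fixed-size vector. Agents are processed farthest-first so the
    closest agents influence the final state most."""
    state = [0.0] * state_dim
    for obs in sorted(observations, key=lambda o: -o[0]):  # o[0] = distance
        state = recurrent_step(state, obs)
    return state

# each observation: [distance, relative vx, relative vy]
two_agents = encode_agents([[1.0, 0.2, -0.1], [3.0, -0.5, 0.0]])
five_agents = encode_agents([[d, 0.1, 0.1] for d in (1, 2, 3, 4, 5)])
```

Whether two or five agents are observed, the encoder emits a vector of the same dimension, which is what lets a single policy network consume scenes with any number of neighbours.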