Multi-Agent Reinforcement Learning for Connected and Automated Vehicles Control: Recent Advancements and Future Prospects
Connected and automated vehicles (CAVs) have emerged as a potential solution
to the future challenges of developing safe, efficient, and eco-friendly
transportation systems. However, CAV control presents significant challenges,
given the complexity of interconnectivity and coordination required among the
vehicles. To address this, multi-agent reinforcement learning (MARL), with its
notable advancements in addressing complex problems in autonomous driving,
robotics, and human-vehicle interaction, has emerged as a promising tool for
enhancing the capabilities of CAVs. However, there is a notable absence of
current reviews on the state-of-the-art MARL algorithms in the context of CAVs.
Therefore, this paper delivers a comprehensive review of the application of
MARL techniques within the field of CAV control. The paper begins by
introducing MARL, followed by a detailed explanation of its unique advantages
in addressing complex mobility and traffic scenarios that involve multiple
agents. It then surveys MARL applications across the control dimensions of
CAVs, covering critical and typical scenarios such as platooning control, lane
changing, and unsignalized intersections. In addition, the paper reviews the
prominent simulation platforms used to create reliable training environments
for MARL. Lastly,
the paper examines the current challenges associated with deploying MARL within
CAV control and outlines potential solutions that can effectively overcome
these issues. Through this review, the study highlights the tremendous
potential of MARL to enhance the performance and collaboration of CAV control
in terms of safety, travel efficiency, and economy.
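To ground the multi-agent setting the review surveys, here is a minimal sketch of independent tabular Q-learners in a toy platooning-style environment; the environment, reward shaping, and hyperparameters are illustrative assumptions, not drawn from the paper.

```python
import random
from collections import defaultdict

# Toy platoon environment (hypothetical): each vehicle picks an acceleration
# in {-1, 0, +1}; a shared reward favors keeping a unit gap to the vehicle
# ahead. This stand-in is for illustration only, not a traffic simulator.
class ToyPlatoonEnv:
    def __init__(self, n_agents=3):
        self.n = n_agents

    def reset(self):
        self.pos = [float(-i) for i in range(self.n)]  # leader at index 0
        self.vel = [0.0] * self.n
        return self._obs()

    def _obs(self):
        # Each follower observes its (discretized) gap to the predecessor.
        return [round(self.pos[i - 1] - self.pos[i], 1) if i else 0.0
                for i in range(self.n)]

    def step(self, actions):
        for i, a in enumerate(actions):
            self.vel[i] += 0.1 * (a - 1)  # map action {0,1,2} to {-1,0,+1}
            self.pos[i] += self.vel[i]
        gaps = (self.pos[i - 1] - self.pos[i] for i in range(1, self.n))
        reward = -sum(abs(g - 1.0) for g in gaps)  # shared team reward
        return self._obs(), reward

# Independent learners: the simplest MARL baseline, one Q-table per agent.
env = ToyPlatoonEnv()
Q = [defaultdict(lambda: [0.0] * 3) for _ in range(env.n)]
for episode in range(200):
    obs = env.reset()
    for t in range(50):
        acts = [random.randrange(3) if random.random() < 0.1  # epsilon-greedy
                else max(range(3), key=lambda a, i=i: Q[i][obs[i]][a])
                for i in range(env.n)]
        nxt, r = env.step(acts)
        for i in range(env.n):  # per-agent temporal-difference update
            td = r + 0.9 * max(Q[i][nxt[i]]) - Q[i][obs[i]][acts[i]]
            Q[i][obs[i]][acts[i]] += 0.1 * td
        obs = nxt
```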
Danger-aware Adaptive Composition of DRL Agents for Self-navigation
Self-navigation, referred to as the capability of automatically reaching the
goal while avoiding collisions with obstacles, is a fundamental skill required
for mobile robots. Recently, deep reinforcement learning (DRL) has shown great
potential in the development of robot navigation algorithms. However, it is
still difficult to train the robot to learn goal-reaching and
obstacle-avoidance skills simultaneously. On the other hand, although many
DRL-based obstacle-avoidance algorithms have been proposed, few of them are reused
for more complex navigation tasks. In this paper, a novel danger-aware adaptive
composition (DAAC) framework is proposed to combine two independently
DRL-trained agents, one for obstacle avoidance and one for goal reaching, into
a navigation agent without any redesign or retraining. The key to this
adaptive composition approach is that the value function output by the
obstacle-avoidance agent serves as an indicator for evaluating the risk level
of the current situation, which in turn determines the contribution of these
two agents for the next move. Simulation and real-world testing results show
that the composed navigation network can control the robot to accomplish
difficult navigation tasks, e.g., reaching a series of successive goals in an
unknown and complex environment safely and quickly.
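A minimal sketch of the composition rule described above: the obstacle-avoidance critic's value estimate is mapped to a danger weight that sets each agent's contribution to the next action. The sigmoid mapping, its parameters, and the agent interface (.act/.value) are assumptions for illustration, not the authors' exact formulation.

```python
import math

def danger_weight(v_obstacle, v_safe=0.0, scale=1.0):
    """Map the obstacle-avoidance value estimate to a blending weight in
    (0, 1): a low value signals danger, so the obstacle-avoidance agent
    should dominate. The sigmoid form and its parameters are assumptions."""
    return 1.0 / (1.0 + math.exp(scale * (v_obstacle - v_safe)))

def compose_action(obs, goal_agent, avoid_agent):
    """Blend two pre-trained agents' action proposals without retraining.
    The agents are assumed to expose .act(obs) and, for the avoidance
    agent, .value(obs) (the critic output used as the risk indicator)."""
    w = danger_weight(avoid_agent.value(obs))
    a_goal = goal_agent.act(obs)
    a_avoid = avoid_agent.act(obs)
    # Convex combination of the continuous action vectors.
    return [(1 - w) * g + w * v for g, v in zip(a_goal, a_avoid)]
```

Because the weight is read off an already-trained value function, the blend requires no retraining of either agent, which is the core appeal of the approach.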
Bounded Distributed Flocking Control of Nonholonomic Mobile Robots
There have been numerous studies on flocking control for multi-agent systems
whose simplified models are given in terms of point-mass elements. Full
dynamic models, however, pose challenging problems for flocking control of
mobile robots due to their nonholonomic dynamics. Taking practical constraints into
consideration, we propose a novel approach to distributed flocking control of
nonholonomic mobile robots by bounded feedback. The flocking control objectives
consist of velocity consensus, collision avoidance, and cohesion maintenance
among mobile robots. A flocking control protocol based on the information of
neighboring mobile robots is constructed. The theoretical analysis
is conducted with the help of a Lyapunov-like function and graph theory.
Simulation results are shown to demonstrate the efficacy of the proposed
distributed flocking control scheme.
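To make the three control objectives concrete, the sketch below gives a generic bounded flocking law for planar point-mass agents: a velocity-consensus term plus an attraction/repulsion term, saturated elementwise with tanh so the feedback stays bounded. It is a textbook-style simplification under stated assumptions, not the paper's nonholonomic protocol.

```python
import numpy as np

def flocking_input(i, pos, vel, neighbors, d_star=1.0, u_max=1.0):
    """Bounded flocking input for agent i from neighbor information only.
    pos, vel: (n, 2) arrays; neighbors: indices of agents in sensing range.
    The tanh saturation keeps each input component within [-u_max, u_max]."""
    u = np.zeros(2)
    for j in neighbors:
        r = pos[j] - pos[i]
        dist = np.linalg.norm(r) + 1e-9
        # Velocity consensus: drive the relative velocity to zero.
        u += vel[j] - vel[i]
        # Cohesion/repulsion: attract beyond d_star, repel inside it.
        u += (dist - d_star) * (r / dist)
    return u_max * np.tanh(u)  # elementwise saturation -> bounded feedback
```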
Large network multi-level control for CAV and Smart Infrastructure: AI-based Fog-Cloud collaboration
Survey of Recent Multi-Agent Reinforcement Learning Algorithms Utilizing Centralized Training
Much work has been dedicated to the exploration of Multi-Agent Reinforcement
Learning (MARL) paradigms implementing a centralized learning with
decentralized execution (CLDE) approach to achieve human-like collaboration in
cooperative tasks. Here, we discuss variations of centralized training and
describe a recent survey of algorithmic approaches. The goal is to explore how
different implementations of the information-sharing mechanism in centralized
learning may give rise to distinct group-coordinated behaviors in multi-agent
systems performing cooperative tasks.
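As a minimal sketch of the CLDE pattern the survey examines (assuming PyTorch; the dimensions and network shapes are arbitrary placeholders): a centralized critic consumes the joint observations and actions during training, while each actor conditions only on its own observation and can therefore execute in a fully decentralized way.

```python
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 2

class Actor(nn.Module):
    """Decentralized policy: each agent acts on its local observation only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized value function: scores the joint observation-action pair.
    Used only at training time; execution needs the actors alone."""
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, all_obs, all_acts):
        x = torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=1)
        return self.net(x)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
obs = torch.randn(4, N_AGENTS, OBS_DIM)  # batch of joint observations
acts = torch.stack([actors[i](obs[:, i]) for i in range(N_AGENTS)], dim=1)
q = critic(obs, acts)  # centralized training signal over the joint state
```

Information sharing happens only through the centralized critic at training time; at execution the actors run independently, and the surveyed algorithms differ chiefly in how they exploit this property.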