Intelligent Escape of Robotic Systems: A Survey of Methodologies, Applications, and Challenges
Intelligent escape is an interdisciplinary field that employs artificial
intelligence (AI) techniques to equip robots with the capacity to react
intelligently to potential dangers in dynamic, intricate, and unpredictable
scenarios. As the emphasis on safety becomes increasingly paramount and
robotic technologies continue to advance, a wide range of intelligent escape
methodologies has been developed in recent years.
This paper presents a comprehensive survey of state-of-the-art research work on
intelligent escape of robotic systems. Four main classes of intelligent escape
methods are reviewed: planning-based, partitioning-based, learning-based, and
bio-inspired methodologies.
The strengths and limitations of existing methods are summarized. In addition,
potential applications of intelligent escape are discussed in various domains,
such as search and rescue, evacuation, military security, and healthcare. In an
effort to develop new approaches to intelligent escape, this survey identifies
current research challenges and provides insights into future research trends
in intelligent escape.
Comment: This paper is accepted by the Journal of Intelligent and Robotic
Systems
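Planning-based escape methods of the kind this survey categorizes often build on reactive motion planning such as artificial potential fields. The sketch below illustrates that general idea only; the `escape_step` function, its gains, and the scenario are hypothetical and not taken from the survey.

```python
import math

def escape_step(robot, goal, threats, k_att=1.0, k_rep=2.0, influence=3.0, step=0.1):
    """One reactive planning step: attract toward the goal, repel from threats."""
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for tx, ty in threats:
        dx, dy = robot[0] - tx, robot[1] - ty
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            # Repulsion grows sharply as the threat gets closer.
            mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy) or 1.0
    return (robot[0] + step * fx / norm, robot[1] + step * fy / norm)

# Drive the robot toward a goal while skirting a threat placed off-axis.
pos = (0.0, 0.0)
for _ in range(50):
    pos = escape_step(pos, goal=(5.0, 0.0), threats=[(1.0, 0.2)])
```

Each step moves the robot away from nearby dangers while still making progress toward the goal, which is the core reactive behavior that learning-based and bio-inspired escape methods then extend.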
Learning Multi-Pursuit Evasion for Safe Targeted Navigation of Drones
Safe navigation of drones in the presence of adversarial physical attacks
from multiple pursuers is a challenging task. This paper proposes a novel
approach, asynchronous multi-stage deep reinforcement learning (AMS-DRL), to
train adversarial neural networks that can learn from the actions of multiple
evolved pursuers and adapt quickly to their behavior, enabling the drone to
avoid attacks and reach its target. Specifically, AMS-DRL evolves adversarial
agents in a pursuit-evasion game where the pursuers and the evader are
asynchronously trained in a bipartite-graph fashion across multiple stages. Our
approach guarantees convergence by establishing a Nash equilibrium among the
agents through game-theoretic analysis. We evaluate our method in extensive simulations and
show that it outperforms baselines with higher navigation success rates. We
also analyze how parameters such as the relative maximum speed affect
navigation performance. Furthermore, we have conducted physical experiments and
validated the effectiveness of the trained policies in real-time flights. A
success rate heatmap is introduced to elucidate how spatial geometry influences
navigation outcomes. Project website:
https://github.com/NTU-ICG/AMS-DRL-for-Pursuit-Evasion
Comment: Accepted by IEEE Transactions on Artificial Intelligence
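The asynchronous multi-stage schedule the abstract describes, in which one side of the pursuit-evasion game trains while the other is frozen, can be sketched as a toy alternation. The scalar "skill" values and the `train_side` update below are illustrative placeholders for trained neural policies, not the authors' algorithm.

```python
def train_side(skill, frozen_opponent_skill, steps=10, lr=0.1):
    """Improve one side's scalar 'skill' against a frozen opponent (toy update)."""
    for _ in range(steps):
        skill += lr * max(0.0, frozen_opponent_skill + 1.0 - skill)
    return skill

def ams_drl_schedule(n_stages=4, n_pursuers=3):
    """Alternate training between pursuers and the evader across stages."""
    evader = 0.0
    pursuers = [0.0] * n_pursuers
    history = []
    for stage in range(n_stages):
        if stage % 2 == 0:
            # Even stages: train every pursuer against the frozen evader.
            pursuers = [train_side(p, evader) for p in pursuers]
        else:
            # Odd stages: train the evader against the strongest frozen pursuer.
            evader = train_side(evader, max(pursuers))
        history.append((stage, evader, max(pursuers)))
    return evader, pursuers, history
```

The key property the schedule illustrates is co-evolution: each side's later stages face a stronger frozen opponent than its earlier ones, which is what drives both sides toward equilibrium behavior.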
Hierarchical Multi-Agent Reinforcement Learning for Air Combat Maneuvering
The application of artificial intelligence to simulate air-to-air combat
scenarios is attracting increasing attention. To date, the high-dimensional
state and action spaces, the high complexity of situation information (such as
imperfect and filtered information, stochasticity, incomplete knowledge about
mission targets) and the nonlinear flight dynamics pose significant challenges
for accurate air combat decision-making. These challenges are exacerbated when
multiple heterogeneous agents are involved. We propose a hierarchical
multi-agent reinforcement learning framework for air-to-air combat with
multiple heterogeneous agents. In our framework, the decision-making process is
divided into two levels of abstraction, where heterogeneous low-level policies
control the action of individual units, and a high-level commander policy
issues macro commands given the overall mission targets. Low-level policies are
trained for accurate unit combat control. Their training is organized in a
learning curriculum with increasingly complex training scenarios and
league-based self-play. The commander policy is trained on mission targets
given pre-trained low-level policies. Our empirical validation demonstrates
the advantages of our design choices.
Comment: 22nd International Conference on Machine Learning and Applications
(ICMLA 23)
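The two-level hierarchy the abstract outlines can be sketched as a commander that issues per-unit macro commands and heterogeneous low-level policies that turn those commands into unit actions. The command names, unit types, and rules below are invented for illustration and are not taken from the paper.

```python
def commander_policy(mission_state):
    """High level: issue a macro command per unit from the mission picture."""
    commands = {}
    for unit_id, info in mission_state["units"].items():
        commands[unit_id] = "engage" if info["enemy_in_range"] else "patrol"
    return commands

def low_level_policy(unit_type, command, observation):
    """Low level: heterogeneous units interpret the same command differently."""
    if command == "engage":
        return "fire" if unit_type == "fighter" else "mark_target"
    return "hold_course" if observation["on_route"] else "correct_heading"

mission_state = {"units": {
    "f1": {"enemy_in_range": True},
    "s1": {"enemy_in_range": False},
}}
commands = commander_policy(mission_state)
actions = {
    "f1": low_level_policy("fighter", commands["f1"], {"on_route": True}),
    "s1": low_level_policy("scout", commands["s1"], {"on_route": False}),
}
```

The design point is that the commander reasons only over mission-level state, while each unit's policy handles its own flight-level details, which is what makes curriculum training of the two levels separable.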
On the role and opportunities in teamwork design for advanced multi-robot search systems
Intelligent robotic systems are becoming ever more present in our lives across a multitude of domains such as industry, transportation, agriculture, security, healthcare, and even education. Such systems enable humans to focus on interesting and sophisticated tasks while robots accomplish tasks that are too tedious, routine, or potentially dangerous for humans to do. Recent advances in perception technologies and accompanying hardware, mainly attributed to rapid advancements in the deep-learning ecosystem, enable the deployment of robotic systems equipped with onboard sensors as well as the computational power to perform autonomous reasoning and decision-making online. While there has been significant progress in expanding the capabilities of single- and multi-robot systems during the last decades across a multitude of domains and applications, there are still many promising areas of research that can advance the state of cooperative search systems that employ multiple robots. In this article, several prospective avenues of research in teamwork cooperation with considerable potential for advancing multi-robot search systems are visited and discussed. In previous works, we have shown that multi-agent search tasks can greatly benefit from intelligent cooperation between team members and can achieve performance close to the theoretical optimum. The techniques applied can be used in a variety of domains, including planning against adversarial opponents, controlling forest fires, and coordinating search-and-rescue missions. The state of the art in methods of multi-robot search across several selected domains of application is explained, highlighting the pros and cons of each method and providing an up-to-date view of the current state of these domains and their future challenges.
Multi-Agent Reinforcement Learning for the Low-Level Control of a Quadrotor UAV
This paper presents multi-agent reinforcement learning frameworks for the
low-level control of a quadrotor UAV. While single-agent reinforcement learning
has been successfully applied to quadrotors, training a single monolithic
network is often data-intensive and time-consuming. To address this, we
decompose the quadrotor dynamics into the translational dynamics and the yawing
dynamics, and assign a reinforcement learning agent to each part for efficient
training and performance improvements. The proposed multi-agent framework,
which leverages the underlying structure of the quadrotor dynamics for
low-level control, is a unique contribution. Further, we introduce
regularization terms to mitigate steady-state errors and to avoid aggressive
control inputs. Through benchmark studies with sim-to-sim transfer, we show
that the proposed multi-agent reinforcement learning substantially improves
the convergence rate of training and the stability of the controlled dynamics.
Comment: 8 pages, 6 figures, 3 tables
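The decomposition the abstract describes, one agent for the translational dynamics and one for the yawing dynamics, can be sketched with two placeholder policies whose outputs are combined into a single four-dimensional command. The linear PD-style laws, gains, and state layout below are illustrative stand-ins for trained networks, not the paper's controllers.

```python
def translational_agent(pos_err, vel):
    """Translational part: map position error and velocity to thrust and
    roll/pitch torques (toy PD law in place of a trained policy)."""
    thrust = 9.81 + 2.0 * pos_err[2] - 1.0 * vel[2]   # altitude channel
    tau_roll = 0.5 * pos_err[1] - 0.2 * vel[1]
    tau_pitch = 0.5 * pos_err[0] - 0.2 * vel[0]
    return thrust, tau_roll, tau_pitch

def yaw_agent(yaw_err, yaw_rate):
    """Yawing part: map heading error and yaw rate to a yaw torque."""
    return 0.8 * yaw_err - 0.3 * yaw_rate

def combined_action(state):
    """Merge both agents' outputs into one (thrust, tau_x, tau_y, tau_z) command."""
    thrust, tr, tp = translational_agent(state["pos_err"], state["vel"])
    ty = yaw_agent(state["yaw_err"], state["yaw_rate"])
    return (thrust, tr, tp, ty)

state = {"pos_err": (0.0, 0.0, 1.0), "vel": (0.0, 0.0, 0.0),
         "yaw_err": 0.5, "yaw_rate": 0.0}
action = combined_action(state)
```

Because the yaw channel is nearly decoupled from the translational channels in quadrotor dynamics, each agent sees a smaller state-action space than a single monolithic policy would, which is the intuition behind the reported training efficiency.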
Vision-based Learning for Drones: A Survey
Drones as advanced cyber-physical systems are undergoing a transformative
shift with the advent of vision-based learning, a field that is rapidly gaining
prominence due to its profound impact on drone autonomy and functionality.
Unlike existing task-specific surveys, this review offers a
comprehensive overview of vision-based learning in drones, emphasizing its
pivotal role in enhancing their operational capabilities under various
scenarios. We start by elucidating the fundamental principles of vision-based
learning, highlighting how it significantly improves drones' visual perception
and decision-making processes. We then categorize vision-based control methods
into indirect, semi-direct, and end-to-end approaches from the
perception-control perspective. We further explore various applications of
vision-based drones with learning capabilities, ranging from single-agent
systems to more complex multi-agent and heterogeneous system scenarios, and
underscore the challenges and innovations characterizing each area. Finally, we
explore open questions and potential solutions, paving the way for ongoing
research and development in this dynamic and rapidly evolving field. With the
growth of large language models (LLMs) and embodied intelligence, vision-based
learning for drones offers a promising but challenging road towards artificial
general intelligence (AGI) in the 3D physical world.
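The indirect-versus-end-to-end distinction the survey draws can be sketched by contrasting a modular perception-planning-control pipeline with a single learned map from pixels to command. Every stage below is a stub invented for illustration; the "image" is a toy brightness grid and none of the functions come from the survey.

```python
def perceive(image):
    """Perception stage: pick the brightest (most open) column as free space."""
    free_col = max(range(len(image[0])), key=lambda c: sum(row[c] for row in image))
    return {"free_heading": free_col - len(image[0]) // 2}

def plan(state):
    """Planning stage: turn the perceived state into a turn waypoint."""
    return {"turn": state["free_heading"]}

def control(waypoint):
    """Control stage: turn the waypoint into an actuator command."""
    return ("yaw_rate", 0.1 * waypoint["turn"])

def indirect_pipeline(image):
    """Indirect approach: explicit perception -> planning -> control modules."""
    return control(plan(perceive(image)))

def end_to_end_policy(image):
    """End-to-end approach: one learned map from pixels to command (stub)."""
    free_col = max(range(len(image[0])), key=lambda c: sum(row[c] for row in image))
    return ("yaw_rate", 0.1 * (free_col - len(image[0]) // 2))

# A tiny 3x5 "brightness" image where the rightmost column is most open.
image = [[0, 0, 1, 0, 9],
         [0, 1, 0, 0, 9],
         [1, 0, 0, 0, 9]]
```

The trade-off the survey's taxonomy captures is visible even in this stub: the indirect pipeline exposes interpretable intermediate state at each stage, while the end-to-end policy collapses those stages into one opaque mapping.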