We consider the problem of cooperative exploration, in which multiple robots
must jointly explore an unknown region as quickly as possible. Multi-agent
reinforcement learning (MARL) has recently become a trending paradigm for
solving this challenge. However, existing MARL-based methods measure
exploration efficiency in decision-making steps, implicitly assuming that all
agents act in a fully synchronous manner: at each time step, every agent
produces an action simultaneously and every action is executed instantaneously.
Despite its mathematical simplicity, such a synchronous MARL formulation can be
problematic for real-world robotic applications. In practice, different robots
typically take slightly different wall-clock times to complete an atomic
action, and may even drop out periodically due to hardware failures. Simply
waiting for every robot to be ready for its next action can be highly
time-inefficient. Therefore, we propose an
asynchronous MARL solution, Asynchronous Coordination Explorer (ACE), to tackle
this real-world challenge. We first extend a classical MARL algorithm,
multi-agent PPO (MAPPO), to the asynchronous setting and additionally apply
action-delay randomization to encourage the learned policy to generalize to
varying action delays in the real world. Moreover, each navigation agent is
represented by a team-size-invariant CNN-based policy, which greatly benefits
real-robot deployment by handling possible robot loss and allows
bandwidth-efficient inter-agent communication through low-dimensional CNN
features. We first validate our approach in a grid-based scenario. Both
simulation and real-robot results show that ACE reduces actual exploration
time by over 10% compared with classical approaches. We also apply our
framework to a high-fidelity vision-based environment, Habitat, and achieve a
28% improvement in exploration efficiency.

Comment: This paper is accepted at AAMAS 2023. The source code can be found at
https://github.com/yang-xy20/async_mapp
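
As an illustration of the asynchronous execution and action-delay
randomization described above, the following is a minimal sketch, not the
paper's implementation; the agent count, the delay range, and all function
names here are assumed for illustration. Each agent's atomic action takes a
randomized number of ticks, and only agents whose current action has finished
are queried for a new decision, rather than synchronizing the whole team:

    import random

    NUM_AGENTS = 3
    MIN_DELAY, MAX_DELAY = 1, 4  # assumed per-action delay range, in ticks

    def sample_delay() -> int:
        # Randomize how long an atomic action takes, so the learned policy
        # cannot overfit to a single, fixed execution speed.
        return random.randint(MIN_DELAY, MAX_DELAY)

    def policy(agent_id: int, tick: int) -> str:
        # Stand-in for the learned navigation policy.
        return random.choice(["up", "down", "left", "right"])

    def run_episode(num_ticks: int = 20) -> None:
        remaining = [0] * NUM_AGENTS  # ticks left in each agent's action
        for tick in range(num_ticks):
            for i in range(NUM_AGENTS):
                if remaining[i] > 0:
                    remaining[i] -= 1  # current action still executing
                if remaining[i] == 0:
                    # Agent i is idle: query it for a new action immediately
                    # instead of waiting for the rest of the team.
                    action = policy(i, tick)
                    remaining[i] = sample_delay()
                    print(f"tick {tick}: agent {i} starts '{action}' "
                          f"({remaining[i]} ticks)")

    if __name__ == "__main__":
        run_episode()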
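
In the same spirit, a team-size-invariant policy can be sketched as a shared
CNN that encodes every robot's local map into a low-dimensional feature, with
teammate features merged by max-pooling so the input to the action head has a
fixed shape for any team size. The architecture below is a hypothetical
illustration of that idea in PyTorch, not the network from the paper:

    import torch
    import torch.nn as nn

    class InvariantPolicy(nn.Module):
        def __init__(self, feat_dim: int = 32, num_actions: int = 4):
            super().__init__()
            # Shared encoder applied to every robot's local occupancy map.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(8 * 4 * 4, feat_dim), nn.ReLU(),
            )
            self.head = nn.Linear(2 * feat_dim, num_actions)

        def forward(self, own_map, teammate_maps):
            # own_map: (B, 1, H, W); teammate_maps: (B, N, 1, H, W), any N.
            own_feat = self.encoder(own_map)
            b, n = teammate_maps.shape[:2]
            team_feat = self.encoder(teammate_maps.flatten(0, 1)).view(b, n, -1)
            # Max-pooling over teammates keeps the output shape independent
            # of N; only these low-dimensional features are communicated.
            pooled = team_feat.max(dim=1).values
            return self.head(torch.cat([own_feat, pooled], dim=-1))

    # Example: action logits for a team of 1 + 3 robots on 16x16 local maps.
    logits = InvariantPolicy()(torch.rand(2, 1, 16, 16),
                               torch.rand(2, 3, 1, 16, 16))

Because the pooled representation has a fixed size, a robot dropping out simply
removes one slice of teammate_maps, and the per-robot message is just the
feat_dim-dimensional feature vector rather than a full map.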