Recently emerged Vision-and-Language Navigation (VLN) tasks have drawn
significant attention in both computer vision and natural language processing
communities. Existing VLN tasks are built for agents that navigate on the
ground, either indoors or outdoors. However, many tasks require intelligent
agents to operate in the sky, such as UAV-based goods delivery,
traffic/security patrols, and scenery tours, to name a few. Navigating in the
sky is more complicated than navigating on the ground because agents need to
consider flying height and reason about more complex spatial relationships. To
fill this gap
and facilitate research in this field, we propose a new task named AerialVLN,
which is UAV-based and oriented toward outdoor environments. We develop a 3D
simulator rendered with near-realistic imagery of 25 city-level scenarios. Our
simulator
supports continuous navigation, environment extension and configuration. We
also propose an extended baseline model based on the widely used cross-modal
alignment (CMA) navigation methods. We find that there is still a
significant gap between the baseline model and human performance, which
suggests that AerialVLN is a new and challenging task. The dataset and code are
available at https://github.com/AirVLN/AirVLN.

Comment: Accepted by ICCV 202