Vision-and-Language Navigation (VLN) requires an agent to follow language
instructions to navigate through 3D environments. One main challenge in VLN is
the limited availability of photorealistic training environments, which makes
it hard for agents to generalize to new, unseen environments. To address this problem,
we propose PanoGen, a generation method that can potentially create an infinite
number of diverse panoramic environments conditioned on text. Specifically, we
collect room descriptions by captioning the room images in existing
Matterport3D environments, and leverage a state-of-the-art text-to-image
diffusion model to generate new panoramic environments.
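As a concrete illustration of the caption-collection step, the sketch below uses an off-the-shelf BLIP-2 captioner from Hugging Face transformers; the specific captioning model and decoding settings are illustrative assumptions, not necessarily the ones used in the paper.

```python
# Hedged sketch of caption collection: an off-the-shelf image captioner
# (BLIP-2 here, an illustrative choice) turns each Matterport3D room view
# into a text description for the diffusion model to condition on.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

def caption_room_view(path: str) -> str:
    # Caption a single room view image; max_new_tokens is an assumption.
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
    ids = model.generate(**inputs, max_new_tokens=40)
    return processor.batch_decode(ids, skip_special_tokens=True)[0].strip()
```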
We use recursive outpainting over the generated images to create consistent
360-degree panorama views. Conditioning on text descriptions keeps the
semantics of our new panoramic environments close to those of the original
environments, so that object co-occurrence in the panoramas follows human
intuition, while image outpainting introduces ample diversity in room
appearance and layout.
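A minimal sketch of the recursive-outpainting idea follows, assuming the Hugging Face diffusers Stable Diffusion inpainting pipeline as the text-to-image model; the shift width, number of steps, and stitching details are illustrative assumptions rather than the paper's exact recipe.

```python
# Minimal sketch of recursive outpainting: repeatedly shift the current
# view left and let a diffusion inpainting model fill the newly exposed
# strip, conditioned on the same room description. Model choice, strip
# width, and step count are assumptions, not the paper's exact settings.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

def outpaint_panorama(first_view: Image.Image, caption: str,
                      steps: int = 8, strip: int = 256) -> list[Image.Image]:
    views, current = [first_view], first_view
    for _ in range(steps):
        # Keep the right part of the previous view as context on the left;
        # mask the right strip so the model outpaints new content there.
        canvas = Image.new("RGB", (512, 512))
        canvas.paste(current.crop((strip, 0, 512, 512)), (0, 0))
        mask = Image.new("L", (512, 512), 0)
        mask.paste(255, (512 - strip, 0, 512, 512))  # white = regenerate
        current = pipe(prompt=caption, image=canvas, mask_image=mask).images[0]
        views.append(current)
    return views  # overlapping views that stitch into a 360-degree panorama
```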
We explore two ways of utilizing PanoGen in VLN pre-training and
fine-tuning. For VLN pre-training, we generate instructions for paths in our
PanoGen environments with a speaker built on a pre-trained
vision-and-language model; during fine-tuning, we augment the agents' visual
observations with our panoramic environments to avoid overfitting to the seen
environments.
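To make the fine-tuning augmentation concrete, here is a hedged sketch of swapping in PanoGen observations along a training path; the feature shapes, replacement granularity, and probability are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch of observation augmentation during fine-tuning:
# with some probability, a viewpoint's panoramic features are swapped for
# features extracted from a PanoGen panorama, so the agent never memorizes
# the seen environments' exact appearance.
import random
import torch

def augment_path_observations(path_feats: list[torch.Tensor],
                              panogen_feats: list[torch.Tensor],
                              p_replace: float = 0.5) -> list[torch.Tensor]:
    # path_feats: per-viewpoint panorama features along a training path,
    # each of shape [num_views, feat_dim] (e.g., 36 discretized views);
    # shapes and p_replace are illustrative assumptions.
    return [pg if random.random() < p_replace else orig
            for orig, pg in zip(path_feats, panogen_feats)]
```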
Empirically, learning with our PanoGen environments achieves new
state-of-the-art results on the Room-to-Room, Room-for-Room, and CVDN datasets.
Pre-training with our PanoGen speaker data is especially effective for CVDN,
which has under-specified instructions and requires commonsense knowledge.
Lastly, we show that the agent benefits from training with more generated
panoramic environments, suggesting promising gains from scaling up the
PanoGen environments.

Project Webpage: https://pano-gen.github.io