Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation
The ability to perform effective planning is crucial for building an
instruction-following agent. When navigating through a new environment, an
agent is challenged with (1) connecting the natural language instructions with
its progressively growing knowledge of the world; and (2) performing long-range
planning and decision making in the form of effective exploration and error
correction. Current methods are still limited on both fronts despite extensive
efforts. In this paper, we introduce the Evolving Graphical Planner (EGP), a
model that performs global planning for navigation based on raw sensory input.
The model dynamically constructs a graphical representation, generalizes the
action space to allow for more flexible decision making, and performs efficient
planning on a proxy graph representation. We evaluate our model on a
challenging Vision-and-Language Navigation (VLN) task with photorealistic
images and achieve superior performance compared to previous navigation
architectures. For instance, we achieve a 53% success rate on the test split of
the Room-to-Room navigation task through pure imitation learning, outperforming
previous navigation architectures by up to 5%.
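The core idea above — dynamically growing a graph of the explored environment and planning over it globally — can be sketched as follows. This is a minimal illustration, not the EGP model itself; the class, method names, and frontier-based action space are assumptions for exposition.

```python
# Minimal sketch of a dynamically grown navigation graph with a global
# action space, loosely inspired by the Evolving Graphical Planner idea.
# All names here are illustrative, not the paper's implementation.

class NavGraph:
    def __init__(self):
        self.edges = {}       # node -> set of neighbouring nodes
        self.visited = set()  # nodes the agent has actually reached

    def expand(self, node, neighbours):
        """Record a newly visited node and its navigable neighbours."""
        self.visited.add(node)
        self.edges.setdefault(node, set()).update(neighbours)
        for n in neighbours:
            self.edges.setdefault(n, set()).add(node)

    def frontier(self):
        """Unvisited nodes reachable from anywhere in the explored graph.

        Planning over this set generalizes the action space: the agent
        can commit to a distant frontier node (error correction via
        backtracking) instead of only choosing among current neighbours.
        """
        return {n for v in self.visited
                for n in self.edges[v] if n not in self.visited}

g = NavGraph()
g.expand("start", {"a", "b"})
g.expand("a", {"c"})
print(sorted(g.frontier()))  # -> ['b', 'c']
```

After visiting "start" and "a", the global frontier still contains "b", so a planner scoring frontier nodes against the instruction can recover from having gone down the wrong branch.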
Multimodal Attention Networks for Low-Level Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) is a challenging task in which an agent needs to follow a language-specified path to reach a target destination. The goal gets even harder as the actions available to the agent get simpler and move towards low-level, atomic interactions with the environment. This setting takes the name of low-level VLN. In this paper, we strive to create an agent able to tackle three key issues: multi-modality, long-term dependencies, and adaptability to different locomotive settings. To that end, we devise "Perceive, Transform, and Act" (PTA): a fully-attentive VLN architecture that leaves the recurrent approach behind and is the first Transformer-like architecture incorporating three different modalities -- natural language, images, and low-level actions -- for agent control. In particular, we adopt an early fusion strategy to merge lingual and visual information efficiently in our encoder. We then propose to refine the decoding phase with a late fusion extension between the agent's history of actions and the perceptual modalities. We experimentally validate our model on two datasets: PTA achieves promising results in low-level VLN on R2R and good performance on the recently proposed R4R benchmark. Our code is publicly available at https://github.com/aimagelab/perceive-transform-and-act
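The early-fusion step described above — merging lingual and visual information before the encoder — can be sketched as projecting both token streams into a shared space and concatenating them into one sequence, so self-attention can mix modalities at every layer. The dimensions, projections, and additive modality embeddings below are illustrative assumptions, not PTA's actual configuration.

```python
import numpy as np

# Hypothetical early-fusion sketch: language tokens and image-region
# tokens are projected to a shared model dimension and concatenated
# into a single sequence before self-attention.
rng = np.random.default_rng(0)

d_model = 8
lang_tokens = rng.normal(size=(5, 16))  # 5 word features (dim 16)
img_tokens = rng.normal(size=(3, 32))   # 3 image-region features (dim 32)

W_lang = rng.normal(size=(16, d_model))  # per-modality projections
W_img = rng.normal(size=(32, d_model))

# Additive modality embeddings let the encoder tell token types apart.
mod_lang = np.zeros(d_model)
mod_img = np.ones(d_model)

fused = np.concatenate([lang_tokens @ W_lang + mod_lang,
                        img_tokens @ W_img + mod_img], axis=0)
print(fused.shape)  # (8, 8): one joint sequence fed to the encoder
```

A late-fusion decoder, by contrast, would keep the action-history stream separate and merge it with these perceptual features only after decoding has begun.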
Improving Vision-and-Language Navigation by Generating Future-View Image Semantics
Vision-and-Language Navigation (VLN) is the task that requires an agent to
navigate through the environment based on natural language instructions. At
each step, the agent takes the next action by selecting from a set of navigable
locations. In this paper, we aim to take one step further and explore whether
the agent can benefit from generating the potential future view during
navigation. Intuitively, humans form an expectation of what the future
environment will look like based on the natural language instructions and
surrounding views, which aids correct navigation. Hence, to equip the agent
with this ability to generate the semantics of future navigation views, we
first propose three proxy tasks during the agent's in-domain pre-training:
Masked Panorama Modeling (MPM), Masked Trajectory Modeling (MTM), and Action
Prediction with Image Generation (APIG). These three objectives teach the model
to predict missing views in a panorama (MPM), predict missing steps in the full
trajectory (MTM), and generate the next view based on the full instruction and
navigation history (APIG), respectively. We then fine-tune the agent on the VLN
task with an auxiliary loss that minimizes the difference between the view
semantics generated by the agent and the ground truth view semantics of the
next step. Empirically, our VLN-SIG achieves the new state-of-the-art on both
the Room-to-Room dataset and the CVDN dataset. We further show that our agent
learns to fill in missing patches in future views qualitatively, which brings
more interpretability over agents' predicted actions. Lastly, we demonstrate
that learning to predict future view semantics also enables the agent to have
better performance on longer paths.
Comment: CVPR 2023 (Project webpage: https://jialuli-luka.github.io/VLN-SIG)
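The fine-tuning objective described above — a navigation loss plus an auxiliary term penalising the gap between generated and ground-truth next-view semantics — can be sketched as a weighted sum. The cross-entropy form, the weight, and the function names are assumptions for illustration, not the paper's exact loss.

```python
import math

# Illustrative combined objective: navigation loss plus a weighted
# semantic-generation term. Treating view semantics as a distribution
# over classes and using cross-entropy is one plausible instantiation.

def cross_entropy(pred_probs, target):
    """Negative log-likelihood of the ground-truth semantic class."""
    return -math.log(pred_probs[target])

def vln_loss(action_loss, pred_probs, target, weight=0.1):
    """Total loss = action-prediction loss + weighted auxiliary loss."""
    return action_loss + weight * cross_entropy(pred_probs, target)

# Agent predicts next-view semantics [0.1, 0.7, 0.2]; true class is 1.
total = vln_loss(action_loss=1.2, pred_probs=[0.1, 0.7, 0.2], target=1)
```

Confident semantic predictions (high probability on the true class) shrink the auxiliary term, so the gradient pushes the agent to anticipate upcoming views while still optimising action selection.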