Talk2Nav: Long-Range Vision-and-Language Navigation with Dual Attention and Spatial Memory
The role of robots in society keeps expanding, and with it the need to
interact and communicate with humans. To keep such interaction intuitive,
we provide automatic wayfinding based on verbal navigational
instructions. Our first contribution is the creation of a large-scale dataset
with verbal navigation instructions. To this end, we have developed an
interactive visual navigation environment based on Google Street View; we
further design an annotation method that highlights mined anchor landmarks and
the local directions between them, helping annotators formulate typical, human
references to them. The annotation task was crowdsourced on the AMT platform to
construct the new Talk2Nav dataset, which pairs routes with verbal navigation
instructions. Our second
contribution is a new learning method. Inspired by spatial cognition research
on the mental conceptualization of navigational instructions, we introduce a
soft dual attention mechanism defined over the segmented language instructions
to jointly extract two partial instructions -- one for matching the next
upcoming visual landmark and the other for matching the local directions to the
next landmark. Along similar lines, we also introduce a spatial memory scheme
to encode the local directional transitions. Our work takes advantage of
advances in two lines of research: the mental formalization of verbal
navigational instructions and the training of neural network agents for
automatic wayfinding.
Extensive experiments show that our method significantly outperforms previous
navigation methods. For demo video, dataset and code, please refer to our
project page: https://www.trace.ethz.ch/publications/2019/talk2nav/index.html
Comment: 20 pages, 10 figures. Demo video:
https://people.ee.ethz.ch/~arunv/resources/talk2nav.mp
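To make the dual attention idea concrete, below is a minimal sketch of a soft dual attention over segmented instruction embeddings, written in PyTorch. This is not the authors' released code: the module name, tensor shapes, and the use of two separate query projections conditioned on the agent state are illustrative assumptions based on the abstract.

```python
# Minimal sketch (assumed interface, not the paper's implementation) of a
# soft dual attention: two attention heads over the same instruction
# segments, one for the next landmark, one for the local directions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDualAttention(nn.Module):
    """Attends twice over the segmented instruction: once to extract the
    partial instruction matching the next visual landmark, once for the
    partial instruction matching the local directions to that landmark."""

    def __init__(self, seg_dim: int, state_dim: int):
        super().__init__()
        # Separate query projections let the two heads focus on
        # different segments of the same instruction.
        self.landmark_query = nn.Linear(state_dim, seg_dim)
        self.direction_query = nn.Linear(state_dim, seg_dim)

    def _attend(self, query: torch.Tensor, segments: torch.Tensor) -> torch.Tensor:
        # query: (B, D), segments: (B, N, D) -> attention-weighted context (B, D)
        scores = torch.bmm(segments, query.unsqueeze(2)).squeeze(2)  # (B, N)
        weights = F.softmax(scores, dim=1)                           # soft attention
        return torch.bmm(weights.unsqueeze(1), segments).squeeze(1)

    def forward(self, state: torch.Tensor, segments: torch.Tensor):
        # state: agent state (B, state_dim); segments: (B, N, seg_dim)
        landmark_ctx = self._attend(self.landmark_query(state), segments)
        direction_ctx = self._attend(self.direction_query(state), segments)
        return landmark_ctx, direction_ctx

# Example usage with hypothetical dimensions:
attn = SoftDualAttention(seg_dim=256, state_dim=512)
segments = torch.randn(4, 6, 256)  # 6 instruction segments per route
state = torch.randn(4, 512)
landmark_ctx, direction_ctx = attn(state, segments)
```

Conditioning both heads on the same agent state, but through separate query projections, is one way to jointly extract the landmark-matching and direction-matching partial instructions from a single segmented input, as the abstract describes.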
Core Challenges in Embodied Vision-Language Planning
Recent advances in the areas of multimodal machine learning and artificial
intelligence (AI) have led to the development of challenging tasks at the
intersection of Computer Vision, Natural Language Processing, and Embodied AI.
Whereas many approaches and previous surveys have characterised one or
two of these dimensions, there has not been a holistic analysis at the center
of all three. Moreover, even when combinations of these topics are considered,
more focus is placed on describing, e.g., current architectural methods, as
opposed to also illustrating high-level challenges and opportunities for the
field. In this survey paper, we discuss Embodied Vision-Language Planning
(EVLP) tasks, a family of prominent embodied navigation and manipulation
problems that jointly use computer vision and natural language. We propose a
taxonomy to unify these tasks and provide an in-depth analysis and comparison
of the new and current algorithmic approaches, metrics, simulated environments,
as well as the datasets used for EVLP tasks. Finally, we present the core
challenges that we believe new EVLP works should seek to address, and we
advocate for task construction that enables model generalizability and furthers
real-world deployment.
Comment: 35 pages