Learning with Latent Language
The named concepts and compositional operators present in natural language
provide a rich source of information about the kinds of abstractions humans use
to navigate the world. Can this linguistic background knowledge improve the
generality and efficiency of learned classifiers and control policies? This
paper aims to show that using the space of natural language strings as a
parameter space is an effective way to capture natural task structure. In a
pretraining phase, we learn a language interpretation model that transforms
inputs (e.g. images) into outputs (e.g. labels) given natural language
descriptions. To learn a new concept (e.g. a classifier), we search directly in
the space of descriptions to minimize the interpreter's loss on training
examples. Crucially, our models do not require language data to learn these
concepts: language is used only in pretraining to impose structure on
subsequent learning. Results on image classification, text editing, and
reinforcement learning show that, in all settings, models with a linguistic
parameterization outperform those without.
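To make the "language as parameter space" idea concrete, here is a minimal sketch (not the authors' code; the interpreter interface and all names are hypothetical): learning a new concept reduces to a discrete search over candidate description strings for the one the pretrained interpreter finds most consistent with the training examples.

```python
# Hypothetical sketch of searching the space of natural language strings
# as a parameter space. `interpreter` stands in for the pretrained
# language interpretation model; `candidate_descriptions` and `examples`
# are illustrative placeholders.

def description_loss(interpreter, description, examples):
    """Average interpreter loss on (input, label) pairs when
    conditioned on a single natural-language description."""
    losses = [interpreter.loss(x, y, description) for x, y in examples]
    return sum(losses) / len(losses)

def fit_concept(interpreter, candidate_descriptions, examples):
    """'Training' here is discrete search over strings, not gradient
    descent: pick the description that best explains the examples."""
    return min(
        candidate_descriptions,
        key=lambda d: description_loss(interpreter, d, examples),
    )
```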
Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions
A long-term goal of AI research is to build intelligent agents that can
communicate with humans in natural language, perceive the environment, and
perform real-world tasks. Vision-and-Language Navigation (VLN) is a fundamental
and interdisciplinary research topic toward this goal, and it has received
increasing attention from the natural language processing, computer vision, robotics, and
machine learning communities. In this paper, we review contemporary studies in
the emerging field of VLN, covering tasks, evaluation metrics, methods, etc.
Through structured analysis of current progress and challenges, we highlight
the limitations of current VLN and opportunities for future work. This paper
serves as a thorough reference for the VLN research community.
Comment: 19 pages. Accepted to ACL 2022.
Core Challenges in Embodied Vision-Language Planning
Recent advances in the areas of multimodal machine learning and artificial
intelligence (AI) have led to the development of challenging tasks at the
intersection of Computer Vision, Natural Language Processing, and Embodied AI.
Whereas many approaches and previous surveys have characterized one or two of
these dimensions, there has not been a holistic analysis at the center
of all three. Moreover, even when combinations of these topics are considered,
more focus is placed on describing, e.g., current architectural methods, as
opposed to also illustrating high-level challenges and opportunities for the
field. In this survey paper, we discuss Embodied Vision-Language Planning
(EVLP) tasks, a family of prominent embodied navigation and manipulation
problems that jointly use computer vision and natural language. We propose a
taxonomy to unify these tasks and provide an in-depth analysis and comparison
of the new and current algorithmic approaches, metrics, simulated environments,
as well as the datasets used for EVLP tasks. Finally, we present the core
challenges that we believe new EVLP works should seek to address, and we
advocate for task construction that enables model generalizability and furthers
real-world deployment.
Comment: 35 pages.
Target-Driven Structured Transformer Planner for Vision-Language Navigation
Vision-language navigation is the task of directing an embodied agent to
navigate in 3D scenes with natural language instructions. For the agent,
inferring the long-term navigation target from visual-linguistic clues is
crucial for reliable path planning but has rarely been studied in the
literature. In this article, we propose a Target-Driven Structured
Transformer Planner (TD-STP) for long-horizon goal-guided and room layout-aware
navigation. Specifically, we devise an Imaginary Scene Tokenization mechanism
for explicit estimation of the long-term target (even located in unexplored
environments). In addition, we design a Structured Transformer Planner which
elegantly incorporates the explored room layout into a neural attention
architecture for structured and global planning. Experimental results
demonstrate that our TD-STP substantially improves previous best methods'
success rate by 2% and 5% on the test sets of the R2R and REVERIE benchmarks,
respectively. Our code is available at https://github.com/YushengZhao/TD-STP.
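As a rough illustration of how an explored room layout might be folded into transformer attention, in the spirit of the Structured Transformer Planner described above: each explored (or imagined) room becomes a token, and the planner attends over room tokens conditioned on the instruction to score long-term targets. This is not the released implementation; every module, dimension, and name below is an assumption for illustration.

```python
# Hypothetical sketch: room-layout-aware target scoring with attention.
import torch
import torch.nn as nn

class StructuredPlannerSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)  # per-room target score

    def forward(self, instruction_emb, room_tokens):
        # instruction_emb: (B, L, D) encoded instruction
        # room_tokens:     (B, R, D) one token per explored/imagined room
        # Rooms query the instruction, so each room embedding is refined
        # by the language it is supposed to satisfy.
        refined, _ = self.attn(room_tokens, instruction_emb, instruction_emb)
        return self.score(refined).squeeze(-1)  # (B, R) target logits

# Usage: pick the highest-scoring room as the long-term navigation target.
planner = StructuredPlannerSketch()
logits = planner(torch.randn(1, 12, 256), torch.randn(1, 6, 256))
target_room = logits.argmax(dim=-1)
```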
Language-conditioned Learning for Robotic Manipulation: A Survey
Language-conditioned robotic manipulation represents a cutting-edge area of
research, enabling seamless communication and cooperation between humans and
robotic agents. This field focuses on teaching robotic systems to comprehend
and execute instructions conveyed in natural language. To achieve this, the
development of robust language understanding models capable of extracting
actionable insights from textual input is essential. In this comprehensive
survey, we systematically explore recent advancements in language-conditioned
approaches within the context of robotic manipulation. We analyze these
approaches based on their learning paradigms, which encompass reinforcement
learning, imitation learning, and the integration of foundational models, such
as large language models and vision-language models. Furthermore, we conduct an
in-depth comparative analysis, considering aspects like semantic information
extraction, environment & evaluation, auxiliary tasks, and task representation.
Finally, we outline potential future research directions in the realm of
language-conditioned learning for robotic manipulation, with a focus on
generalization capabilities and safety issues. The GitHub repository of this
paper can be found at
https://github.com/hk-zh/language-conditioned-robot-manipulation-model
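As a rough illustration of the shared interface behind the methods this survey covers, the sketch below conditions a policy jointly on an observation embedding and an instruction embedding. The fusion-by-concatenation, dimensions, and names are placeholder assumptions, not any specific surveyed model.

```python
# Minimal sketch of a language-conditioned manipulation policy.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=64, text_dim=32, action_dim=7):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(obs_dim + text_dim, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),  # e.g. a 7-DoF end-effector command
        )

    def forward(self, obs_emb, instruction_emb):
        # Fuse perception and language by concatenation; real systems often
        # use FiLM, cross-attention, or a pretrained VLM here instead.
        return self.head(torch.cat([obs_emb, instruction_emb], dim=-1))
```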
Embodied Question Answering
We present a new AI task -- Embodied Question Answering (EmbodiedQA) -- where
an agent is spawned at a random location in a 3D environment and asked a
question ("What color is the car?"). In order to answer, the agent must first
intelligently navigate to explore the environment, gather information through
first-person (egocentric) vision, and then answer the question ("orange").
This challenging task requires a range of AI skills -- active perception,
language understanding, goal-driven navigation, commonsense reasoning, and
grounding of language into actions. In this work, we develop the environments,
end-to-end-trained reinforcement learning agents, and evaluation protocols for
EmbodiedQA.
Comment: 20 pages, 13 figures, Webpage: https://embodiedqa.org
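The episode structure described above (spawn at a random location, explore, then answer) can be summarized in a short schematic; the env and agent interfaces here are hypothetical placeholders, not the paper's actual API.

```python
# Schematic of an EmbodiedQA episode: navigate first, then answer.

def embodied_qa_episode(env, agent, question, max_steps=100):
    obs = env.reset()                      # spawn at a random location
    for _ in range(max_steps):
        action = agent.act(obs, question)  # egocentric frame + question
        if action == "STOP":               # agent decides it has seen enough
            break
        obs = env.step(action)             # move/turn in the 3D scene
    return agent.answer(obs, question)     # e.g. "orange"
```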
Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis
Building general-purpose robots that can operate seamlessly, in any
environment, with any object, and utilizing various skills to complete diverse
tasks has been a long-standing goal in Artificial Intelligence. Unfortunately,
most existing robotic systems remain constrained: designed for specific tasks,
trained on specific datasets, and deployed within specific environments. These
systems usually require extensively labeled data, rely on task-specific models,
suffer from numerous generalization issues when deployed in real-world
scenarios, and struggle to remain robust to distribution shifts.
Motivated by the impressive open-set performance and content generation
capabilities of web-scale, large-capacity pre-trained models (i.e., foundation
models) in research fields such as Natural Language Processing (NLP) and
Computer Vision (CV), we devote this survey to exploring (i) how these existing
foundation models from NLP and CV can be applied to the field of robotics, and
also exploring (ii) what a robotics-specific foundation model would look like.
We begin by providing an overview of what constitutes a conventional robotic
system and the fundamental barriers to making it universally applicable. Next,
we establish a taxonomy to discuss current work exploring ways to leverage
existing foundation models for robotics and develop ones catered to robotics.
Finally, we discuss key challenges and promising future directions in using
foundation models for enabling general-purpose robotic systems. We encourage
readers to view our living GitHub repository of resources, including papers
reviewed in this survey as well as related projects and repositories for
developing foundation models for robotics.