Visual Question Answering: A Survey of Methods and Datasets
Visual Question Answering (VQA) is a challenging task that has received
increasing attention from both the computer vision and the natural language
processing communities. Given an image and a question in natural language, it
requires reasoning over visual elements of the image and general knowledge to
infer the correct answer. In the first part of this survey, we examine the
state of the art by comparing modern approaches to the problem. We classify
methods by their mechanism to connect the visual and textual modalities. In
particular, we examine the common approach of combining convolutional and
recurrent neural networks to map images and questions to a common feature
space. We also discuss memory-augmented and modular architectures that
interface with structured knowledge bases. In the second part of this survey,
we review the datasets available for training and evaluating VQA systems. The
various datasets contain questions at different levels of complexity, which
require different capabilities and types of reasoning. We examine in depth the
question/answer pairs from the Visual Genome project, and evaluate the
relevance of the structured annotations of images with scene graphs for VQA.
Finally, we discuss promising future directions for the field, in particular
the connection to structured knowledge bases and the use of natural language
processing models.
Comment: 25 pages
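The common CNN+RNN approach the survey describes can be sketched minimally: project a CNN image feature and an RNN question feature into a shared space, fuse them elementwise, and score candidate answers. All dimensions and the random weights below are illustrative assumptions, not details from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a 2048-d CNN pooling feature for the image and a
# 512-d final LSTM hidden state for the question (stand-in random values).
img_feat = rng.standard_normal(2048)
q_feat = rng.standard_normal(512)

# Learned projections into a shared 1024-d feature space (random here).
W_img = rng.standard_normal((1024, 2048)) * 0.01
W_q = rng.standard_normal((1024, 512)) * 0.01

# Map both modalities into the common space and fuse elementwise,
# as in classic CNN+RNN VQA baselines.
joint = np.tanh(W_img @ img_feat) * np.tanh(W_q @ q_feat)

# A linear classifier over a fixed answer vocabulary would follow;
# 3000 candidate answers is a typical (assumed) vocabulary size.
W_ans = rng.standard_normal((3000, 1024)) * 0.01
scores = W_ans @ joint
answer_idx = int(np.argmax(scores))
```

In a real system the projections and classifier are trained end-to-end; the sketch only shows the joint-embedding structure.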
Learning Visual Question Answering by Bootstrapping Hard Attention
Attention mechanisms in biological perception are thought to select subsets
of perceptual information for more sophisticated processing which would be
prohibitive to perform on all sensory inputs. In computer vision, however,
there has been relatively little exploration of hard attention, where some
information is selectively ignored, in spite of the success of soft attention,
where information is re-weighted and aggregated, but never filtered out. Here,
we introduce a new approach for hard attention and find it achieves very
competitive performance on a recently released visual question answering
dataset, equalling and in some cases surpassing similar soft attention
architectures while entirely ignoring some features. Even though the hard
attention mechanism is thought to be non-differentiable, we found that the
feature magnitudes correlate with semantic relevance, and provide a useful
signal for our mechanism's attentional selection criterion. Because hard
attention selects important features of the input information, it can also be
more efficient than analogous soft attention mechanisms. This is especially
important for recent approaches that use non-local pairwise operations, whereby
computational and memory costs are quadratic in the size of the set of
features.
Comment: ECCV 201
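The soft/hard contrast described above can be sketched as follows: soft attention re-weights every spatial feature, while hard attention keeps only the top-k features ranked by their L2 norm, the magnitude-as-relevance signal the abstract mentions. Feature-map dimensions and k are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature map: 49 spatial locations x 256 channels.
feats = rng.standard_normal((49, 256))

# Soft attention: every location contributes, re-weighted by a softmax.
logits = feats @ rng.standard_normal(256)
weights = np.exp(logits - logits.max())
weights /= weights.sum()
soft_pooled = weights @ feats              # weighted sum over all 49 locations

# Hard attention: rank locations by feature magnitude (L2 norm), keep the
# top-k, and discard the rest entirely.
k = 16
norms = np.linalg.norm(feats, axis=1)
top_k = np.argsort(norms)[-k:]
hard_pooled = feats[top_k].mean(axis=0)    # pooled from only 16 of 49 locations
```

This also illustrates the efficiency point: a non-local pairwise operation over the selected set costs O(k^2) rather than O(n^2) over all locations.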
Learning and Reasoning for Robot Sequential Decision Making under Uncertainty
Robots frequently face complex tasks that require more than one action, making
sequential decision-making (SDM) capabilities necessary. The key
contribution of this work is a robot SDM framework, called LCORPP, that
supports the simultaneous capabilities of supervised learning for passive state
estimation, automated reasoning with declarative human knowledge, and planning
under uncertainty toward achieving long-term goals. In particular, we use a
hybrid reasoning paradigm to refine the state estimator, and provide
informative priors for the probabilistic planner. In experiments, a mobile
robot is tasked with estimating human intentions using their motion
trajectories, declarative contextual knowledge, and human-robot interaction
(dialog-based and motion-based). Results suggest that, in efficiency and
accuracy, our framework performs better than its no-learning and no-reasoning
counterparts in office environments.
Comment: In Proceedings of the 34th AAAI Conference on Artificial Intelligence,
202
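One way to read the "informative priors for the probabilistic planner" step is as a Bayesian fusion of a learned likelihood with a knowledge-derived prior. The sketch below is an assumption about the general idea, not the LCORPP algorithm itself; the intention labels and probability values are invented for illustration.

```python
import numpy as np

# Hypothetical 3-way human-intention estimate
# (e.g. "pass by", "approach", "interact").
# Likelihoods from a trained trajectory classifier (stand-in values):
learned_likelihood = np.array([0.2, 0.5, 0.3])

# Informative prior from declarative contextual knowledge, e.g. reasoning
# that a person walking toward the robot in an office likely wants to interact:
knowledge_prior = np.array([0.1, 0.2, 0.7])

# Bayes fusion: the refined belief that would be handed to the planner.
posterior = learned_likelihood * knowledge_prior
posterior /= posterior.sum()
```

The fused posterior lets the planner start from a sharper belief than either the learned estimate or the declarative knowledge alone.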