Learning Cross-modal Context Graph for Visual Grounding
Visual grounding is a ubiquitous building block in many vision-language tasks
and yet remains challenging due to large variations in visual and linguistic
features of grounding entities, strong context effects, and the resulting
semantic ambiguities. Prior works typically focus on learning representations
of individual phrases with limited context information. To address their
limitations, this paper proposes a language-guided graph representation to
capture the global context of grounding entities and their relations, and
develops a cross-modal graph-matching strategy for the multiple-phrase visual
grounding task. In particular, we introduce a modular graph neural network to
compute context-aware representations of phrases and object proposals
respectively via message propagation, followed by a graph-based matching module
to generate globally consistent localization of grounding phrases. We train the
entire graph neural network jointly in a two-stage strategy and evaluate it on
the Flickr30K Entities benchmark. Extensive experiments show that our method
outperforms the prior state of the art by a sizable margin, evidencing the
efficacy of our grounding framework. Code is available at
"https://github.com/youngfly11/LCMCG-PyTorch".
Comment: AAAI-202
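The two-stage pipeline this abstract describes, message propagation to build context-aware node features followed by cross-modal matching of phrases to proposals, can be illustrated with a toy numpy sketch. This is not the paper's actual architecture: the mean-aggregation update, the feature dimensions, and the greedy cosine-similarity argmax are all simplifying assumptions.

```python
import numpy as np

def propagate(node_feats, adj, steps=2):
    """Toy message passing: each node mixes its own features with the
    mean of its neighbors' features for a few steps."""
    h = node_feats.astype(float)
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    for _ in range(steps):
        h = 0.5 * h + 0.5 * (adj @ h) / deg
    return h

def match(phrase_feats, box_feats):
    """Greedy cross-modal matching: assign each phrase to the object
    proposal with the highest cosine similarity."""
    p = phrase_feats / np.linalg.norm(phrase_feats, axis=1, keepdims=True)
    b = box_feats / np.linalg.norm(box_feats, axis=1, keepdims=True)
    return (p @ b.T).argmax(axis=1)

# Hypothetical usage: 4 phrases, 4 proposals, 8-dim features,
# a fully connected phrase graph.
rng = np.random.default_rng(0)
phrases = rng.normal(size=(4, 8))
boxes = rng.normal(size=(4, 8))
adj = np.ones((4, 4)) - np.eye(4)
assignment = match(propagate(phrases, adj), boxes)
```

The real model replaces the fixed mean aggregation with learned, language-guided message functions and replaces the greedy argmax with a globally consistent graph-matching module.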
Referring Expression Comprehension: A Survey of Methods and Datasets
Referring expression comprehension (REC) aims to localize a target object in
an image described by a referring expression phrased in natural language.
Unlike the object detection task, in which the queried object labels are
pre-defined, the REC problem can only observe its queries at test time. It is
thus more challenging than a conventional computer vision problem. This task
has attracted a lot of attention from both the computer vision and natural
language processing communities, and several lines of work have been proposed,
ranging from CNN-RNN models and modular networks to complex graph-based models.
In this survey, we
first examine the state of the art by comparing modern approaches to the
problem. We classify methods by their mechanism to encode the visual and
textual modalities. In particular, we examine the common approach of jointly
embedding images and expressions into a common feature space. We also discuss
modular architectures and graph-based models that interface with structured
graph representations. In the second part of this survey, we review the datasets
available for training and evaluating REC systems. We then group results
according to the datasets, backbone models, and settings so that they can be
fairly
compared. Finally, we discuss promising future directions for the field, in
particular compositional referring expression comprehension, which requires
longer reasoning chains to address.
Comment: Accepted to IEEE TM
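The joint-embedding family of methods the survey discusses can be sketched minimally in numpy: both modalities are projected into a shared space and candidates are scored by similarity. This is an illustrative assumption, not any specific published model; the linear projections and cosine scoring stand in for whatever learned encoders a real system would use.

```python
import numpy as np

def embed(feat, W):
    """Project a feature vector into the shared space and L2-normalize,
    so that a dot product between embeddings is a cosine similarity."""
    z = W @ feat
    return z / np.linalg.norm(z)

def comprehend(expr_feat, box_feats, W_txt, W_img):
    """Score each candidate box against the referring expression in the
    shared space; return the index of the best-scoring box."""
    t = embed(expr_feat, W_txt)
    scores = [embed(b, W_img) @ t for b in box_feats]
    return int(np.argmax(scores))

# Hypothetical usage: a 32-dim expression feature, five 64-dim box
# features, and random (untrained) projections into a 16-dim space.
rng = np.random.default_rng(1)
W_txt = rng.normal(size=(16, 32))
W_img = rng.normal(size=(16, 64))
best = comprehend(rng.normal(size=32), rng.normal(size=(5, 64)), W_txt, W_img)
```

In practice the projections are learned (e.g. with a ranking loss) so that an expression lands near the box it refers to; the modular and graph-based approaches in the survey refine this by structuring the embedding around parsed phrases and object relations.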