Referring Expression Comprehension: A Survey of Methods and Datasets
Referring expression comprehension (REC) aims to localize a target object in
an image described by a referring expression phrased in natural language.
Unlike object detection, where the queried object labels are pre-defined, REC
can only observe the queries at test time. It is thus more challenging than a
conventional computer vision problem. This task has attracted a lot of
attention from both the computer vision and natural language processing
communities, and several lines of work have been proposed, from CNN-RNN models
and modular networks to complex graph-based models. In this survey, we first
examine the state of the art by comparing modern approaches to the problem. We
classify methods by the mechanisms they use to encode the visual and textual
modalities. In particular, we examine the common approach of jointly embedding
images and expressions into a common feature space. We also discuss modular
architectures and graph-based models that interface with structured graph
representations. In the second part of this survey, we review the datasets
available for training and evaluating REC systems. We then group results
according to datasets, backbone models, and settings so that they can be
fairly compared. Finally, we discuss promising future directions for the
field, in particular compositional referring expression comprehension, which
requires longer reasoning chains to address.
Comment: Accepted to IEEE TM
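The joint-embedding family surveyed above can be reduced to a simple recipe: project region features and expression features into one shared space and rank regions by similarity. The following is a minimal numpy sketch of that idea only, not any surveyed model's implementation; all dimensions, the random features, and the projection matrices `W_v`/`W_t` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Project features into the joint space and L2-normalize them."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Hypothetical inputs: 5 candidate regions with 512-d visual features,
# one expression with a 256-d sentence embedding, a 128-d joint space.
regions = rng.standard_normal((5, 512))
expression = rng.standard_normal((1, 256))
W_v = rng.standard_normal((512, 128)) * 0.02   # visual projection (untrained)
W_t = rng.standard_normal((256, 128)) * 0.02   # textual projection (untrained)

v = embed(regions, W_v)            # (5, 128)
t = embed(expression, W_t)         # (1, 128)
scores = (v @ t.T).ravel()         # cosine similarity per region
pred = int(np.argmax(scores))      # region grounded to the expression
```

In a trained model, `W_v` and `W_t` would be learned so that the referred region's embedding lands closest to the expression embedding; here they are random, so only the mechanics are shown.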
What Goes beyond Multi-modal Fusion in One-stage Referring Expression Comprehension: An Empirical Study
Most existing work on one-stage referring expression comprehension (REC)
focuses on multi-modal fusion and reasoning, while the influence of other
factors in this task lacks in-depth exploration. To fill this gap, we conduct
an empirical study in this paper. Concretely, we first build a very simple REC
network called SimREC and ablate 42 candidate designs/settings covering the
entire process of one-stage REC, from network design to model training.
Afterwards, we conduct over 100 experimental trials on three benchmark REC
datasets. The extensive experimental results not only reveal the key factors
that affect REC performance in addition to multi-modal fusion, e.g.,
multi-scale features and data augmentation, but also yield some findings that
run counter to conventional understanding. For example, as a vision and
language (V&L) task, REC is less impacted by language priors. In addition,
with a proper combination of these findings, we can improve the performance of
SimREC by a large margin, e.g., +27.12% on RefCOCO+, outperforming all
existing REC methods. The most encouraging finding, however, is that with much
less training overhead and far fewer parameters, SimREC can still achieve
better performance than a set of large-scale pre-trained models, e.g., UNITER
and VILLA, highlighting the special role of REC in existing V&L research.
AttnGrounder: Talking to Cars with Attention
We propose Attention Grounder (AttnGrounder), a single-stage, end-to-end
trainable model for the task of visual grounding. Visual grounding aims to
localize a specific object in an image based on a given natural language text
query. Unlike previous methods that use the same text representation for every
image region, we use a visual-text attention module that relates each word in
the given query with every region in the corresponding image to construct a
region-dependent text representation. Furthermore, to improve the localization
ability of our model, we use our visual-text attention module to generate an
attention mask around the referred object. The attention mask is trained as an
auxiliary task using a rectangular mask generated from the provided
ground-truth coordinates. We evaluate AttnGrounder on the Talk2Car dataset and
show an improvement of 3.26% over existing methods.
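The word-region attention described above can be sketched in a few lines: every region attends over every query word, giving each region its own mixture of word features. This is a minimal numpy illustration of the general mechanism, not the AttnGrounder architecture itself; the shared 64-d feature dimension and the way the mask is derived are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
words   = rng.standard_normal((6, 64))    # one embedding per query word
regions = rng.standard_normal((10, 64))   # one feature per image region

# Scaled dot-product attention of every region over every word: (10, 6).
attn = softmax(regions @ words.T / np.sqrt(64), axis=-1)

# Region-dependent text representation: each region receives its own
# weighted mixture of word features, shape (10, 64).
region_text = attn @ words

# A crude per-region relevance score (how sharply a region attends to
# some word); a real model would supervise a spatial mask instead.
relevance = attn.max(axis=-1)
```

Because each row of `attn` is a distribution over words, two different regions generally end up with different text representations, which is exactly the contrast with methods that reuse one sentence vector everywhere.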
RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data
In this paper, we introduce the task of visual grounding for remote sensing
data (RSVG). RSVG aims to localize the referred objects in remote sensing (RS)
images with the guidance of natural language. To retrieve rich information
from RS imagery using natural language, many research tasks, such as RS image
visual question answering, RS image captioning, and RS image-text retrieval,
have been extensively investigated. However, object-level visual grounding on
RS images is still under-explored. Thus, in this work, we propose to construct
a dataset and explore deep learning models for the RSVG task. Specifically,
our contributions can be summarized as follows. 1) We build a new large-scale
benchmark dataset for RSVG, termed RSVGD, to fully advance research on RSVG.
This new dataset includes image/expression/box triplets for training and
evaluating visual grounding models. 2) We benchmark extensive state-of-the-art
(SOTA) natural-image visual grounding methods on the constructed RSVGD
dataset and provide insightful analyses based on the results. 3) We propose a
novel transformer-based Multi-Level Cross-Modal feature learning (MLCM)
module. Remotely sensed images usually exhibit large scale variations and
cluttered backgrounds. To deal with the scale-variation problem, the MLCM
module takes advantage of multi-scale visual features and multi-granularity
textual embeddings to learn more discriminative representations. To cope with
the cluttered-background problem, MLCM adaptively filters out irrelevant noise
and enhances salient features. In this way, our proposed model can incorporate
more effective multi-level and multi-modal features to boost performance.
Furthermore, this work also provides useful insights for developing better
RSVG models. The dataset and code will be publicly available at
https://github.com/ZhanYang-nwpu/RSVG-pytorch.
Comment: 12 pages, 10 figure
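One simple way to picture the "multi-scale features plus text-conditioned noise filtering" idea above is a text-driven gate applied to each pyramid level before fusion. The sketch below is a loose numpy illustration of that pattern under assumed shapes; it is not the MLCM module, which is transformer-based and learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
d = 32
# Hypothetical visual feature pyramid: three levels, flattened to
# (positions, channels) with 64, 16, and 4 spatial positions.
levels = [rng.standard_normal((n, d)) for n in (64, 16, 4)]
text = rng.standard_normal(d)   # pooled expression embedding

# Text-conditioned gating: down-weight positions that respond weakly to
# the expression (filtering clutter), then fuse all levels into one
# multi-scale feature set for downstream grounding.
gated = [feats * sigmoid(feats @ text / np.sqrt(d))[:, None] for feats in levels]
fused = np.concatenate(gated, axis=0)   # (84, 32)
```

Because every gate lies in (0, 1), the operation can only attenuate features, never amplify them, which is the intended "filter irrelevant noise" behavior in miniature.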
Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding
The prevailing framework for solving referring expression grounding is based
on a two-stage process: 1) detecting proposals with an object detector and 2)
grounding the referent to one of the proposals. Existing two-stage solutions
mostly focus on the grounding step, which aims to align the expressions with
the proposals. In this paper, we argue that these methods overlook an obvious
mismatch between the roles of proposals in the two stages: they generate
proposals solely based on detection confidence (i.e., expression-agnostic),
while hoping that the proposals contain all the right instances mentioned in
the expression (i.e., expression-aware). Due to this mismatch, current
two-stage methods suffer from a severe performance drop between detected and
ground-truth proposals. To this end, we propose Ref-NMS, the first method to
yield expression-aware proposals at the first stage. Ref-NMS regards all nouns
in the expression as critical objects and introduces a lightweight module to
predict a score for aligning each box with a critical object. These scores can
guide the NMS operation to filter out boxes irrelevant to the expression,
increasing the recall of critical objects and resulting in significantly
improved grounding performance. Since Ref-NMS is agnostic to the grounding
step, it can be easily integrated into any state-of-the-art two-stage method.
Extensive ablation studies on several backbones, benchmarks, and tasks
consistently demonstrate the superiority of Ref-NMS. Codes are available at:
https://github.com/ChopinSharp/ref-nms.
Comment: Appears in AAAI 2021
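The core idea, letting an expression-relatedness score steer proposal suppression, can be sketched as a greedy NMS that ranks boxes by a fused score instead of detection confidence alone. This is a minimal illustration under assumed inputs, not the Ref-NMS implementation: the multiplicative fusion and the pre-computed `expr_score` array are stand-ins for the paper's learned lightweight module.

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def ref_nms(boxes, det_conf, expr_score, iou_thr=0.5):
    """Greedy NMS ranked by a fused detection x expression score.

    Boxes that overlap an already-kept, higher-scoring box are dropped,
    so expression-relevant boxes survive even with modest detector
    confidence. Returns the kept indices in ranking order.
    """
    fused = det_conf * expr_score          # one simple fusion choice
    order = np.argsort(-fused)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Toy usage: two overlapping boxes plus one distant box. The first box
# has the highest detector confidence but is unrelated to the expression,
# so the expression-aware ranking suppresses it.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
det_conf   = np.array([0.9, 0.8, 0.7])   # detector confidence
expr_score = np.array([0.1, 0.9, 0.8])   # expression-relatedness
kept = ref_nms(boxes, det_conf, expr_score)
```

With confidence-only ranking, the expression-irrelevant first box would have suppressed its expression-relevant neighbor; fusing in the relatedness score flips that outcome, which is the recall gain the abstract describes.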
Advancing Visual Grounding with Scene Knowledge: Benchmark and Method
Visual grounding (VG) aims to establish fine-grained alignment between vision
and language. Ideally, it can serve as a testbed for vision-and-language
models to evaluate their understanding of images and texts and their reasoning
abilities over the joint space. However, most existing VG datasets are
constructed using simple description texts, which do not require sufficient
reasoning over the images and texts. This has been demonstrated in a recent
study (Luo et al., 2022), where a simple LSTM-based text encoder without
pretraining can achieve state-of-the-art performance on mainstream VG
datasets. Therefore, in this paper, we propose a novel benchmark of Scene
Knowledge-guided Visual Grounding (SK-VG), where the image content and
referring expressions alone are not sufficient to ground the target objects,
forcing the models to reason over long-form scene knowledge. To perform this
task, we propose two approaches to accept the triple-type input: the former
embeds knowledge into the image features before the image-query interaction,
while the latter leverages linguistic structure to assist in computing the
image-text matching. We conduct extensive experiments to analyze the above
methods and show that the proposed approaches achieve promising results but
still leave room for improvement, in terms of both performance and
interpretability. The dataset and code are available at
https://github.com/zhjohnchan/SK-VG.
Comment: Computer Vision and Natural Language Processing. 21 pages, 14
figures. CVPR-202