Conditional Image-Text Embedding Networks
This paper presents an approach for grounding phrases in images which jointly
learns multiple text-conditioned embeddings in a single end-to-end model. In
order to differentiate text phrases into semantically distinct subspaces, we
propose a concept weight branch that automatically assigns phrases to
embeddings, whereas prior works predefine such assignments. Our proposed
solution simplifies the representation requirements for individual embeddings
and allows the underrepresented concepts to take advantage of the shared
representations before feeding them into concept-specific layers. Comprehensive
experiments verify the effectiveness of our approach across three phrase
grounding datasets, Flickr30K Entities, ReferIt Game, and Visual Genome, where
we obtain a (resp.) 4%, 3%, and 4% improvement in grounding performance over a
strong region-phrase embedding baseline.
Comment: ECCV 2018 accepted paper
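A minimal sketch of the text-conditioned embedding idea described above, assuming PyTorch; the number of subspaces, dimensions, and layer choices are placeholders rather than the paper's exact architecture. A concept weight branch softly assigns each phrase to one of K embedding subspaces, and the region-phrase score is the weighted sum of per-subspace similarities:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalEmbedding(nn.Module):
    """Sketch: shared projections feed K concept-specific heads; a small
    branch predicts soft assignment weights over the K subspaces."""
    def __init__(self, phrase_dim, region_dim, embed_dim=256, num_concepts=4):
        super().__init__()
        # shared projections applied before the concept-specific layers
        self.phrase_shared = nn.Linear(phrase_dim, embed_dim)
        self.region_shared = nn.Linear(region_dim, embed_dim)
        # one lightweight head per concept subspace
        self.phrase_heads = nn.ModuleList(
            [nn.Linear(embed_dim, embed_dim) for _ in range(num_concepts)])
        self.region_heads = nn.ModuleList(
            [nn.Linear(embed_dim, embed_dim) for _ in range(num_concepts)])
        # concept weight branch: soft assignment of phrases to subspaces
        self.concept_branch = nn.Linear(embed_dim, num_concepts)

    def forward(self, phrase_feat, region_feat):
        # phrase_feat: (B, phrase_dim), region_feat: (B, R, region_dim)
        p = F.relu(self.phrase_shared(phrase_feat))           # (B, D)
        r = F.relu(self.region_shared(region_feat))           # (B, R, D)
        weights = F.softmax(self.concept_branch(p), dim=-1)   # (B, K)
        scores = []
        for ph, rh in zip(self.phrase_heads, self.region_heads):
            pk = F.normalize(ph(p), dim=-1)                   # (B, D)
            rk = F.normalize(rh(r), dim=-1)                   # (B, R, D)
            scores.append(torch.einsum('bd,brd->br', pk, rk)) # cosine per region
        scores = torch.stack(scores, dim=-1)                  # (B, R, K)
        return (scores * weights.unsqueeze(1)).sum(-1)        # (B, R)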
Object Referring in Videos with Language and Human Gaze
We investigate the problem of object referring (OR), i.e., localizing a target
object in a visual scene accompanied by a language description. Humans perceive
the world more as continuous video snippets than as static images, and describe
objects not only by their appearance, but also by their spatio-temporal context
and motion features. Humans also gaze at the object when they issue a referring
expression. Existing works for OR mostly focus on static images only, which
fall short in providing many such cues. This paper addresses OR in videos with
language and human gaze. To that end, we present a new video dataset for OR,
with 30,000 objects over 5,000 stereo video sequences annotated for their
descriptions and gaze. We further propose a novel network model for OR in
videos, by integrating appearance, motion, gaze, and spatio-temporal context
into one network. Experimental results show that our method effectively
utilizes motion cues, human gaze, and spatio-temporal context. Our method
outperforms previous OR methods. For the dataset and code, please refer to
https://people.ee.ethz.ch/~arunv/ORGaze.html.
Comment: Accepted to CVPR 2018, 10 pages, 6 figures
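The cue-fusion idea in this abstract can be sketched roughly as follows (PyTorch, with hypothetical precomputed appearance, motion, gaze, and spatio-temporal context features per candidate object; the dimensions and fusion scheme are assumptions, not the authors' network):

import torch
import torch.nn as nn

class MultiCueGrounding(nn.Module):
    """Sketch: project each visual cue and the language feature into a shared
    space, fuse the cues by summation, and score each candidate object."""
    def __init__(self, lang_dim=300, cue_dim=512, hidden=256):
        super().__init__()
        self.lang_proj = nn.Linear(lang_dim, hidden)
        # one projection per cue; assumes precomputed per-object cue features
        self.cue_projs = nn.ModuleDict({
            name: nn.Linear(cue_dim, hidden)
            for name in ('appearance', 'motion', 'gaze', 'context')
        })
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, lang_feat, cues):
        # lang_feat: (B, lang_dim); cues: dict of (B, N, cue_dim) per cue name
        l = self.lang_proj(lang_feat)                                  # (B, H)
        v = sum(self.cue_projs[n](cues[n]) for n in self.cue_projs)   # (B, N, H)
        l = l.unsqueeze(1).expand_as(v)            # broadcast over N candidates
        return self.scorer(torch.cat([v, l], dim=-1)).squeeze(-1)     # (B, N)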
Learning Cross-modal Context Graph for Visual Grounding
Visual grounding is a ubiquitous building block in many vision-language tasks
and yet remains challenging due to large variations in visual and linguistic
features of grounding entities, strong context effect and the resulting
semantic ambiguities. Prior works typically focus on learning representations
of individual phrases with limited context information. To address their
limitations, this paper proposes a language-guided graph representation to
capture the global context of grounding entities and their relations, and
develops a cross-modal graph matching strategy for the multiple-phrase visual
grounding task. In particular, we introduce a modular graph neural network to
compute context-aware representations of phrases and object proposals
respectively via message propagation, followed by a graph-based matching module
to generate globally consistent localization of grounding phrases. We train the
entire graph neural network jointly in a two-stage strategy and evaluate it on
the Flickr30K Entities benchmark. Extensive experiments show that our method
outperforms the prior state of the art by a sizable margin, evidencing the
efficacy of our grounding framework. Code is available at
"https://github.com/youngfly11/LCMCG-PyTorch".Comment: AAAI-202