Parallel Attention: A Unified Framework for Visual Object Discovery through Dialogs and Queries
Recognising objects according to a pre-defined, fixed set of class labels has
been well studied in Computer Vision. However, there are many practical
applications in which the subjects of interest are not known beforehand, or
are not so easily delineated. In many of these cases natural language dialog
is a natural way to specify the subject of interest, and the task of achieving
this capability (a.k.a. Referring Expression Comprehension) has recently
attracted attention. To this end we propose a unified framework, the
ParalleL AttentioN (PLAN) network, to discover the object in an image that is
referred to by natural language expressions of variable length, from short
phrase queries to long multi-round dialogs. The PLAN network has two
attention mechanisms that relate parts of the expressions to both the global
visual content and also directly to object candidates. Furthermore, the
attention mechanisms are recurrent, making the referring process visualizable
and explainable. The attended information from these dual sources is combined
to reason about the referred object. The two attention mechanisms can be
trained in parallel, and we find that the combined system outperforms the
state of the art on several benchmark datasets with language input of varying
lengths, such as RefCOCO, RefCOCO+, and GuessWhat?!.
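
The abstract describes the architecture only in prose, so the following is a
minimal PyTorch sketch of the parallel-attention idea: a recurrent encoder over
the expression feeds two attention branches, one over global image regions and
one directly over object candidates, whose attended outputs are fused to score
the candidates. All module choices, dimensions, and names (PLANSketch, the GRU
encoder, the dot-product scoring step) are illustrative assumptions, not the
authors' implementation.

import torch
import torch.nn as nn

class PLANSketch(nn.Module):
    # Two parallel attention branches: one over global image regions,
    # one directly over object candidates. A GRU over the expression
    # makes the process step-wise (and hence visualizable per step).
    def __init__(self, d_txt=300, d_vis=512, d_hid=512, heads=8):
        super().__init__()
        self.txt_rnn = nn.GRU(d_txt, d_hid, batch_first=True)
        self.vis_proj = nn.Linear(d_vis, d_hid)
        self.glob_attn = nn.MultiheadAttention(d_hid, heads, batch_first=True)
        self.obj_attn = nn.MultiheadAttention(d_hid, heads, batch_first=True)
        self.fuse = nn.Linear(2 * d_hid, d_hid)

    def forward(self, expr, img_grid, obj_feats):
        # expr:      (B, T, d_txt) embedded expression / dialog tokens
        # img_grid:  (B, R, d_vis) global visual content (spatial regions)
        # obj_feats: (B, K, d_vis) object candidate features
        h, _ = self.txt_rnn(expr)             # (B, T, d_hid)
        q = h[:, -1:, :]                      # final state as the query
        regions = self.vis_proj(img_grid)     # (B, R, d_hid)
        cands = self.vis_proj(obj_feats)      # (B, K, d_hid)
        g, _ = self.glob_attn(q, regions, regions)   # attend global content
        o, _ = self.obj_attn(q, cands, cands)        # attend candidates
        fused = self.fuse(torch.cat([g, o], dim=-1)) # (B, 1, d_hid)
        # Score each candidate against the fused dual-source representation.
        logits = torch.bmm(cands, fused.transpose(1, 2)).squeeze(-1)  # (B, K)
        return logits

In use, the (B, K) logits would be trained with cross-entropy against the
index of the ground-truth referred object; the per-step attention weights are
what make the referring process visualizable.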
DMRM: A Dual-channel Multi-hop Reasoning Model for Visual Dialog
Visual Dialog is a vision-language task that requires an AI agent to engage
in a conversation with humans grounded in an image. It remains a challenging
task since it requires the agent to fully understand a given question before
making an appropriate response not only from the textual dialog history, but
also from the visually-grounded information. Previous models typically
leverage single-hop or single-channel reasoning to deal with this complex
multimodal reasoning task, which is intuitively insufficient. In this paper,
we therefore propose a novel and more powerful Dual-channel Multi-hop
Reasoning Model for Visual Dialog, named DMRM. DMRM synchronously captures
information from the dialog history and the image to enrich the semantic
representation of the question by exploiting dual-channel reasoning.
Specifically, DMRM maintains a dual channel to obtain the question- and
history-aware image features and the question- and image-aware dialog history
features by a multi-hop reasoning process in each channel. Additionally, we
design an effective multimodal attention mechanism to further enhance the
decoder to generate more accurate responses. Experimental results on the
VisDial v0.9 and v1.0 datasets demonstrate that the proposed model is
effective and outperforms competing models by a significant margin.
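
As a rough illustration of dual-channel multi-hop reasoning, the sketch below
alternates two attention channels for a fixed number of hops: one produces
question- and history-aware image features, the other question- and
image-aware history features, and both enrich the question representation
passed to the decoder. It is a minimal sketch under assumed dimensions, with
multi-head attention standing in for the paper's attention modules; names such
as DualChannelMultiHop and the residual fusion are hypothetical.

import torch
import torch.nn as nn

class DualChannelMultiHop(nn.Module):
    # One channel refines the question via image attention, the other via
    # dialog-history attention; the hops alternate between the two channels.
    def __init__(self, d=512, hops=2, heads=8):
        super().__init__()
        self.hops = hops
        self.img_attn = nn.ModuleList(
            nn.MultiheadAttention(d, heads, batch_first=True) for _ in range(hops))
        self.his_attn = nn.ModuleList(
            nn.MultiheadAttention(d, heads, batch_first=True) for _ in range(hops))
        self.fuse = nn.Linear(3 * d, d)

    def forward(self, q, img, his):
        # q:   (B, 1, d) question embedding
        # img: (B, R, d) image region features
        # his: (B, H, d) dialog-history utterance features
        q_img, q_his = q, q
        for t in range(self.hops):
            # Channel 1: question- and history-aware image features.
            a_img, _ = self.img_attn[t](q_img + q_his, img, img)
            q_img = q_img + a_img
            # Channel 2: question- and image-aware history features.
            a_his, _ = self.his_attn[t](q_his + q_img, his, his)
            q_his = q_his + a_his
        # Fuse both channels into an enriched question representation.
        return self.fuse(torch.cat([q, q_img, q_his], dim=-1))

The returned (B, 1, d) representation would then condition a generative or
discriminative answer decoder, as in the standard VisDial setting.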