6,744 research outputs found
Learning Cross-modal Context Graph for Visual Grounding
Visual grounding is a ubiquitous building block in many vision-language tasks
and yet remains challenging due to large variations in visual and linguistic
features of grounding entities, strong context effects, and the resulting
semantic ambiguities. Prior works typically focus on learning representations
of individual phrases with limited context information. To address their
limitations, this paper proposes a language-guided graph representation to
capture the global context of grounding entities and their relations, and
develops a cross-modal graph matching strategy for the multiple-phrase visual
grounding task. In particular, we introduce a modular graph neural network to
compute context-aware representations of phrases and object proposals
respectively via message propagation, followed by a graph-based matching module
to generate globally consistent localization of grounding phrases. We train the
entire graph neural network jointly in a two-stage strategy and evaluate it on
the Flickr30K Entities benchmark. Extensive experiments show that our method
outperforms the prior state of the art by a sizable margin, evidencing the
efficacy of our grounding framework. Code is available at
"https://github.com/youngfly11/LCMCG-PyTorch".Comment: AAAI-202
Multimodal Grounding for Language Processing
This survey discusses how recent developments in multimodal processing
facilitate conceptual grounding of language. We categorize the information flow
in multimodal processing with respect to cognitive models of human information
processing and analyze different methods for combining multimodal
representations. Based on this methodological inventory, we discuss the benefit
of multimodal grounding for a variety of language processing tasks and the
challenges that arise. We particularly focus on multimodal grounding of verbs
which play a crucial role in the compositional power of language.
Comment: The paper has been published in the Proceedings of the 27th
International Conference on Computational Linguistics. Please refer to this
version for citations:
https://www.aclweb.org/anthology/papers/C/C18/C18-1197
Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension
In this work, we introduce a novel algorithm for solving the textbook
question answering (TQA) task, which poses more realistic QA problems than
other recent tasks. We mainly focus on two related issues identified through
analysis of the TQA dataset. First, solving TQA problems requires comprehending
multi-modal contexts in complicated input data. To tackle this issue
of extracting knowledge features from long text lessons and merging them with
visual features, we establish a context graph from texts and images, and
propose a new module f-GCN based on graph convolutional networks (GCN). Second,
scientific terms are not spread across the chapters, and the subjects are split
in the TQA dataset. To overcome this so-called "out-of-domain" issue, before
learning the QA problems we introduce a novel self-supervised open-set learning process
without any annotations. The experimental results show that our model
significantly outperforms prior state-of-the-art methods. Moreover, ablation
studies validate that both methods of incorporating f-GCN for extracting
knowledge from multi-modal contexts and our newly proposed self-supervised
learning process are effective for TQA problems.
Comment: ACL 2019 camera-ready
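As a rough illustration of the kind of module the abstract refers to, the sketch below applies a single standard graph-convolution layer (in the Kipf & Welling form) to a context graph whose node features concatenate a text embedding with a visual embedding. It is not the paper's f-GCN; the feature sizes, the concatenation scheme, and the symmetric normalization are illustrative assumptions.

```python
# Rough sketch (not the paper's f-GCN): one graph-convolution layer over a
# context graph whose nodes carry concatenated text and visual features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # symmetric normalisation: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(-1).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return F.relu(self.lin(norm @ x))

# toy context graph: 4 nodes, text (300-d) and visual (512-d) features fused
text_feats, vis_feats = torch.randn(4, 300), torch.randn(4, 512)
nodes = torch.cat([text_feats, vis_feats], dim=-1)       # (4, 812) node features
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float)
layer = GraphConv(812, 256)
context = layer(nodes, adj)                               # (4, 256) fused node states
```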
A Survey on Interpretable Cross-modal Reasoning
In recent years, cross-modal reasoning (CMR), the process of understanding
and reasoning across different modalities, has emerged as a pivotal area with
applications spanning from multimedia analysis to healthcare diagnostics. As
the deployment of AI systems becomes more ubiquitous, the demand for
transparency and comprehensibility in these systems' decision-making processes
has intensified. This survey delves into the realm of interpretable cross-modal
reasoning (I-CMR), where the objective is not only to achieve high predictive
performance but also to provide human-understandable explanations for the
results. This survey presents a comprehensive overview of the typical methods
with a three-level taxonomy for I-CMR. Furthermore, this survey reviews the
existing CMR datasets with annotations for explanations. Finally, this survey
summarizes the challenges for I-CMR and discusses potential future directions.
In conclusion, this survey aims to catalyze the progress of this emerging
research area by providing researchers with a panoramic and comprehensive
perspective, illuminating the state of the art and discerning the
opportunities.
Harvesting Information from Captions for Weakly Supervised Semantic Segmentation
Since acquiring pixel-wise annotations for training convolutional neural
networks for semantic image segmentation is time-consuming, weakly supervised
approaches that only require class tags have been proposed. In this work, we
propose another form of supervision, namely image captions as they can be found
on the Internet. These captions have two advantages: they do not require the
additional curation needed for the clean class tags used by current
weakly supervised approaches, and they provide textual context for the classes
present in an image. To leverage this textual context, we deploy a multi-modal
network that learns a joint embedding of the visual representation of the image
and the textual representation of the caption. The network estimates text
activation maps (TAMs) for class names as well as compound concepts, i.e.
combinations of nouns and their attributes. The TAMs of compound concepts
describing classes of interest substantially improve the quality of the
estimated class activation maps which are then used to train a network for
semantic segmentation. We evaluate our method on the COCO dataset where it
achieves state-of-the-art results for weakly supervised image segmentation.
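To illustrate how a joint embedding can yield a text activation map of the kind described above, the sketch below projects word embeddings and a spatial CNN feature map into a shared space and scores each location by its similarity to a concept embedding. It is a minimal illustration rather than the authors' network; the layer sizes, the averaging of word vectors for compound concepts, and the cosine-similarity scoring are assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' network): a text
# activation map as per-location similarity between a concept embedding and
# spatial visual features in a shared joint space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextActivationMap(nn.Module):
    def __init__(self, word_dim=300, vis_dim=2048, joint_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(word_dim, joint_dim)
        self.vis_proj = nn.Conv2d(vis_dim, joint_dim, kernel_size=1)

    def forward(self, concept_emb, feat_map):
        # concept_emb: (word_dim,)   e.g. an average of noun + attribute vectors
        # feat_map:    (vis_dim, H, W) convolutional feature map of the image
        t = F.normalize(self.text_proj(concept_emb), dim=-1)           # (joint_dim,)
        v = F.normalize(self.vis_proj(feat_map.unsqueeze(0)), dim=1)   # (1, joint_dim, H, W)
        tam = (v * t.view(1, -1, 1, 1)).sum(dim=1).squeeze(0)          # (H, W) similarity map
        return tam

# toy usage: a compound concept ("noun + attribute") as the mean of word vectors
model = TextActivationMap()
concept = torch.randn(2, 300).mean(dim=0)     # stand-in for averaged word embeddings
features = torch.randn(2048, 14, 14)          # stand-in for a CNN feature map
heatmap = model(concept, features)            # (14, 14) text activation map
```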