Question-Conditioned Counterfactual Image Generation for VQA
While Visual Question Answering (VQA) models continue to push the state-of-the-art forward, they largely remain black boxes, failing to provide insight into how or why an answer is generated. In this ongoing work, we propose addressing this shortcoming by learning to generate counterfactual images for a VQA model: given a question-image pair, we wish to generate a new image such that i) the VQA model outputs a different answer, ii) the new image is minimally different from the original, and iii) the new image is realistic. Our hope is that providing such counterfactual examples allows users to investigate and understand the VQA model's internal mechanisms.
Comment: Accepted by the VQA Workshop at CVPR 201
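To make the three constraints above concrete, the following is a minimal, hypothetical sketch that casts counterfactual search as gradient-based optimization over an additive image perturbation. The `vqa_model` (returning answer logits) and `realism_score` (e.g., a GAN discriminator) are assumed stand-ins, and this is an illustration of the stated objectives rather than the authors' actual generation method.

```python
# Hypothetical sketch: search for a counterfactual image that (i) flips the
# VQA model's answer, (ii) stays close to the original image, and (iii) keeps
# a realism critic's score high. `vqa_model` and `realism_score` are stand-ins.
import torch
import torch.nn.functional as F

def generate_counterfactual(vqa_model, realism_score, image, question,
                            orig_answer, steps=200, lr=0.01,
                            lambda_dist=1.0, lambda_real=0.1):
    delta = torch.zeros_like(image, requires_grad=True)   # additive perturbation
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cf_image = (image + delta).clamp(0, 1)
        logits = vqa_model(cf_image, question)             # answer logits, shape (1, A)
        # (i) push probability mass away from the original answer
        flip_loss = -F.cross_entropy(logits, orig_answer)
        # (ii) keep the counterfactual close to the original image
        dist_loss = delta.pow(2).mean()
        # (iii) keep the image realistic according to the critic
        real_loss = -realism_score(cf_image).mean()
        loss = flip_loss + lambda_dist * dist_loss + lambda_real * real_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if logits.argmax(dim=-1).item() != orig_answer.item():
            break                                          # answer flipped; stop early
    return (image + delta).clamp(0, 1).detach()
```

The weights `lambda_dist` and `lambda_real` trade off how minimal and how realistic the counterfactual is against how strongly the answer is pushed to change.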
Counterfactual Samples Synthesizing for Robust Visual Question Answering
Although Visual Question Answering (VQA) has made impressive progress over the last few years, today's VQA models tend to capture superficial linguistic correlations in the training set and fail to generalize to test sets with different QA distributions. To reduce these language biases, several recent works introduce an auxiliary question-only model to regularize the training of the targeted VQA model, and achieve dominating performance on VQA-CP. However, due to the complexity of their design, current methods are unable to equip ensemble-based models with two indispensable characteristics of an ideal VQA model: 1) visual-explainable: the model should rely on the right visual regions when making decisions; 2) question-sensitive: the model should be sensitive to linguistic variations in the question. To this end, we propose a model-agnostic Counterfactual Samples Synthesizing (CSS) training scheme. CSS generates numerous counterfactual training samples by masking critical objects in images or words in questions and assigning different ground-truth answers. After training with the complementary samples (i.e., the original and generated samples), the VQA models are forced to focus on all critical objects and words, which significantly improves both visual-explainable and question-sensitive abilities. In turn, the performance of these models is further boosted. Extensive ablations have shown the effectiveness of CSS. In particular, building on top of the LMH model, we achieve a record-breaking performance of 58.95% on VQA-CP v2, with 6.5% gains.
Comment: Appears in CVPR 2020; code at https://github.com/yanxinzju/CSS-VQ
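As a rough illustration of the masking step described in this abstract, the sketch below builds one visual and one textual counterfactual sample from precomputed importance scores. The function name, the `MASK_TOKEN` placeholder, and the simplified answer reassignment are assumptions for illustration only; they do not reproduce the exact CSS procedure, which selects critical objects/words and derives the new ground-truth answers dynamically.

```python
# Simplified sketch of the CSS masking idea (not the exact CSS procedure):
# pick the most critical objects/words by an importance score, mask them,
# and mark the resulting samples as requiring a different ground-truth answer.
import torch

MASK_TOKEN = "[MASK]"  # assumed placeholder token for masked question words

def synthesize_counterfactuals(object_feats, question_tokens, answer_label,
                               obj_importance, word_importance, k=3):
    # V-CSS-style sample: zero out the k most critical visual object features
    topk_obj = torch.topk(obj_importance, k).indices
    cf_objects = object_feats.clone()
    cf_objects[topk_obj] = 0.0

    # Q-CSS-style sample: mask the k most critical question words
    topk_word = torch.topk(word_importance,
                           min(k, len(question_tokens))).indices.tolist()
    cf_question = [MASK_TOKEN if i in topk_word else tok
                   for i, tok in enumerate(question_tokens)]

    # Simplified answer reassignment: flag that the original label should no
    # longer be predicted (CSS itself computes the new ground truth dynamically).
    cf_answer = {"not_answer": answer_label}

    visual_cf = (cf_objects, question_tokens, cf_answer)
    textual_cf = (object_feats, cf_question, cf_answer)
    return visual_cf, textual_cf
```

Training then interleaves these counterfactual samples with the originals, so the model is penalized whenever it still produces the old answer after the evidence it should have relied on has been removed.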
Multimodal Research in Vision and Language: A Review of Current and Emerging Trends
Deep learning and its applications have driven impactful research and development across the diverse range of modalities present in real-world data. More recently, this has heightened research interest at the intersection of Vision and Language, with its numerous applications and fast-paced growth. In this paper, we present a detailed overview of the latest trends in research pertaining to visual and language modalities. We examine these applications through their task formulations and the approaches used to solve various problems in semantic perception and content generation. We also address task-specific trends, along with their evaluation strategies and upcoming challenges. Moreover, we shed some light on multi-disciplinary patterns and insights that have emerged in the recent past, directing this field towards more modular and transparent intelligent systems. This survey identifies the key trends shaping recent literature in VisLang research and attempts to unearth the directions in which the field is heading.