As multimodal learning finds applications in a wide variety of high-stakes
societal tasks, investigating the robustness of these models becomes important. Existing work
has focused on understanding the robustness of vision-and-language models to
imperceptible variations on benchmark tasks. In this work, we investigate the
robustness of multimodal classifiers to cross-modal dilutions, a plausible
variation. We develop a model that, given a multimodal (image + text) input,
generates additional dilution text that (a) maintains relevance and topical
coherence with the image and existing text, and (b) when added to the original
text, leads to misclassification of the multimodal input. Via experiments on
Crisis Humanitarianism and Sentiment Detection tasks, we find that the
performance of task-specific fusion-based multimodal classifiers drops by 23.3%
and 22.5%, respectively, in the presence of dilutions generated by our model.
Metric-based comparisons with several baselines and human evaluations indicate
that our dilutions show higher relevance and topical coherence, while
simultaneously being more effective at demonstrating the brittleness of the
multimodal classifiers. Our work aims to highlight and encourage further
research on the robustness of deep multimodal models to realistic variations,
especially in human-facing societal applications. The code and other resources
are available at https://claws-lab.github.io/multimodal-robustness/.

Comment: Accepted at the 2022 Conference on Empirical Methods in Natural
Language Processing (EMNLP); Full Paper (Oral)