Existing visual question answering methods tend to capture cross-modal
spurious correlations and fail to discover the true causal mechanisms that
support faithful reasoning based on the dominant visual evidence and the
question intention. Additionally, existing methods usually ignore cross-modal
event-level understanding, which requires jointly modeling event temporality,
causality, and dynamics. In this work, we focus on event-level
visual question answering from a new perspective, i.e., cross-modal causal
relational reasoning, by introducing causal intervention methods to discover
the true causal structures for visual and linguistic modalities. Specifically,
we propose a novel event-level visual question answering framework named
Cross-Modal Causal RelatIonal Reasoning (CMCIR) to achieve robust
causality-aware visual-linguistic question answering. To discover cross-modal
causal structures, we propose the Causality-aware Visual-Linguistic Reasoning (CVLR)
module, which collaboratively disentangles visual and linguistic
spurious correlations via front-door and back-door causal interventions.
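As an illustrative sketch only (not the paper's exact implementation), one of these interventions, the back-door adjustment P(Y|do(X)) = Σ_z P(Y|X, z)P(z), can be approximated in feature space by stratifying the input over a learned confounder dictionary and combining the strata under a prior. The class name BackdoorIntervention, the dictionary size, and the feature dimension below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackdoorIntervention(nn.Module):
    """Sketch of a feature-level back-door adjustment: stratify the input over a
    learned confounder dictionary and combine the strata weighted by a prior P(z).
    All names and sizes are illustrative assumptions, not the CVLR module itself."""

    def __init__(self, feat_dim: int = 512, num_confounders: int = 64):
        super().__init__()
        # Learned dictionary of confounder prototypes (e.g., dataset-level priors).
        self.confounders = nn.Parameter(torch.randn(num_confounders, feat_dim))
        # Uniform prior P(z); could instead be estimated from data statistics.
        self.register_buffer("prior", torch.full((num_confounders,), 1.0 / num_confounders))
        self.query = nn.Linear(feat_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim) confounded input features.
        # Similarity of the input to each confounder stratum, ~ P(Y | X, z).
        scores = torch.matmul(self.query(x), self.confounders.t()) / x.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)                              # (batch, num_confounders)
        # Weight each stratum by the prior P(z): expectation over confounders.
        context = torch.matmul(attn * self.prior, self.confounders)  # (batch, feat_dim)
        # Deconfounded feature: original input adjusted by the intervention context.
        return x + context
```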
To model the fine-grained interactions between linguistic semantics and
spatial-temporal representations, we build a Spatial-Temporal Transformer (STT)
that captures multi-modal co-occurrence interactions between visual and
linguistic content.
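A minimal sketch of this kind of co-occurrence interaction, under assumed token layouts, dimensions, and head counts that the abstract does not specify: visual spatial-temporal tokens attend to question tokens and vice versa.

```python
import torch
import torch.nn as nn

class CrossModalAttentionBlock(nn.Module):
    """Minimal cross-modal co-attention in the spirit of a spatial-temporal
    transformer: each modality queries the other. Hyperparameters are assumed."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.vis_to_lang = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.lang_to_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_l = nn.LayerNorm(dim)

    def forward(self, visual: torch.Tensor, language: torch.Tensor):
        # visual:   (batch, num_frames * num_regions, dim) spatial-temporal tokens
        # language: (batch, num_words, dim) question word tokens
        v_ctx, _ = self.vis_to_lang(query=visual, key=language, value=language)
        l_ctx, _ = self.lang_to_vis(query=language, key=visual, value=visual)
        # Residual updates keep each modality's own content while injecting the other's.
        return self.norm_v(visual + v_ctx), self.norm_l(language + l_ctx)
```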
To adaptively fuse the causality-aware visual and linguistic features, we
introduce a Visual-Linguistic Feature Fusion (VLFF) module that leverages
hierarchical linguistic semantic relations as guidance to adaptively learn
global semantic-aware visual-linguistic representations.
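As a hedged illustration of such adaptive fusion (the actual VLFF design may differ), a gate computed from the linguistic semantics can decide, per feature dimension, how much visual versus linguistic evidence enters the final representation; the class name SemanticGuidedFusion and the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SemanticGuidedFusion(nn.Module):
    """Illustrative semantic-guided adaptive fusion: a sigmoid gate derived from
    both modalities mixes visual and linguistic features element-wise.
    A sketch under assumed names and sizes, not the VLFF module itself."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.proj = nn.Linear(dim, dim)

    def forward(self, visual: torch.Tensor, linguistic: torch.Tensor) -> torch.Tensor:
        # visual, linguistic: (batch, dim) pooled causality-aware features.
        g = self.gate(torch.cat([visual, linguistic], dim=-1))  # gate values in [0, 1]
        fused = g * visual + (1.0 - g) * linguistic             # adaptive element-wise mix
        return self.proj(fused)
```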
Extensive experiments on four event-level datasets demonstrate the superiority
of our CMCIR in discovering visual-linguistic causal structures and achieving
robust event-level visual question answering.Comment: 17 pages, 9 figures. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible. The datasets, code and models
are available at https://github.com/YangLiu9208/CMCI