Hierarchical Cross-modal Transformer for RGB-D Salient Object Detection
Most existing RGB-D salient object detection (SOD) methods follow the
CNN-based paradigm, which is unable to model long-range dependencies across
space and modalities due to the natural locality of CNNs. Here we propose the
Hierarchical Cross-modal Transformer (HCT), a new multi-modal transformer, to
tackle this problem. Unlike previous multi-modal transformers that directly
connect all patches from the two modalities, we explore the cross-modal
complementarity hierarchically to respect the modality gap and spatial
discrepancy in unaligned regions. Specifically, we propose to use intra-modal
self-attention to explore complementary global contexts, and measure
spatial-aligned inter-modal attention locally to capture cross-modal
correlations. In addition, we present a Feature Pyramid module for Transformer
(FPT) to boost informative cross-scale integration as well as a
consistency-complementarity module to disentangle the multi-modal integration
path and improve the fusion adaptivity. Comprehensive experiments on a large
variety of public datasets verify the efficacy of our designs and the
consistent improvement over state-of-the-art models.
Comment: 10 pages, 10 figures
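The two attention schemes the abstract describes can be illustrated with a minimal numpy sketch: global self-attention within each modality, followed by inter-modal attention measured only at spatially aligned positions. This is not the authors' HCT implementation; the gated residual fusion and all names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (N, d) patch features of one modality.
    # Intra-modal self-attention: every patch attends to all patches,
    # capturing the global context the abstract refers to.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def aligned_cross_attention(x_rgb, x_depth):
    # Inter-modal attention measured locally: each RGB patch interacts only
    # with the depth patch at the same spatial position, respecting the
    # modality gap in unaligned regions. The sigmoid gate is an assumption.
    d = x_rgb.shape[-1]
    affinity = (x_rgb * x_depth).sum(-1, keepdims=True) / np.sqrt(d)
    gate = 1.0 / (1.0 + np.exp(-affinity))
    return x_rgb + gate * x_depth  # gated residual fusion (illustrative)

rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 32))    # 16 patches, 32-dim features
depth = rng.standard_normal((16, 32))

fused = aligned_cross_attention(self_attention(rgb), self_attention(depth))
print(fused.shape)
```

The key design point mirrored here is that attention is global within a modality but strictly position-wise across modalities, so misaligned regions cannot pollute each other through dense cross-modal connections.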
Interpretation on Multi-modal Visual Fusion
In this paper, we present an analytical framework and a novel metric to shed
light on the interpretation of multi-modal vision models. Our approach
involves measuring the proposed semantic variance and feature similarity across
modalities and levels, and conducting semantic and quantitative analyses
through comprehensive experiments. Specifically, we investigate the consistency
and speciality of representations across modalities, evolution rules within
each modality, and the collaboration logic used when optimizing a
multi-modality model. Our studies reveal several important findings, such as
the discrepancy in cross-modal features and the hybrid multi-modal cooperation
rule, which highlights consistency and speciality simultaneously for
complementary inference. Through our dissection and findings on multi-modal
fusion, we facilitate a rethinking of the rationale and necessity of
popular multi-modal vision fusion strategies. Furthermore, our work lays the
foundation for designing a trustworthy and universal multi-modal fusion model
for a variety of tasks in the future.
Comment: This version has been under review since 2023/3/
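The abstract's measurements of feature similarity across modalities and semantic variance can be sketched in a few lines; the exact definitions are not given in the abstract, so the formulas below (cosine similarity of flattened features, variance of class-mean features) are one plausible reading, and every name here is hypothetical.

```python
import numpy as np

def feature_similarity(f_a, f_b):
    # Cosine similarity between flattened per-layer features of two
    # modalities; near 1.0 indicates highly consistent representations.
    a, b = f_a.ravel(), f_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def semantic_variance(features, labels):
    # Variance across class-mean features: larger values suggest the layer
    # separates semantic categories more strongly (assumed definition).
    means = np.stack([features[labels == c].mean(0)
                      for c in np.unique(labels)])
    return float(means.var(axis=0).mean())

rng = np.random.default_rng(1)
feats_rgb = rng.standard_normal((100, 64))          # 100 samples, 64-dim
feats_depth = feats_rgb + 0.1 * rng.standard_normal((100, 64))
labels = rng.integers(0, 5, size=100)               # 5 semantic classes

sim = feature_similarity(feats_rgb, feats_depth)    # high: modalities agree
var = semantic_variance(feats_rgb, labels)
print(sim, var)
```

Tracking these two quantities layer by layer is what would let one observe the consistency/speciality trade-off the abstract describes: high similarity signals redundant (consistent) features, while divergent similarity with high semantic variance signals complementary (special) ones.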