Multimodal fusion integrates the complementary information present in
multiple modalities and has recently attracted much attention. Most existing
fusion approaches either adopt a fixed fusion strategy during training and
inference, or can only fuse information to a limited extent. Such solutions
may fail to fully capture the dynamics of cross-modal interactions, especially
when complex intra- and inter-modality correlations must be modeled for
informative multimodal fusion. In this paper,
we propose a novel deep equilibrium (DEQ) method for multimodal fusion that
seeks a fixed point of the dynamic multimodal fusion process and models the
feature correlations in an adaptive, recursive manner. This formulation
thoroughly encodes the rich information within and across modalities, from low
level to high level, for effective downstream multimodal learning, and is
readily pluggable into various multimodal frameworks. Extensive experiments on BRCA,
MM-IMDB, CMU-MOSI, SUN RGB-D, and VQA-v2 demonstrate the superiority of our DEQ
fusion. More remarkably, DEQ fusion consistently achieves state-of-the-art
performance on multiple multimodal benchmarks. The code will be released.
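
As an illustration of the fixed-point formulation referred to above, the following is a minimal sketch of a generic DEQ-style fusion objective and its gradient obtained via implicit differentiation; the notation (fusion function $f_{\theta}$, modality features $x_{1},\dots,x_{M}$, equilibrium state $z^{*}$, loss $\mathcal{L}$) is introduced here for illustration and is not taken from the abstract:

% Illustrative only: a generic deep-equilibrium fusion layer, not the paper's exact formulation.
\begin{align}
  z^{*} &= f_{\theta}\!\left(z^{*};\, x_{1}, \dots, x_{M}\right)
  && \text{(fused representation as an equilibrium point)} \\
  \frac{\partial \mathcal{L}}{\partial \theta}
  &= \frac{\partial \mathcal{L}}{\partial z^{*}}
     \left(I - \left.\frac{\partial f_{\theta}}{\partial z}\right|_{z^{*}}\right)^{-1}
     \frac{\partial f_{\theta}\!\left(z^{*};\, x_{1}, \dots, x_{M}\right)}{\partial \theta}
  && \text{(gradient via implicit differentiation)}
\end{align}

In this reading, the forward pass solves a root-finding problem for $z^{*}$ rather than unrolling a fixed number of fusion layers, and the backward pass differentiates through the equilibrium condition directly, which is what allows the fusion depth to adapt to the input.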