Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion
Learning a joint embedding space for various modalities is of vital importance
for multimodal fusion. Mainstream modality fusion approaches fail to achieve
this goal, leaving a modality gap that heavily affects cross-modal fusion. In
this paper, we propose a novel adversarial encoder-decoder-classifier framework
to learn a modality-invariant embedding space. Since the distributions of the
various modalities differ in nature, we reduce the modality gap by translating the
distributions of the source modalities into that of the target modality via their
respective encoders, using adversarial training. Furthermore, we exert
additional constraints on the embedding space by introducing a reconstruction loss
and a classification loss. We then fuse the encoded representations using a
hierarchical graph neural network that explicitly explores unimodal, bimodal,
and trimodal interactions in multiple stages. Our method achieves state-of-the-art
performance on multiple datasets. Visualization of the learned embeddings
suggests that the joint embedding space learned by our method is
discriminative. Code is available at:
\url{https://github.com/TmacMai/ARGF_multimodal_fusion}

Comment: Accepted by AAAI-2020.
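To make the adversarial translation idea concrete, the following is a minimal PyTorch sketch of how the three loss terms described in the abstract (adversarial, reconstruction, classification) could be combined for one source modality. Module names, layer sizes, and equal loss weighting are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps one modality into the shared embedding space."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                 nn.Linear(emb_dim, emb_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs the original modality from its embedding (reconstruction loss)."""
    def __init__(self, emb_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                 nn.Linear(emb_dim, out_dim))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Tries to tell target-modality embeddings apart from translated source embeddings."""
    def __init__(self, emb_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                 nn.Linear(emb_dim, 1))
    def forward(self, z):
        return self.net(z)

def encoder_losses(enc_src, dec_src, disc, clf, x_src, labels):
    """Adversarial + reconstruction + classification terms for one source modality.

    Loss weights are assumed equal here purely for illustration.
    """
    z_src = enc_src(x_src)
    # Adversarial term: the encoder tries to make source embeddings look like
    # target-modality embeddings to the discriminator.
    adv = F.binary_cross_entropy_with_logits(
        disc(z_src), torch.ones(z_src.size(0), 1))
    # Reconstruction term: keep enough information to rebuild the input.
    rec = F.mse_loss(dec_src(z_src), x_src)
    # Classification term: keep the embedding discriminative for the task.
    cls = F.cross_entropy(clf(z_src), labels)
    return adv + rec + cls
```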
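The hierarchical fusion stage can likewise be sketched in simplified form. The real ARGF model fuses with a graph neural network over unimodal, bimodal, and trimodal nodes; the stand-in below only illustrates the multi-stage structure, using plain linear fusers for each pair and for the final trimodal node.

```python
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    """Illustrative multi-stage fusion: unimodal -> bimodal -> trimodal nodes."""
    def __init__(self, emb_dim, num_classes):
        super().__init__()
        # Stage 2: one small fuser per modality pair (bimodal nodes).
        self.bimodal = nn.ModuleList(
            [nn.Linear(2 * emb_dim, emb_dim) for _ in range(3)])
        # Stage 3: fuse the three bimodal nodes into a trimodal node.
        self.trimodal = nn.Linear(3 * emb_dim, emb_dim)
        self.classifier = nn.Linear(emb_dim, num_classes)

    def forward(self, z_text, z_audio, z_vision):
        pairs = [(z_text, z_audio), (z_text, z_vision), (z_audio, z_vision)]
        # Bimodal interactions from the unimodal embeddings.
        bi = [torch.relu(f(torch.cat(p, dim=-1)))
              for f, p in zip(self.bimodal, pairs)]
        # Trimodal interaction from the bimodal nodes.
        tri = torch.relu(self.trimodal(torch.cat(bi, dim=-1)))
        return self.classifier(tri)
```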