Hierarchical Delta-Attention Method for Multimodal Fusion
In vision and linguistics, the main input modalities are facial expressions,
speech patterns, and the words uttered. The issue with analyzing any one mode
of expression (visual, verbal, or vocal) is that a lot of contextual
information can get lost. This prompts researchers to inspect multiple
modalities to gain a thorough understanding of the cross-modal dependencies and
temporal context of the situation when analyzing the expression. This work
attempts to preserve the long-range dependencies within and across different
modalities, which would be bottlenecked by the use of recurrent networks, and
adds the concept of delta-attention to focus on local differences per modality
and capture the idiosyncrasies of different people. We explore a
cross-attention fusion technique to obtain a global view of the emotion
expressed through these delta-self-attended modalities, fusing the local
nuances and global context together. Attention is a recent addition to the
multimodal fusion field, and the stage at which the attention mechanism should
be applied is still under scrutiny; this work achieves competitive overall and
per-class classification accuracy, close to the current state of the art, with
almost half the number of parameters.
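To make the mechanism concrete, here is a minimal PyTorch sketch (illustrative only, not the authors' code; module names such as DeltaSelfAttention and CrossAttentionFusion are hypothetical): each modality first attends over its frame-to-frame differences, and cross-attention then lets one delta-self-attended modality query another to form the fused view.

import torch
import torch.nn as nn

class DeltaSelfAttention(nn.Module):
    """Self-attention over frame-to-frame differences ("deltas") of one modality."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                 # x: (batch, time, dim)
        delta = x - torch.roll(x, shifts=1, dims=1)       # local differences per step
        delta[:, 0] = 0.0                                 # first step has no predecessor
        attended, _ = self.attn(delta, delta, delta)      # attend over the deltas
        return self.norm(x + attended)                    # residual connection

class CrossAttentionFusion(nn.Module):
    """One modality queries another to build a fused, global view."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query_mod, context_mod):
        fused, _ = self.attn(query_mod, context_mod, context_mod)
        return fused

# Toy usage with random visual and acoustic features.
visual = torch.randn(2, 50, 64)                           # (batch, time, dim)
audio = torch.randn(2, 50, 64)
delta_v, delta_a = DeltaSelfAttention(64), DeltaSelfAttention(64)
fusion = CrossAttentionFusion(64)
print(fusion(delta_v(visual), delta_a(audio)).shape)      # torch.Size([2, 50, 64])

The residual connection keeps the original features alongside the delta context, so the local differences sharpen rather than replace the global signal.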
Text-oriented Modality Reinforcement Network for Multimodal Sentiment Analysis from Unaligned Multimodal Sequences
Multimodal Sentiment Analysis (MSA) aims to mine sentiment information from
text, visual, and acoustic modalities. Previous works have focused on
representation learning and feature fusion strategies. However, most of these
efforts ignored the disparity in the semantic richness of different modalities
and treated each modality in the same manner. That may lead to strong
modalities being neglected and weak modalities being overvalued. Motivated by
these observations, we propose a Text-oriented Modality Reinforcement Network
(TMRN), which focuses on the dominance of the text modality in MSA. More
specifically, we design a Text-Centered Cross-modal Attention (TCCA) module to
enable full interaction between the text/acoustic and text/visual pairs, and a Text-Gated
Self-Attention (TGSA) module to guide the self-reinforcement of the other two
modalities. Furthermore, we present an adaptive fusion mechanism to decide the
proportion of different modalities involved in the fusion process. Finally, we
combine the feature matrices into vectors to get the final representation for
the downstream tasks. Experimental results show that our TMRN outperforms the
state-of-the-art methods on two MSA benchmarks.
Comment: Accepted by CICAI 2023 (Finalist of Best Student Paper Award).
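As a rough illustration of how a text-centered cross-modal attention layer can be realized (assumed structure, not the official TMRN code; the class name TextCenteredCrossAttention is hypothetical), the sketch below lets the text sequence act as the query against the acoustic and visual sequences, so the output stays text-length and unaligned modality lengths need no explicit alignment.

import torch
import torch.nn as nn

class TextCenteredCrossAttention(nn.Module):
    """Text queries the acoustic and visual sequences; output stays text-length."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.text_to_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, audio, visual):               # each: (batch, len_*, dim)
        a, _ = self.text_to_audio(text, audio, audio)     # text attends to audio
        v, _ = self.text_to_visual(text, visual, visual)  # text attends to visual
        return self.norm(text + a + v)                    # text-anchored fusion

# Unaligned sequence lengths are fine: the output length follows the text query.
text = torch.randn(2, 30, 128)
audio = torch.randn(2, 45, 128)
visual = torch.randn(2, 60, 128)
print(TextCenteredCrossAttention(128)(text, audio, visual).shape)  # torch.Size([2, 30, 128])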
Unimodal Training-Multimodal Prediction: Cross-modal Federated Learning with Hierarchical Aggregation
Multimodal learning has seen great success in mining features from multiple
modalities, yielding remarkable improvements in model performance. Meanwhile,
federated learning (FL) addresses the data-sharing problem, enabling
privacy-preserving collaborative training on sufficient valuable data. Great
potential therefore arises from their confluence, known as multimodal federated
learning. However, the predominant approaches are limited in that they often
assume each local dataset records samples from all modalities. In this paper,
we aim to bridge this gap by proposing a Unimodal Training - Multimodal
Prediction (UTMP) framework in the context of multimodal federated learning.
We design HA-Fedformer, a novel transformer-based model that enables unimodal
training with only a unimodal dataset at each client and multimodal testing by
aggregating multiple clients' knowledge for better accuracy. The key advantages
are twofold. First, to alleviate the impact of non-IID data, we develop an
uncertainty-aware aggregation method for the local encoders with layer-wise
Markov Chain Monte Carlo sampling. Second, to overcome the challenge of
unaligned language sequences, we implement cross-modal decoder aggregation to
capture the hidden signal correlation between decoders trained by data from
different modalities. Our experiments on popular sentiment analysis benchmarks,
CMU-MOSI and CMU-MOSEI, demonstrate that HA-Fedformer significantly outperforms
state-of-the-art multimodal models under the UTMP federated learning
framework, with 15%-20% improvement on most attributes.
Comment: 10 pages, 5 figures.
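The abstract does not spell out the aggregation formula, so the following is only a plausible sketch of uncertainty-aware, layer-wise aggregation (the function name and the inverse-variance weighting are assumptions): each client contributes several sampled weight sets per layer, and clients whose samples vary less for that layer receive a larger share of the federated average.

import torch

def aggregate_layer(client_samples):
    """client_samples: list over clients; each entry has shape
    (num_mcmc_samples, *param_shape) holding sampled weights for one layer."""
    means = torch.stack([s.mean(dim=0) for s in client_samples])            # (C, *shape)
    variances = torch.stack([s.var(dim=0).mean() for s in client_samples])  # (C,)
    weights = 1.0 / (variances + 1e-8)              # lower uncertainty -> larger weight
    weights = weights / weights.sum()
    # Weighted average of the client means, broadcast over the parameter dims.
    return (weights.view(-1, *([1] * (means.dim() - 1))) * means).sum(dim=0)

# Toy usage: three clients, five sampled weight sets each, for one 4x4 weight matrix.
clients = [torch.randn(5, 4, 4) * scale for scale in (0.1, 0.5, 1.0)]
print(aggregate_layer(clients).shape)               # torch.Size([4, 4])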
Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities
One of the main challenges of multimodal learning is the need to combine
heterogeneous modalities (e.g., video, audio, text). For example, video and
audio are obtained at much higher rates than text and are roughly aligned in
time. They are often not synchronized with text, which comes as a global
context, e.g., a title or a description. Furthermore, video and audio inputs
are of much larger volumes, and grow as the video length increases, which
naturally requires more compute dedicated to these modalities and makes
modeling of long-range dependencies harder.
Here we decouple the multimodal modeling, dividing it into separate, focused
autoregressive models, processing the inputs according to the characteristics
of the modalities. We propose a multimodal model, called Mirasol3B, consisting
of an autoregressive component for the time-synchronized modalities (audio and
video), and an autoregressive component for the context modalities which are
not necessarily aligned in time but are still sequential. To address the
long sequences of the video-audio inputs, we propose to further partition the
video and audio sequences into consecutive snippets and autoregressively process
their representations. To that end, we propose a Combiner mechanism, which
models the audio-video information jointly within a timeframe. The Combiner
learns to extract audio and video features from raw spatio-temporal signals,
and then learns to fuse these features producing compact but expressive
representations per snippet.
Our approach achieves state-of-the-art results on well-established multimodal
benchmarks, outperforming much larger models. It effectively addresses the high
computational demand of media inputs by learning compact representations,
controlling the sequence length of the audio-video feature representations, and
modeling their dependencies in time.
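For intuition on the snippet-level processing, here is a minimal sketch (assumed, not the Mirasol3B implementation; SnippetCombiner and its parameters are hypothetical) of a combiner that partitions a time-aligned audio-video feature sequence into consecutive snippets and summarizes each snippet into a small number of learned query tokens via attention.

import torch
import torch.nn as nn

class SnippetCombiner(nn.Module):
    """Summarize each fixed-length snippet of audio-video features into a few tokens."""
    def __init__(self, dim, snippet_len, out_tokens=1, heads=4):
        super().__init__()
        self.snippet_len = snippet_len
        self.queries = nn.Parameter(torch.randn(out_tokens, dim))   # learned summary tokens
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, av_features):             # (batch, time, dim); time % snippet_len == 0
        b, t, d = av_features.shape
        snippets = av_features.view(b * (t // self.snippet_len), self.snippet_len, d)
        q = self.queries.unsqueeze(0).expand(snippets.size(0), -1, -1)
        summary, _ = self.attn(q, snippets, snippets)   # summary tokens attend within a snippet
        return summary.view(b, -1, d)            # (batch, num_snippets * out_tokens, dim)

# Toy usage: 128 fused audio-video frames reduced to 8 snippet representations.
video_audio = torch.randn(2, 128, 256)
combiner = SnippetCombiner(dim=256, snippet_len=16)
print(combiner(video_audio).shape)               # torch.Size([2, 8, 256])

Keeping only a few summary tokens per snippet is what bounds the sequence length handed to the autoregressive component, in line with the compact-representation goal described above.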
- …