Audio-Visual Person Verification based on Recursive Fusion of Joint Cross-Attention
Person or identity verification has recently been gaining attention through
audio-visual fusion, as faces and voices are closely associated with each
other. Conventional approaches to audio-visual fusion rely on
score-level or early feature-level fusion techniques. Though existing
approaches showed improvement over unimodal systems, the potential of
audio-visual fusion for person verification is not fully exploited. In this
paper, we have investigated the prospect of effectively capturing both the
intra- and inter-modal relationships across audio and visual modalities, which
can play a crucial role in significantly improving the fusion performance over
unimodal systems. In particular, we introduce a recursive fusion of a joint
cross-attentional model, where a joint audio-visual feature representation is
employed in the cross-attention framework in a recursive fashion to
progressively refine the feature representations that can efficiently capture
the intra- and inter-modal relationships. To further enhance the audio-visual
feature representations, we have also explored BLSTMs to improve the temporal
modeling of audio-visual feature representations. Extensive experiments are
conducted on the VoxCeleb1 dataset to evaluate the proposed model. Results
indicate that the proposed model shows promising improvement in fusion
performance by adeptly capturing the intra- and inter-modal relationships across
audio and visual modalities.
Comment: Accepted to FG202
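The recursive refinement described in this abstract can be pictured as each modality repeatedly attending to a joint audio-visual representation. Below is a minimal PyTorch sketch of that idea, assuming equal-dimensional per-frame audio and visual embeddings; the module names, iteration count, and residual layout are illustrative assumptions, and the paper's BLSTM stage is omitted.

```python
import torch
import torch.nn as nn


class JointCrossAttentionFusion(nn.Module):
    """Illustrative recursive joint cross-attention: each modality attends to
    a joint audio-visual representation, and the refined features are fed
    back as inputs to the next recursion step."""

    def __init__(self, dim: int, num_heads: int = 4, num_iters: int = 2):
        super().__init__()
        self.num_iters = num_iters
        self.attn_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)
        self.joint_proj = nn.Linear(2 * dim, dim)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # audio, visual: (batch, time, dim)
        a, v = audio, visual
        for _ in range(self.num_iters):
            # Joint audio-visual representation shared by both branches.
            joint = self.joint_proj(torch.cat([a, v], dim=-1))
            # Each modality queries the joint features (cross-attention).
            a_ref, _ = self.attn_a(a, joint, joint)
            v_ref, _ = self.attn_v(v, joint, joint)
            # Residual refinement feeds the next recursion step.
            a = self.norm_a(a + a_ref)
            v = self.norm_v(v + v_ref)
        return a, v


if __name__ == "__main__":
    fusion = JointCrossAttentionFusion(dim=256)
    a_out, v_out = fusion(torch.randn(2, 50, 256), torch.randn(2, 50, 256))
    print(a_out.shape, v_out.shape)  # torch.Size([2, 50, 256]) twice
```

In the paper, the refined audio-visual representations would additionally pass through BLSTMs for temporal modeling before scoring; that stage is left out here for brevity.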
MIS-AVoiDD: Modality Invariant and Specific Representation for Audio-Visual Deepfake Detection
Deepfakes are synthetic media generated using deep generative algorithms and
have posed a severe societal and political threat. Apart from facial
manipulation and synthetic voice, a novel kind of deepfake has recently
emerged in which either the audio or the visual modality is manipulated. In this regard, a
new generation of multimodal audio-visual deepfake detectors is being
investigated to collectively focus on audio and visual data for multimodal
manipulation detection. Existing multimodal (audio-visual) deepfake detectors
are often based on the fusion of the audio and visual streams from the video.
Existing studies suggest that these multimodal detectors often perform on par
with unimodal audio and visual deepfake detectors. We
conjecture that the heterogeneous nature of the audio and visual signals
creates distributional modality gaps and poses a significant challenge to
effective fusion and efficient performance. In this paper, we tackle the
problem at the representation level to aid the fusion of audio and visual
streams for multimodal deepfake detection. Specifically, we propose the joint
use of modality (audio and visual) invariant and specific representations. This
ensures that the common patterns and patterns specific to each modality
representing pristine or fake content are preserved and fused for multimodal
deepfake manipulation detection. Our experimental results on FakeAVCeleb and
KoDF audio-visual deepfake datasets suggest the enhanced accuracy of our
proposed method over SOTA unimodal and multimodal audio-visual deepfake
detectors by % and %, respectively, thus obtaining
state-of-the-art performance.
Comment: 8 pages, 3 figures
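The representation-level idea above, pairing modality-invariant with modality-specific features, can be sketched with shared and private projections plus simple auxiliary losses. This is a hedged illustration: the encoder sizes, the MSE and separation terms, and all names below are assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InvariantSpecificFusion(nn.Module):
    """Illustrative sketch: project audio and visual features into a shared
    (modality-invariant) space and two private (modality-specific) spaces,
    then fuse all four representations for pristine/fake classification."""

    def __init__(self, dim_a: int, dim_v: int, dim_z: int = 128):
        super().__init__()
        self.shared_a = nn.Linear(dim_a, dim_z)    # invariant, audio branch
        self.shared_v = nn.Linear(dim_v, dim_z)    # invariant, visual branch
        self.private_a = nn.Linear(dim_a, dim_z)   # audio-specific
        self.private_v = nn.Linear(dim_v, dim_z)   # visual-specific
        self.classifier = nn.Linear(4 * dim_z, 2)  # pristine vs. fake

    def forward(self, feat_a: torch.Tensor, feat_v: torch.Tensor):
        inv_a, inv_v = self.shared_a(feat_a), self.shared_v(feat_v)
        spec_a, spec_v = self.private_a(feat_a), self.private_v(feat_v)

        # Invariance term: pull the shared projections of both modalities together.
        loss_inv = F.mse_loss(inv_a, inv_v)
        # Separation term (crude proxy): discourage overlap between the
        # invariant and the modality-specific representations.
        loss_sep = (inv_a * spec_a).mean().abs() + (inv_v * spec_v).mean().abs()

        fused = torch.cat([inv_a, inv_v, spec_a, spec_v], dim=-1)
        logits = self.classifier(fused)
        return logits, loss_inv + loss_sep
```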
Anti-social behavior detection in audio-visual surveillance systems
In this paper we propose a general-purpose framework for the
detection of unusual events. The proposed system is based on the unsupervised method for unusual scene detection in webcam images that was introduced in [1]. We extend their algorithm to accommodate data from different modalities and introduce the concept of time-space blocks. In addition, we evaluate early and late fusion techniques for our audio-visual data features. The experimental results on 192 hours of data show that fusing the audio and video data outperforms using a single modality.
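The early versus late fusion comparison mentioned above can be illustrated with a small sketch. The features, labels, and logistic-regression classifiers below are placeholder assumptions, not the system evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
audio = rng.normal(size=(200, 16))      # per-block audio features (placeholder)
video = rng.normal(size=(200, 32))      # per-block video features (placeholder)
labels = rng.integers(0, 2, size=200)   # 1 = unusual event, 0 = normal

# Early fusion: concatenate the feature vectors and train one classifier.
early_clf = LogisticRegression(max_iter=1000).fit(np.hstack([audio, video]), labels)

# Late fusion: train one classifier per modality and combine their scores.
audio_clf = LogisticRegression(max_iter=1000).fit(audio, labels)
video_clf = LogisticRegression(max_iter=1000).fit(video, labels)
late_scores = 0.5 * (audio_clf.predict_proba(audio)[:, 1]
                     + video_clf.predict_proba(video)[:, 1])
# In practice both schemes would be compared on held-out data rather than
# on the training blocks used here.
```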
Deep Multimodal Learning for Audio-Visual Speech Recognition
In this paper, we present methods in deep multimodal learning for fusing
speech and visual modalities for Audio-Visual Automatic Speech Recognition
(AV-ASR). First, we study an approach where uni-modal deep networks are trained
separately and their final hidden layers fused to obtain a joint feature space
in which another deep network is built. While the audio network alone achieves
a phone error rate (PER) of under clean conditions on the IBM
large-vocabulary audio-visual studio dataset, this fusion model achieves a PER
of , demonstrating the tremendous value of the visual channel in phone
classification even in audio with a high signal-to-noise ratio. Second, we
present a new deep network architecture that uses a bilinear softmax layer to
account for class specific correlations between modalities. We show that
combining the posteriors from the bilinear networks with those from the fused
model mentioned above results in a further significant phone error rate
reduction, yielding a final PER of .
Comment: ICASSP 201
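The bilinear softmax layer mentioned above can be read as a per-class bilinear form over the two modality embeddings. A minimal sketch follows, with dimensions and names chosen for illustration rather than taken from the IBM system.

```python
import torch
import torch.nn as nn


class BilinearSoftmax(nn.Module):
    """Illustrative sketch: score each phone class with a bilinear form over
    the audio and visual hidden representations, capturing class-specific
    cross-modal correlations, then normalize with a softmax."""

    def __init__(self, dim_a: int, dim_v: int, num_classes: int):
        super().__init__()
        # nn.Bilinear computes x_a^T W_c x_v + b_c for every class c.
        self.bilinear = nn.Bilinear(dim_a, dim_v, num_classes)

    def forward(self, hid_a: torch.Tensor, hid_v: torch.Tensor):
        logits = self.bilinear(hid_a, hid_v)       # (batch, num_classes)
        return torch.log_softmax(logits, dim=-1)   # log-posteriors over phones


if __name__ == "__main__":
    layer = BilinearSoftmax(dim_a=512, dim_v=256, num_classes=42)
    log_post = layer(torch.randn(8, 512), torch.randn(8, 256))
    print(log_post.shape)  # torch.Size([8, 42])
```

The posteriors from such a layer could then be combined with those of a feature-fusion network, which is the kind of system combination the abstract reports.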
SCANet: A Self- and Cross-Attention Network for Audio-Visual Speech Separation
The integration of different modalities, such as audio and visual
information, plays a crucial role in human perception of the surrounding
environment. Recent research has made significant progress in designing fusion
modules for audio-visual speech separation. However, they predominantly focus
on multi-modal fusion architectures situated at either the top or the bottom,
rather than comprehensively considering multi-modal fusion at
various hierarchical positions within the network. In this paper, we propose a
novel model called self- and cross-attention network (SCANet), which leverages
the attention mechanism for efficient audio-visual feature fusion. SCANet
consists of two types of attention blocks: self-attention (SA) and
cross-attention (CA) blocks, where the CA blocks are distributed at the top
(TCA), middle (MCA) and bottom (BCA) of SCANet. These blocks maintain the
ability to learn modality-specific features and enable the extraction of
different semantics from audio-visual features. Comprehensive experiments on
three standard audio-visual separation benchmarks (LRS2, LRS3, and VoxCeleb2)
demonstrate the effectiveness of SCANet, outperforming existing
state-of-the-art (SOTA) methods while maintaining comparable inference time.
Comment: 14 pages, 3 figures
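The self-attention (SA) and cross-attention (CA) blocks described above can be sketched as standard attention layers placed at different depths of the separation network. The block internals and the usage pattern below are simplified assumptions, not the published SCANet architecture.

```python
import torch
import torch.nn as nn


class SABlock(nn.Module):
    """Self-attention within one modality (keeps modality-specific features)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)


class CABlock(nn.Module):
    """Cross-attention where one modality queries the other; blocks like this
    could sit at the top (TCA), middle (MCA), and bottom (BCA) of the network."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_mod: torch.Tensor, context_mod: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + out)


if __name__ == "__main__":
    sa, ca = SABlock(128), CABlock(128)
    audio = torch.randn(2, 100, 128)    # mixture/audio features
    visual = torch.randn(2, 100, 128)   # lip/visual features
    audio = sa(audio)                   # refine audio on its own
    audio = ca(audio, visual)           # inject visual context into audio
    print(audio.shape)                  # torch.Size([2, 100, 128])
```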