Key-Sparse Transformer with Cascaded Cross-Attention Block for Multimodal Speech Emotion Recognition
Speech emotion recognition is a challenging and important research topic that
plays a critical role in human-computer interaction. Multimodal inputs can
improve the performance as more emotional information is used for recognition.
However, existing studies learn from all the information in a sample, although
only a small portion of it relates to emotion. Moreover, under the multimodal
framework, the interaction between different modalities is shallow and
insufficient. In this paper, a key-sparse Transformer is proposed for efficient
speech emotion recognition (SER) that focuses only on emotion-related
information. Furthermore, a cascaded cross-attention block, specially designed
for the multimodal framework, is introduced to achieve deep interaction between
different modalities. The proposed method is evaluated on the IEMOCAP corpus,
and the experimental results show that it outperforms state-of-the-art
approaches.
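To make the cascaded cross-attention idea concrete, here is a minimal PyTorch sketch (illustrative only, not the authors' code): one attention stage lets the audio stream query the text stream, and a second, cascaded stage lets text query the already-updated audio, so the two modalities interact more deeply than with a single fusion step. The dimensions, head counts, and residual/LayerNorm layout are assumptions.

```python
import torch
import torch.nn as nn

class CascadedCrossAttentionBlock(nn.Module):
    """Toy cascaded cross-attention: audio attends to text, then text attends
    to the updated audio, so the second stage sees already-fused features."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.audio_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_audio = nn.LayerNorm(dim)
        self.norm_text = nn.LayerNorm(dim)

    def forward(self, audio, text):
        # Stage 1: audio frames query the text tokens.
        fused_audio, _ = self.audio_to_text(audio, text, text)
        audio = self.norm_audio(audio + fused_audio)
        # Stage 2 (cascaded): text tokens query the already-updated audio.
        fused_text, _ = self.text_to_audio(text, audio, audio)
        text = self.norm_text(text + fused_text)
        return audio, text

audio = torch.randn(2, 100, 256)  # (batch, audio frames, dim)
text = torch.randn(2, 20, 256)    # (batch, text tokens, dim)
audio_out, text_out = CascadedCrossAttentionBlock()(audio, text)
```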
Weakly-Supervised Speech Pre-training: A Case Study on Target Speech Recognition
Self-supervised learning (SSL) based speech pre-training has attracted much
attention for its capability of extracting rich representations learned from
massive unlabeled data. On the other hand, the use of weakly-supervised data is
less explored for speech pre-training. To fill this gap, we propose a
weakly-supervised speech pre-training method based on speaker-aware speech
data. It adopts a similar training procedure to the widely-used masked speech
prediction based SSL framework, while incorporating additional target-speaker
enrollment information as an auxiliary input. In this way, the learned
representation is steered towards the target speaker even in the presence of
highly overlapping interference, allowing potential applications to tasks such
as target speech recognition. Our experiments on Libri2Mix and WSJ0-2mix
datasets show that the proposed model achieves significantly better ASR
performance compared to WavLM, the state-of-the-art SSL model with denoising
capability.
Comment: Accepted by Interspeech; 5 pages, 1 figure, 3 tables
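The speaker-aware pre-training recipe can be pictured with a small sketch along the lines below, assuming a masked-prediction setup in PyTorch; how the enrollment embedding is injected (here, simply added to every frame) and all sizes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpeakerAwareMaskedPredictor(nn.Module):
    """Toy masked-speech-prediction model conditioned on a target-speaker
    enrollment embedding, so masked frames are predicted for that speaker."""

    def __init__(self, feat_dim=80, model_dim=256, spk_dim=128, num_targets=100):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, model_dim)
        self.spk_proj = nn.Linear(spk_dim, model_dim)
        self.mask_emb = nn.Parameter(torch.zeros(model_dim))  # learned [MASK] vector
        layer = nn.TransformerEncoderLayer(model_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(model_dim, num_targets)  # predicts discrete pseudo-labels

    def forward(self, mixture_feats, spk_embedding, mask):
        # mixture_feats: (B, T, feat_dim), spk_embedding: (B, spk_dim), mask: (B, T) bool
        x = self.feat_proj(mixture_feats)
        x = torch.where(mask.unsqueeze(-1), self.mask_emb.expand_as(x), x)
        # Auxiliary input: add the enrollment embedding to every frame.
        x = x + self.spk_proj(spk_embedding).unsqueeze(1)
        return self.head(self.encoder(x))

model = SpeakerAwareMaskedPredictor()
feats = torch.randn(2, 50, 80)    # overlapped-speech features
spk = torch.randn(2, 128)         # enrollment embedding of the target speaker
mask = torch.rand(2, 50) < 0.3    # frames to mask and predict
logits = model(feats, spk, mask)  # (2, 50, 100)
```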
LiRA: Learning Visual Speech Representations from Audio through Self-supervision
The large amount of audiovisual content being shared online today has drawn
substantial attention to the prospect of audiovisual self-supervised learning.
Recent works have focused on each of these modalities separately, while others
have attempted to model both simultaneously in a cross-modal fashion. However,
comparatively little attention has been given to leveraging one modality as a
training objective to learn from the other. In this work, we propose Learning
visual speech Representations from Audio via self-supervision (LiRA).
Specifically, we train a ResNet+Conformer model to predict acoustic features
from unlabelled visual speech. We find that this pre-trained model can be
leveraged towards word-level and sentence-level lip-reading through feature
extraction and fine-tuning experiments. We show that our approach significantly
outperforms other self-supervised methods on the Lip Reading in the Wild (LRW)
dataset and achieves state-of-the-art performance on Lip Reading Sentences 2
(LRS2) using only a fraction of the total labelled data.
Comment: Accepted for publication at Interspeech 202
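A rough sketch of the LiRA-style objective is given below, under the assumption that a video encoder regresses per-frame acoustic features from silent lip video with an L1 loss; the tiny 3D-conv frontend and plain Transformer encoder are stand-ins for the paper's ResNet and Conformer, and all shapes are illustrative.

```python
import torch
import torch.nn as nn

class VisualToAcousticPredictor(nn.Module):
    """Toy model that predicts per-frame acoustic features from silent lip
    video, as a stand-in for the ResNet+Conformer used in the paper."""

    def __init__(self, model_dim=256, acoustic_dim=80):
        super().__init__()
        # Tiny 3D-conv frontend in place of the ResNet visual encoder.
        self.frontend = nn.Conv3d(1, model_dim, kernel_size=(5, 7, 7),
                                  stride=(1, 2, 2), padding=(2, 3, 3))
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))
        layer = nn.TransformerEncoderLayer(model_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # Conformer stand-in
        self.head = nn.Linear(model_dim, acoustic_dim)

    def forward(self, video):
        # video: (B, 1, T, H, W) grayscale mouth crops
        x = self.pool(self.frontend(video))  # (B, D, T, 1, 1)
        x = x.flatten(2).transpose(1, 2)     # (B, T, D)
        return self.head(self.encoder(x))    # (B, T, acoustic_dim)

video = torch.randn(2, 1, 25, 88, 88)        # 1 s of 25 fps mouth crops (toy sizes)
target = torch.randn(2, 25, 80)              # acoustic features extracted from the audio
pred = VisualToAcousticPredictor()(video)
loss = nn.functional.l1_loss(pred, target)   # regression objective on unlabelled pairs
```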
UFO2: A unified pre-training framework for online and offline speech recognition
In this paper, we propose a Unified pre-training Framework for Online and
Offline (UFO2) Automatic Speech Recognition (ASR), which 1) simplifies the two
separate training workflows for online and offline modes into one process, and
2) improves the Word Error Rate (WER) performance with limited utterance
annotation. Specifically, we extend the conventional offline-mode
Self-Supervised Learning (SSL)-based ASR approach to a unified one, where
the model training is conditioned on both the full-context and dynamic-chunked
inputs. To enhance the pre-trained representation model, a stop-gradient
operation is applied to decouple the online-mode objective from the quantizer.
Moreover, in both the pre-training and the downstream fine-tuning stages, joint
losses are proposed to train the unified model with full-weight sharing for the
two modes. Experimental results on the LibriSpeech dataset show that UFO2
outperforms the SSL-based baseline method by 29.7% and 18.2% relative WER
reduction in offline and online modes, respectively.
Comment: Accepted by ICASSP 202
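The unified training scheme can be sketched roughly as follows: one shared encoder is run on full-context and on chunk-masked inputs, a (toy) quantizer applied to the offline pass provides targets for a joint loss, and a stop-gradient keeps the online-mode objective from updating the quantizer. The chunk size, loss form, and linear "quantizer" are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

def chunk_mask(T, chunk=8):
    """Attention mask for the online mode: each frame may only attend within
    its own chunk (True = blocked, as expected by nn.TransformerEncoder)."""
    idx = torch.arange(T)
    same_chunk = (idx.unsqueeze(0) // chunk) == (idx.unsqueeze(1) // chunk)
    return ~same_chunk

class UnifiedEncoder(nn.Module):
    """Toy unified model: one shared encoder, a full-context (offline) pass and
    a chunked (online) pass, trained with a joint loss; a stop-gradient keeps
    the online objective from updating the quantizer."""

    def __init__(self, feat_dim=80, model_dim=256, codebook=320):
        super().__init__()
        self.proj = nn.Linear(feat_dim, model_dim)
        layer = nn.TransformerEncoderLayer(model_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.quantizer = nn.Linear(model_dim, codebook)  # stand-in for a vector quantizer
        self.head = nn.Linear(model_dim, codebook)

    def forward(self, feats):
        T = feats.shape[1]
        x = self.proj(feats)
        offline = self.encoder(x)                     # full-context inputs
        online = self.encoder(x, mask=chunk_mask(T))  # dynamic-chunked inputs
        targets = self.quantizer(offline).softmax(-1)  # soft codeword assignments
        loss_offline = -(targets * self.head(offline).log_softmax(-1)).sum(-1).mean()
        # Stop-gradient: the online-mode objective does not reach the quantizer.
        loss_online = -(targets.detach() * self.head(online).log_softmax(-1)).sum(-1).mean()
        return loss_offline + loss_online             # joint loss, full weight sharing

loss = UnifiedEncoder()(torch.randn(2, 32, 80))
loss.backward()
```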