AKVSR: Audio Knowledge Empowered Visual Speech Recognition by Compressing Audio Knowledge of a Pretrained Model
Visual Speech Recognition (VSR) is the task of predicting spoken words from
silent lip movements. VSR is regarded as a challenging task because lip movements
alone carry insufficient information about the spoken content. In this paper, we
propose an Audio Knowledge empowered Visual Speech Recognition framework (AKVSR)
that complements the insufficient speech information of the visual modality by
using the audio modality. Different from previous methods, the proposed AKVSR
1) utilizes rich audio knowledge encoded by a large-scale pretrained audio model,
2) stores the linguistic information of this audio knowledge in a compact audio
memory, discarding the non-linguistic information from the audio through
quantization, and 3) includes an Audio Bridging Module that retrieves the
best-matched audio features from the compact audio memory, making training
possible without audio inputs once the memory has been built. We validate the
effectiveness of the proposed method through extensive experiments and achieve
new state-of-the-art performance on the widely used LRS2 and LRS3 datasets.
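Below is a minimal, illustrative sketch of the two ideas the abstract describes: a quantized "compact audio memory" and a cross-attention "bridging" step that lets visual features query that memory without raw audio at training time. All module and variable names (CompactAudioMemory, AudioBridgingModule, the feature dimensions) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a codebook standing in for the compact audio memory,
# and cross-attention from visual features to retrieve best-matched entries.
import torch
import torch.nn as nn

class CompactAudioMemory(nn.Module):
    def __init__(self, num_entries=256, dim=512):
        super().__init__()
        # Codebook entries stand in for quantized (linguistic-only) audio knowledge.
        self.codebook = nn.Parameter(torch.randn(num_entries, dim))

    def quantize(self, audio_feats):
        # Map each pretrained-audio-model feature to its nearest codebook entry.
        dists = torch.cdist(audio_feats, self.codebook)   # (T, num_entries)
        return self.codebook[dists.argmin(dim=-1)]        # (T, dim)

class AudioBridgingModule(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, visual_feats, memory):
        # Visual features query the frozen audio memory; no raw audio is needed here.
        mem = memory.codebook.unsqueeze(0).expand(visual_feats.size(0), -1, -1)
        bridged, _ = self.attn(visual_feats, mem, mem)
        return visual_feats + bridged                      # audio-enriched visual features

# Usage sketch: the memory is composed once from a pretrained audio encoder,
# after which the VSR model trains on silent video alone via the bridge.
memory = CompactAudioMemory()
bridge = AudioBridgingModule()
visual_feats = torch.randn(2, 75, 512)                     # (batch, frames, dim)
enriched = bridge(visual_feats, memory)
```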
Hybrid Fusion Based Interpretable Multimodal Emotion Recognition with Limited Labelled Data
This paper proposes a multimodal emotion recognition system, VIsual Spoken
Textual Additive Net (VISTA Net), to classify emotions reflected by multimodal
input containing image, speech, and text into discrete classes. A new
interpretability technique, K-Average Additive exPlanation (KAAP), has also
been developed that identifies important visual, spoken, and textual features
leading to predicting a particular emotion class. The VISTA Net fuses
information from image, speech, and text modalities using a hybrid of early and
late fusion. It automatically adjusts the weights of their intermediate outputs
while computing the weighted average. The KAAP technique computes the
contribution of each modality and corresponding features toward predicting a
particular emotion class. To mitigate the insufficiency of multimodal emotion
datasets labeled with discrete emotion classes, we have constructed a
large-scale IIT-R MMEmoRec dataset consisting of images, corresponding speech
and text, and emotion labels ('angry,' 'happy,' 'hate,' and 'sad'). The VISTA
Net achieves 95.99\% emotion recognition accuracy on the IIT-R MMEmoRec dataset
when using the visual, audio, and textual modalities together, outperforming
configurations that use only one or two modalities.
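As a rough sketch of the late-fusion half described above, the snippet below combines per-modality class scores with learnable weights normalized by a softmax, so each modality's contribution is adjusted automatically during training. The class names, modality count, and tensor shapes are illustrative assumptions, not the paper's code, and the early-fusion and KAAP components are omitted.

```python
# Sketch of a learnable weighted average over per-modality class scores.
import torch
import torch.nn as nn

class WeightedLateFusion(nn.Module):
    def __init__(self, num_modalities=3):
        super().__init__()
        # One scalar weight per modality (image, speech, text), learned jointly.
        self.weights = nn.Parameter(torch.zeros(num_modalities))

    def forward(self, modality_logits):
        # modality_logits: list of (batch, num_classes) tensors, one per modality.
        w = torch.softmax(self.weights, dim=0)          # weights sum to 1
        stacked = torch.stack(modality_logits, dim=0)   # (M, batch, classes)
        return (w.view(-1, 1, 1) * stacked).sum(dim=0)  # weighted average of scores

fusion = WeightedLateFusion()
image_logits, speech_logits, text_logits = (torch.randn(8, 4) for _ in range(3))
fused = fusion([image_logits, speech_logits, text_logits])   # (8, 4) class scores
```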
Visual Speech Recognition for Languages with Limited Labeled Data using Automatic Labels from Whisper
This paper proposes a powerful Visual Speech Recognition (VSR) method for
multiple languages, especially for low-resource languages that have only a limited
amount of labeled data. Different from previous methods that tried to improve
the VSR performance for the target language by using knowledge learned from
other languages, we explore whether we can increase the amount of training data
itself for the different languages without human intervention. To this end, we
employ a Whisper model which can conduct both language identification and
audio-based speech recognition. It serves to filter data of the desired
languages and transcribe labels from the unannotated, multilingual audio-visual
data pool. By comparing VSR models trained on automatic labels with those trained
on human-annotated labels, we show that similar VSR performance can be achieved
without any human annotation. Through the automated labeling process, we label large-scale
unlabeled multilingual databases, VoxCeleb2 and AVSpeech, producing 1,002 hours
of data for four low-resource VSR languages: French, Italian, Spanish, and
Portuguese. With the automatic labels, we achieve new state-of-the-art
performance on mTEDx in four languages, significantly surpassing the previous
methods. The automatic labels are available online:
https://github.com/JeongHun0716/Visual-Speech-Recognition-for-Low-Resource-Languages
Comment: Accepted at ICASSP 2024
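The automatic-labeling step described above can be sketched with the open-source openai-whisper package, which performs language identification and transcription in a single call. The file path, model size, and target-language set below are illustrative assumptions, not the authors' pipeline.

```python
# Rough sketch: filter clips by detected language and transcribe them with Whisper.
import whisper

TARGET_LANGUAGES = {"fr", "it", "es", "pt"}   # French, Italian, Spanish, Portuguese

model = whisper.load_model("large-v2")

def auto_label(audio_path):
    # transcribe() runs language identification first, then speech recognition.
    result = model.transcribe(audio_path)
    lang, text = result["language"], result["text"].strip()
    # Keep only clips in the desired low-resource languages.
    if lang in TARGET_LANGUAGES:
        return {"path": audio_path, "language": lang, "transcript": text}
    return None

label = auto_label("clip_000001.wav")          # hypothetical clip from the data pool
if label is not None:
    print(label["language"], label["transcript"])
```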
Transformer-Based Video Front-Ends for Audio-Visual Speech Recognition for Single and Multi-Person Video
Audio-visual automatic speech recognition (AV-ASR) extends speech recognition
by introducing the video modality as an additional source of information. In
this work, the information contained in the motion of the speaker's mouth is
used to augment the audio features. The video modality is traditionally
processed with a 3D convolutional neural network (e.g. 3D version of VGG).
Recently, image transformer networks (arXiv:2010.11929) demonstrated the ability
to extract rich visual features for image classification tasks. Here, we
propose to replace the 3D convolution with a video transformer to extract
visual features. We train our baselines and the proposed model on a large scale
corpus of YouTube videos. The performance of our approach is evaluated on a
labeled subset of YouTube videos as well as on the LRS3-TED public corpus. Our
best video-only model obtains 31.4% WER on YTDEV18 and 17.0% on LRS3-TED, 10% and
15% relative improvements over our convolutional baseline. We achieve
state-of-the-art audio-visual recognition performance on LRS3-TED after
fine-tuning our model (1.6% WER). In addition, in a series of experiments
on multi-person AV-ASR, we obtained an average relative reduction of 2% over
our convolutional video front-end.
Comment: 5 pages, 3 figures, published at Interspeech 2022
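The core change described above, replacing a 3D-convolutional front-end with a transformer over patch embeddings, can be sketched as below. For simplicity this toy version encodes each frame independently and mean-pools its patches; the patch size, dimensions, and pooling are assumptions for illustration, not the paper's exact video transformer.

```python
# Minimal sketch of a transformer video front-end producing per-frame visual features.
import torch
import torch.nn as nn

class TransformerVideoFrontEnd(nn.Module):
    def __init__(self, patch=16, dim=256, depth=4, heads=4):
        super().__init__()
        # Strided conv = the standard ViT trick for patch splitting + linear projection.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, video):
        # video: (batch, frames, 3, H, W) crops of the speaker's mouth region.
        b, t, c, h, w = video.shape
        x = self.patch_embed(video.reshape(b * t, c, h, w))   # (b*t, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)                      # (b*t, patches, dim)
        x = self.encoder(x).mean(dim=1)                       # pool patches per frame
        return x.reshape(b, t, -1)                            # (batch, frames, dim)

frontend = TransformerVideoFrontEnd()
mouth_crops = torch.randn(2, 25, 3, 64, 64)                   # one second at 25 fps
visual_features = frontend(mouth_crops)                       # (2, 25, 256)
```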