Transfer Learning from Audio-Visual Grounding to Speech Recognition
Transfer learning aims to reduce the amount of data required to excel at a
new task by re-using the knowledge acquired from learning other related tasks.
This paper proposes a novel transfer learning scenario, which distills robust
phonetic features from grounding models trained to tell whether an image and a
speech segment are semantically correlated, without using any textual
transcripts. Since the semantics of speech are largely determined by its lexical
content, grounding models learn to preserve phonetic information while
disregarding uncorrelated factors, such as speaker and channel. To study the
properties of the features distilled from different layers, we use them
separately as inputs to train multiple speech recognition models. Empirical results
demonstrate that layers closer to the input retain more phonetic information, while
later layers exhibit greater invariance to domain shift. Moreover, whereas
most previous studies include the speech recognition training data when training
the feature extractor, our grounding models are not trained on any of that data,
suggesting broader applicability to new domains.
Comment: Accepted to Interspeech 2019. 4 pages, 2 figures.
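A minimal sketch of the layer-wise probing setup described in the abstract above, assuming a PyTorch-style pipeline: the audio branch of a grounding model is frozen, every layer's activations are kept, and a separate small recognizer is attached to each layer. All module names, shapes, and the toy classifiers are hypothetical placeholders, not the authors' code.

```python
# Illustrative sketch only (not the authors' code): extract features from each
# layer of a frozen audio-visual grounding encoder and compare layers by
# training a separate small recognizer per layer.
import torch
import torch.nn as nn

class GroundingAudioEncoder(nn.Module):
    """Stand-in for the audio branch of an image-speech grounding model."""
    def __init__(self, n_mels=40, hidden=512, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Conv1d(n_mels if i == 0 else hidden, hidden, kernel_size=5, padding=2)
             for i in range(n_layers)]
        )

    def forward(self, mel):                       # mel: (batch, n_mels, time)
        feats, x = [], mel
        for layer in self.layers:
            x = torch.relu(layer(x))
            feats.append(x)                       # keep every layer's output
        return feats                              # list of (batch, hidden, time)

# Freeze the grounding encoder so each recognizer sees fixed distilled features.
encoder = GroundingAudioEncoder().eval()
for p in encoder.parameters():
    p.requires_grad = False

mel = torch.randn(8, 40, 200)                     # dummy batch of log-mel features
with torch.no_grad():
    layer_feats = encoder(mel)

# One toy frame-level classifier per layer; each would be trained independently
# to compare how much phonetic information that layer retains.
asr_heads = [nn.Linear(512, 40) for _ in layer_feats]
logits = [head(f.transpose(1, 2)) for head, f in zip(asr_heads, layer_feats)]
```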
Leveraging Pretrained Image-text Models for Improving Audio-Visual Learning
Visually grounded speech systems learn from paired images and their spoken
captions. Recently, there have been attempts to utilize the visually grounded
models trained from images and their corresponding text captions, such as CLIP,
to improve speech-based visually grounded models' performance. However, the
majority of these models only utilize the pretrained image encoder. Cascaded
SpeechCLIP attempted to generate localized word-level information and utilize
both the pretrained image and text encoders. Despite using both, it suffered a
substantial drop in retrieval performance. We proposed Segmental SpeechCLIP,
which used a hierarchical segmental speech encoder to generate sequences of
word-like units. We used the pretrained CLIP text encoder on top of these
word-like unit representations and showed significant improvements over the
cascaded variant of SpeechCLIP. Segmental SpeechCLIP directly learns the word
embeddings as input to the CLIP text encoder, bypassing the vocabulary
embeddings. Here, we explore mapping audio to CLIP vocabulary embeddings via
regularization and quantization. As our objective is to distill semantic
information into the speech encoders, we explore the usage of large unimodal
pretrained language models as the text encoders. Our method enables us to
bridge image and text encoders, e.g., DINO and RoBERTa, which were trained with
unimodal data. Finally, we extend our framework to audio-only settings where only pairs
of semantically related audio are available. Experiments show that audio-only
systems perform comparably to the audio-visual system.
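A minimal sketch of one way the mapping of audio to vocabulary embeddings mentioned in the abstract above could look, assuming a nearest-neighbour quantization onto a frozen text-encoder vocabulary table with a straight-through gradient. The function name, tensor shapes, and vocabulary size are illustrative assumptions, not the paper's implementation.

```python
# Sketch (assumptions, not the paper's code): snap continuous word-like unit
# embeddings onto the nearest entries of a fixed vocabulary embedding table
# before feeding them to a pretrained text encoder.
import torch

def quantize_to_vocab(units, vocab_emb):
    """units: (batch, n_units, dim) continuous word-like unit embeddings.
    vocab_emb: (vocab_size, dim) frozen text-encoder vocabulary embeddings.
    Returns embeddings replaced by their nearest vocabulary entries."""
    # Pairwise distances between each unit and every vocabulary embedding.
    dists = torch.cdist(units, vocab_emb.unsqueeze(0).expand(units.size(0), -1, -1))
    nearest = dists.argmin(dim=-1)                 # (batch, n_units)
    quantized = vocab_emb[nearest]                 # (batch, n_units, dim)
    # Straight-through estimator: forward pass uses the quantized vectors,
    # backward pass routes gradients to the continuous units.
    return units + (quantized - units).detach()

# Toy usage: 2 utterances, 5 word-like units each, 512-dim embeddings,
# quantized onto a 1000-entry vocabulary table before the text encoder.
units = torch.randn(2, 5, 512, requires_grad=True)
vocab_emb = torch.randn(1000, 512)
text_encoder_input = quantize_to_vocab(units, vocab_emb)
```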