Transfer Learning from Audio-Visual Grounding to Speech Recognition
Transfer learning aims to reduce the amount of data required to excel at a
new task by re-using the knowledge acquired from learning other related tasks.
This paper proposes a novel transfer learning scenario, which distills robust
phonetic features from grounding models that are trained to tell whether an
image and a spoken caption are semantically correlated, without using any
textual transcripts. Since the semantics of speech are largely determined by
its lexical content, grounding models learn to preserve phonetic information while
disregarding uncorrelated factors, such as speaker and channel. To study the
properties of the features distilled from different layers, we use each
layer's output separately as input to train a speech recognition model.
Empirical results demonstrate that layers closer to the input retain more
phonetic information, while later layers exhibit greater invariance to domain
shift. Moreover, whereas most previous studies train their feature extractors
on the same data used for speech recognition, our grounding models are not
trained on any of those data, indicating broader applicability to new domains.

Comment: Accepted to Interspeech 2019. 4 pages, 2 figures.
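For concreteness, here is a minimal sketch of the pipeline the abstract describes: freezing the audio branch of a grounding model and probing each layer's activations as input features for speech recognition. Everything in it is an illustrative assumption, not the paper's actual model: the AudioEncoder architecture, the grounding_score similarity, the three-layer depth, and the 40-dimensional log-Mel input are placeholders.

```python
import torch
import torch.nn as nn


class AudioEncoder(nn.Module):
    """Stand-in for the audio branch of a grounding model: a small
    convolutional stack; the real architecture is an assumption here."""

    def __init__(self, n_mels=40, hidden=256):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(n_mels if i == 0 else hidden, hidden, 5, padding=2),
                nn.ReLU(),
            )
            for i in range(3)
        )

    def forward(self, x):
        # Return every layer's activations so each can be probed
        # separately as an input representation for ASR.
        feats = []
        for layer in self.layers:
            x = layer(x)
            feats.append(x)
        return feats


def grounding_score(audio_feats, image_emb):
    # Semantic-correlation score between an utterance and an image:
    # mean-pool the top audio layer over time, then dot-product with a
    # pooled image embedding. An illustrative objective, not the
    # paper's actual loss.
    pooled = audio_feats[-1].mean(dim=-1)        # (batch, hidden)
    return (pooled * image_emb).sum(dim=-1)      # (batch,)


# Feature distillation: freeze the (pretrained) encoder so that no ASR
# training data ever updates it, then extract per-layer activations.
encoder = AudioEncoder()
encoder.eval()
log_mel = torch.randn(8, 40, 300)                # (batch, n_mels, frames)
with torch.no_grad():
    layer_feats = encoder(log_mel)

# During grounding training (not shown), matched image-speech pairs
# would be pushed to score higher than mismatched pairs:
image_emb = torch.randn(8, 256)                  # hypothetical image embedding
scores = grounding_score(layer_feats, image_emb)

# Each layer's features would then feed a separate ASR model; e.g.,
# probe the second layer:
asr_input = layer_feats[1].transpose(1, 2)       # (batch, frames, hidden)
print(asr_input.shape)                           # torch.Size([8, 300, 256])
```

Because the encoder is frozen and was never exposed to the ASR training data, comparing ASR models trained on each entry of layer_feats isolates how much phonetic information each layer retains, which is the layer-wise comparison the abstract reports.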