Contextualized Spoken Word Representations from Convolutional Autoencoders
Substantial work has gone into building text-based language models for
a range of NLP tasks, but audio-based language models remain comparatively
underexplored. This paper proposes a Convolutional Autoencoder based neural
architecture to model syntactically and semantically adequate contextualized
representations of variable-length spoken words. Such representations can
not only advance audio-based NLP tasks but can also curtail the loss of
information such as tone, expression, and accent that occurs when speech is
converted to text to perform these tasks.
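
The abstract does not specify the exact architecture, so the following is only a minimal sketch of a convolutional autoencoder over variable-length spoken-word input; the mel-spectrogram framing, the layer sizes, and the names `n_mels` and `embed_dim` are all illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch, assuming mel-spectrogram input of shape
# (batch, n_mels, time); layer sizes are illustrative, not the
# paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvWordAutoencoder(nn.Module):
    def __init__(self, n_mels=80, embed_dim=256):
        super().__init__()
        # Encoder: 1-D convolutions over time; adaptive pooling then
        # collapses the variable time axis to a fixed-size embedding.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, embed_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)
        # Decoder: mirror convolutions that reconstruct the spectrogram
        # after the embedding is broadcast back over the time axis.
        self.decoder = nn.Sequential(
            nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # x: (batch, n_mels, time) -- time may differ across words.
        h = self.encoder(x)            # (batch, embed_dim, time)
        z = self.pool(h).squeeze(-1)   # (batch, embed_dim)
        # Broadcast the fixed embedding across the original length so
        # the decoder can reconstruct a same-sized spectrogram.
        z_seq = z.unsqueeze(-1).expand(-1, -1, x.size(-1))
        return z, self.decoder(z_seq)

model = ConvWordAutoencoder()
spec = torch.randn(4, 80, 37)      # a batch of 37-frame spoken words
z, recon = model(spec)
loss = F.mse_loss(recon, spec)     # reconstruction objective
```

Adaptive pooling is one common way to obtain a fixed-dimensional word vector from inputs of differing duration; the paper may handle variable length differently.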
performance of the proposed model is validated by (1) examining the generated
vector space, and (2) evaluating it on three benchmark word-similarity
datasets against existing widely used text-based language models trained on
the corresponding transcriptions. The proposed model demonstrated robust
performance compared to the two text-based baselines.
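
As a rough illustration of the word-similarity protocol mentioned above: such benchmarks are typically scored by correlating model similarities with human judgments. The sketch below assumes a benchmark of `(word1, word2, human_score)` triples and an `embed(word)` lookup into the learned vector space; both names are placeholders, not the paper's actual data or API.

```python
# A minimal word-similarity evaluation sketch; `pairs` and `embed`
# are hypothetical stand-ins for a benchmark dataset and the model's
# embedding lookup.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(pairs, embed):
    # pairs: iterable of (word1, word2, human_score) triples
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        model_scores.append(cosine(embed(w1), embed(w2)))
        human_scores.append(gold)
    # Spearman rank correlation between model similarities and human
    # judgments is the standard metric on these benchmarks.
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```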