Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages
Lyrics alignment has gained considerable attention in recent years.
State-of-the-art systems either re-use established speech recognition toolkits,
or design end-to-end solutions involving a Connectionist Temporal
Classification (CTC) loss. However, both approaches suffer from specific
weaknesses: toolkits are known for their complexity, and CTC systems use a loss
designed for transcription which can limit alignment accuracy. In this paper,
we instead use a contrastive learning procedure that derives cross-modal
embeddings linking the audio and text domains. This way, we obtain a novel
system that is simple to train end-to-end, can make use of weakly annotated
training data, jointly learns a powerful text model, and is tailored to
alignment. The system is not only the first to yield an average absolute error
below 0.2 seconds on the standard Jamendo dataset, but it is also robust to
other languages, even when trained on English data only. Finally, we release
word-level alignments for the JamendoLyrics Multi-Lang dataset.
Comment: 5 pages, accepted at the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2023
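
The abstract names contrastive learning but does not spell out the loss. As a rough illustration only (plain NumPy; the function name, shapes, and temperature value are chosen here, not taken from the paper), the sketch below implements a generic symmetric InfoNCE-style contrastive loss over a batch of matched audio/text embedding pairs, one standard way to derive cross-modal embeddings of the kind described:

    import numpy as np

    def info_nce(audio_emb, text_emb, temperature=0.07):
        """Symmetric contrastive loss for matched (audio, text) pairs.

        audio_emb, text_emb: (batch, dim) arrays; row i of each is a match.
        """
        # L2-normalize so dot products become cosine similarities.
        a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
        t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

        logits = a @ t.T / temperature  # (batch, batch); positives on diagonal
        idx = np.arange(len(logits))

        def xent(z):
            # Row-wise softmax cross-entropy against the diagonal targets.
            z = z - z.max(axis=1, keepdims=True)
            logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
            return -logp[idx, idx].mean()

        # Average the audio-to-text and text-to-audio directions.
        return 0.5 * (xent(logits) + xent(logits.T))

At alignment time, per-frame audio embeddings would then be scored against per-token text embeddings and a monotonic path decoded through the resulting similarity matrix (e.g. with dynamic time warping); the abstract does not describe the exact decoding step.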
TheanoLM - An Extensible Toolkit for Neural Network Language Modeling
We present a new tool for training neural network language models (NNLMs),
scoring sentences, and generating text. The tool is written using the Python
library Theano, which allows researchers to easily extend it and tune any aspect
of the training process. Despite this flexibility, Theano is able to
generate extremely fast native code that can utilize a GPU or multiple CPU
cores in order to parallelize the heavy numerical computations. The tool has
been evaluated in difficult Finnish and English conversational speech
recognition tasks, and significant improvement was obtained over our best
back-off n-gram models. The results we obtained in the Finnish task were
compared to those from the existing RNNLM and RWTHLM toolkits and found to be as
good as or better, while training times were an order of magnitude shorter.
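
As a rough sketch of the workflow the abstract attributes to Theano (this is not the TheanoLM API; the model, sizes, and names are invented for illustration), the snippet below builds the softmax output layer of a toy language model as a symbolic graph, lets Theano derive the gradients, and compiles a native training step:

    import numpy as np
    import theano
    import theano.tensor as T

    VOCAB, DIM, LR = 1000, 64, 0.1  # toy sizes, chosen arbitrarily

    x = T.matrix("x")   # batch of context vectors, shape (batch, DIM)
    y = T.ivector("y")  # target word ids, shape (batch,)

    # Softmax output layer of a toy language model.
    W = theano.shared(0.01 * np.random.randn(DIM, VOCAB), name="W")
    b = theano.shared(np.zeros(VOCAB), name="b")

    probs = T.nnet.softmax(T.dot(x, W) + b)
    loss = T.nnet.categorical_crossentropy(probs, y).mean()

    # Theano differentiates the symbolic graph and compiles the SGD update
    # into fast native code, which can also target a GPU.
    grads = T.grad(loss, [W, b])
    train_step = theano.function(
        inputs=[x, y],
        outputs=loss,
        updates=[(p, p - LR * g) for p, g in zip([W, b], grads)],
    )

    # One SGD step on random toy data.
    xb = np.random.randn(8, DIM)
    yb = np.random.randint(0, VOCAB, size=8).astype("int32")
    print(train_step(xb, yb))

Because the whole model lives in a single symbolic graph, extending it (an extra layer, a different optimizer) only means editing that graph; gradient derivation and native code generation are handled by Theano, which is the extensibility the abstract emphasizes.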