Learning neural trans-dimensional random field language models with noise-contrastive estimation
Trans-dimensional random field language models (TRF LMs), in which sentences are modeled as a collection of random fields, have shown performance close to that of LSTM LMs in speech recognition and are computationally more efficient at inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits their scalability to large training corpora.
In this paper, several techniques for both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated as the exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by combining a deep convolutional neural network (CNN) and a bidirectional LSTM in the potential function to extract deep hierarchical features and bidirectional sequential features. Together, these techniques enable the successful and efficient training of neural TRF LMs on a 40x larger training set with only one third of the training time, and yield a further 4.7% relative WER reduction on top of a strong LSTM LM baseline.
Comment: 5 pages and 2 figures
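For concreteness, the sketch below illustrates the NCE objective the abstract describes: the log normalization constant is treated as a free parameter and estimated jointly with the model, and training reduces to classifying real sentences against sentences drawn from a noise LM q. This is a minimal sketch of standard NCE in PyTorch, not the paper's implementation; all names (nce_loss, log_phi_*, log_q_*) are illustrative.

```python
import math
import torch
import torch.nn.functional as F

def nce_loss(log_phi_data, log_phi_noise, log_q_data, log_q_noise, log_z, nu):
    """NCE loss for an unnormalized LM with nu noise sentences per data sentence.

    log_phi_*: unnormalized model log-scores log phi_theta(x)
    log_q_*:   log-probabilities of the same sentences under the noise LM q
    log_z:     learnable scalar, the estimated log normalization constant
    """
    # Model log-density: log p_theta(x) = log phi_theta(x) - log Z,
    # with log Z fitted jointly with theta (the key property of NCE).
    log_p_data = log_phi_data - log_z
    log_p_noise = log_phi_noise - log_z

    # Logit of P(label = data | x): log p_theta(x) - log(nu * q(x)).
    logits_data = log_p_data - log_q_data - math.log(nu)
    logits_noise = log_p_noise - log_q_noise - math.log(nu)

    # Binary classification: data sentences labeled 1, noise sentences labeled 0.
    loss_data = F.binary_cross_entropy_with_logits(
        logits_data, torch.ones_like(logits_data))
    loss_noise = F.binary_cross_entropy_with_logits(
        logits_noise, torch.zeros_like(logits_noise))
    return loss_data + nu * loss_noise
```

In practice log_z would be a torch.nn.Parameter, so the optimizer estimates the normalization constant alongside the network weights instead of requiring an intractable summation over all sentences.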
Exploring Energy-based Language Models with Different Architectures and Training Methods for Speech Recognition
Energy-based language models (ELMs) parameterize an unnormalized distribution over natural sentences and are radically different from popular autoregressive language models (ALMs). As an important application, ELMs have been successfully used to calculate sentence scores in speech recognition, but they have so far relied on less modern CNN or LSTM networks. Recent progress in Transformer networks and large pretrained models such as BERT and GPT2 opens new possibilities for further advancing ELMs. In this paper, we explore different architectures of energy functions and different training methods to investigate the capabilities of ELMs in rescoring for speech recognition, all using large pretrained models as backbones.
Comment: Accepted into INTERSPEECH 202
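As a hedged illustration of why ELMs are convenient for rescoring: the normalization constant log Z is shared by all hypotheses of an utterance, so it cancels when ranking, and the negative energy alone serves as the sentence score. The sketch below assumes a generic energy_fn (e.g., a scalar head on a pretrained backbone) and a simple log-linear interpolation with the first-pass score; these names and the combination scheme are assumptions for illustration, not the paper's method.

```python
import torch

@torch.no_grad()
def rescore_nbest(hypotheses, first_pass_scores, energy_fn, lm_weight=0.5):
    """Pick the best ASR hypothesis using an energy-based LM score.

    energy_fn maps a sentence to a scalar energy E(x); since
    log p(x) = -E(x) - log Z and log Z is identical for every hypothesis
    of the same utterance, -E(x) alone is a valid ranking score.
    """
    # Combine the first-pass (acoustic + shallow LM) score with the ELM score.
    combined = [
        score + lm_weight * (-energy_fn(hyp))
        for hyp, score in zip(hypotheses, first_pass_scores)
    ]
    best = max(range(len(hypotheses)), key=combined.__getitem__)
    return hypotheses[best]
```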