Learning neural trans-dimensional random field language models with noise-contrastive estimation
Trans-dimensional random field language models (TRF LMs), in which sentences are
modeled as a collection of random fields, have shown performance close to that
of LSTM LMs in speech recognition and are computationally more efficient in
inference. However, the training efficiency of neural TRF LMs is not
satisfactory, which limits the scalability of TRF LMs to large training corpora.
In this paper, several techniques covering both model formulation and parameter
estimation are proposed to improve the training efficiency and the performance
of neural TRF LMs. First, TRFs are reformulated as exponential tilting of a
reference distribution. Second, noise-contrastive estimation (NCE) is
introduced to jointly estimate the model parameters and normalization
constants. Third, we extend neural TRF LMs by combining a deep convolutional
neural network (CNN) and a bidirectional LSTM in the potential function to
extract deep hierarchical features and bidirectional sequential features.
Together, these techniques enable the successful and efficient training of
neural TRF LMs on a 40x larger training set with only 1/3 of the training time,
and further reduce the WER by a relative 4.7% on top of a strong LSTM LM
baseline.
Comment: 5 pages and 2 figures
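The key idea behind the NCE step above is to turn density estimation into binary classification: the model (with a learnable normalization constant) must discriminate real data from samples drawn from a known noise distribution. The following is a minimal NumPy sketch of the standard NCE objective for an unnormalized model, not the paper's implementation; the function name, toy scores, and noise log-probabilities are illustrative assumptions.

```python
import numpy as np

def log_sigmoid(z):
    # numerically stable log(sigmoid(z)) = -log(1 + exp(-z))
    return -np.logaddexp(0.0, -z)

def nce_loss(score_data, score_noise, log_z, logq_data, logq_noise, k):
    """Standard NCE objective for an unnormalized model (illustrative).

    score_*   : unnormalized model log-scores log p~(x)
    log_z     : learned log normalization constant, so log p(x) = score - log_z
    logq_*    : log-probabilities under the noise distribution q
    k         : number of noise samples per data sample
    """
    # G(x) = log p(x) - log(k q(x)); P(data | x) = sigmoid(G(x))
    g_data = (score_data - log_z) - (np.log(k) + logq_data)
    g_noise = (score_noise - log_z) - (np.log(k) + logq_noise)
    # data samples should be classified as data, noise samples as noise
    return -(np.mean(log_sigmoid(g_data)) + k * np.mean(log_sigmoid(-g_noise)))

# toy check: one data sample against k = 4 noise samples from a uniform-like q
loss = nce_loss(np.array([1.0]), np.array([-0.4, 0.2, -1.0, 0.5]),
                0.0, np.array([-2.0]), np.full(4, -2.0), 4)
```

Because `log_z` enters the loss as an ordinary parameter, gradient descent can estimate the normalization constant jointly with the model weights, which is what makes NCE attractive for globally normalized models such as TRFs.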
Joint Bayesian Gaussian discriminant analysis for speaker verification
State-of-the-art i-vector based speaker verification relies on variants of
Probabilistic Linear Discriminant Analysis (PLDA) for discriminant analysis. We
are mainly motivated by the recent joint Bayesian (JB) method, which was
originally proposed for discriminant analysis in face verification. We apply JB
to speaker verification and make three contributions beyond the original JB.
1) In contrast to the EM iterations with approximated statistics in the
original JB, EM iterations with exact statistics are employed and give better
performance. 2) We propose simultaneous diagonalization (SD) of the
within-class and between-class covariance matrices to achieve efficient
testing, which has a broader application scope than the SVD-based efficient
testing method in the original JB. 3) We scrutinize the similarities and
differences between various Gaussian PLDAs and JB, complementing the previous
analysis that compared JB only with Prince-Elder PLDA. Extensive experiments on
NIST SRE10 core condition 5 empirically validate the superiority of JB, with a
faster convergence rate and a 9-13% EER reduction compared with
state-of-the-art PLDA.
Comment: accepted by ICASSP201
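The simultaneous diagonalization in contribution 2 refers to a classical linear-algebra construction: find a single transform W with W^T Sw W = I and W^T Sb W diagonal, which reduces scoring to cheap per-dimension operations. Below is a minimal NumPy sketch of that construction via Cholesky whitening plus an eigendecomposition, offered under stated assumptions, not the paper's actual SD procedure; the function name and the random test matrices are illustrative.

```python
import numpy as np

def simultaneous_diagonalization(Sw, Sb):
    """Find W with W.T @ Sw @ W = I and W.T @ Sb @ W diagonal (illustrative).

    Sw : within-class covariance (symmetric positive definite)
    Sb : between-class covariance (symmetric)
    """
    # Whiten Sw via its Cholesky factor: Sw = L @ L.T
    L = np.linalg.cholesky(Sw)
    Linv = np.linalg.inv(L)
    # Eigendecompose the whitened between-class covariance (symmetric)
    M = Linv @ Sb @ Linv.T
    evals, V = np.linalg.eigh(M)   # V is orthonormal
    W = Linv.T @ V                  # combined whitening + rotation
    return W, evals

# toy check with random SPD within-class and low-rank between-class covariances
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sw = A @ A.T + 4.0 * np.eye(4)      # well-conditioned SPD
B = rng.standard_normal((4, 2))
Sb = B @ B.T                         # rank-2 between-class covariance
W, d = simultaneous_diagonalization(Sw, Sb)
```

Unlike SVD-based tricks that exploit a particular factorized model structure, this construction only requires Sw to be positive definite, which is one plausible reading of the "broader application scope" claimed in the abstract.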