4 research outputs found
Improving accuracy of rare words for RNN-Transducer through unigram shallow fusion
End-to-end automatic speech recognition (ASR) systems, such as recurrent
neural network transducer (RNN-T), have become popular, but rare words remain a
challenge. In this paper, we propose a simple, yet effective method called
unigram shallow fusion (USF) to improve rare word recognition for RNN-T. In USF, we
extract rare words from RNN-T training data based on unigram count, and apply a
fixed reward when the word is encountered during decoding. We show that this
simple method can improve performance on rare words by 3.7% WER relative
without degradation on a general test set, and the improvement from USF is
additive to any additional language model based rescoring. Then, we show that
the same USF does not work on a conventional hybrid system. Finally, we reason
that USF works by fixing errors in the probability estimates of words introduced
by the Viterbi search used during decoding with subword-based RNN-T.
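The USF mechanism described above can be sketched as follows. The count threshold, reward value, and scoring function here are illustrative assumptions, not the paper's exact implementation:

```python
from collections import Counter

def extract_rare_words(corpus, threshold=1):
    """Collect words whose unigram count in the training data falls
    at or below the threshold (illustrative cutoff)."""
    counts = Counter(w for sentence in corpus for w in sentence.split())
    return {w for w, c in counts.items() if c <= threshold}

def usf_score(base_log_prob, word, rare_words, reward=1.0):
    """Add a fixed log-domain reward whenever a rare word is hypothesized
    during beam-search decoding; other words keep their base score."""
    return base_log_prob + (reward if word in rare_words else 0.0)

corpus = ["the cat sat", "the dog sat", "aardvark ran"]
rare = extract_rare_words(corpus, threshold=1)   # words seen at most once
boosted = usf_score(-4.2, "aardvark", rare)      # rare word gets the reward
unchanged = usf_score(-4.2, "the", rare)         # frequent word is untouched
```

In a real decoder the reward would be applied per hypothesis inside the beam search, so it biases the ranking without altering the model's parameters.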
Improving Tail Performance of a Deliberation E2E ASR Model Using a Large Text Corpus
End-to-end (E2E) automatic speech recognition (ASR) systems lack the distinct
language model (LM) component that characterizes traditional speech systems.
While this simplifies the model architecture, it complicates the task of
incorporating text-only data into training, which is important to the
recognition of tail words that do not occur often in audio-text pairs. While
shallow fusion has been proposed as a method for incorporating a pre-trained LM
into an E2E model at inference time, it has not yet been explored for very
large text corpora, and it has been shown to be very sensitive to
hyperparameter settings in the beam search. In this work, we apply shallow
fusion to incorporate a very large text corpus into a state-of-the-art E2E ASR
model. We explore the impact of model size and show that intelligent pruning of
the training set can be more effective than increasing the parameter count.
Additionally, we show that incorporating the LM in minimum word error rate
(MWER) fine-tuning makes shallow fusion far less dependent on optimal
hyperparameter settings, reducing the difficulty of that tuning problem.
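Shallow fusion itself is a log-linear interpolation of E2E and external LM scores at inference time. A minimal sketch, with toy vocabularies and an assumed interpolation weight `lam` (exactly the kind of beam-search hyperparameter the abstract notes is hard to tune without MWER fine-tuning):

```python
import math

def shallow_fusion_score(asr_log_probs, lm_log_probs, lam=0.3):
    """Per-token fused score: log P_E2E(y|x) + lam * log P_LM(y).
    Tokens unknown to the LM get a floor log-probability."""
    return {tok: asr_log_probs[tok] + lam * lm_log_probs.get(tok, -10.0)
            for tok in asr_log_probs}

# Toy example: the LM disambiguates two acoustically similar hypotheses.
asr = {"paris": math.log(0.6), "parrots": math.log(0.4)}
lm = {"paris": math.log(0.9), "parrots": math.log(0.1)}
fused = shallow_fusion_score(asr, lm, lam=0.5)
best = max(fused, key=fused.get)
```

Because the LM term is added only at decoding time, the E2E model needs no retraining; the trade-off is the sensitivity to `lam` discussed above.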
Approaches to Improving Recognition of Underrepresented Named Entities in Hybrid ASR Systems
In this paper, we present a series of complementary approaches to improve the
recognition of underrepresented named entities (NE) in hybrid ASR systems
without compromising overall word error rate performance. The underrepresented
words correspond to rare or out-of-vocabulary (OOV) words in the training data
and therefore cannot be modeled reliably. We begin with a graphemic lexicon,
which removes the need for phonetic models in hybrid ASR. We study it
under different settings and demonstrate its effectiveness in dealing with
underrepresented NEs. Next, we study the impact of a neural language model (LM)
with letter-based features designed to handle infrequent words. After that, we
attempt to enrich the representations of underrepresented NEs in a pretrained
neural LM by borrowing the embedding representations of well-represented words.
This lets us gain a significant performance improvement on underrepresented NE
recognition. Finally, we boost the likelihood scores of utterances containing
NEs in the word lattices rescored by neural LMs and gain further performance
improvement. The combination of the aforementioned approaches improves NE
recognition by up to 42% relative.
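The embedding-borrowing step can be sketched as copying the embedding of a related well-represented word into the row of a rare NE. The pairing of rare and rich words here is a toy assumption (the paper's actual selection criterion is not reproduced):

```python
def borrow_embeddings(emb, rare_to_rich):
    """For each underrepresented word, overwrite its (poorly trained)
    embedding with a copy of a related well-represented word's vector,
    so the LM scores the rare NE more like its frequent counterpart."""
    for rare, rich in rare_to_rich.items():
        emb[rare] = list(emb[rich])  # copy, don't alias the same list
    return emb

# "quxville" is a hypothetical rare NE; "london" stands in as a
# well-represented word with a reliable embedding.
emb = {"london": [0.8, 0.1], "quxville": [0.01, 0.02]}
emb = borrow_embeddings(emb, {"quxville": "london"})
```

Copying rather than aliasing matters if the borrowed embeddings are later fine-tuned independently.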
Language model fusion for streaming end to end speech recognition
Streaming processing of speech audio is required for many contemporary
practical speech recognition tasks. Even with the large corpora of manually
transcribed speech data available today, it is impossible for such corpora to
cover adequately the long tail of linguistic content that's important for tasks
such as open-ended dictation and voice search. We seek to address both the
streaming and the tail recognition challenges by using a language model (LM)
trained on unpaired text data to enhance the end-to-end (E2E) model. We extend
shallow fusion and cold fusion approaches to streaming Recurrent Neural Network
Transducer (RNNT), and also propose two new competitive fusion approaches that
further enhance the RNNT architecture. Our results on multiple languages with
varying training set sizes show that these fusion methods improve streaming
RNNT performance through introducing extra linguistic features. Cold fusion
works consistently better on streaming RNNT, with up to an 8.5% WER improvement.
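Cold fusion, unlike shallow fusion, trains the E2E model jointly with a frozen pre-trained LM, using a learned gate to control how much LM information enters the decoder. A scalar, pure-Python schematic (real implementations use vector-valued states and trained weight matrices; `w_gate` and `b_gate` here are placeholder parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cold_fusion_step(dec_state, lm_feature, w_gate=0.5, b_gate=0.0):
    """Schematic cold-fusion gate: a gate g, computed from the decoder
    state, modulates how much the frozen LM's feature contributes to
    the fused representation fed to the output layer."""
    g = sigmoid(w_gate * dec_state + b_gate)  # gate in (0, 1)
    return dec_state + g * lm_feature          # fused feature

fused = cold_fusion_step(dec_state=1.0, lm_feature=2.0)
```

Because the gate is trained with the rest of the E2E model, cold fusion avoids the decode-time weight tuning that shallow fusion requires, which is consistent with its more stable gains reported above.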