Neural contextual biasing effectively improves automatic speech recognition
(ASR) for crucial phrases within a speaker's context, particularly those that
are infrequent in the training data. This work proposes contextual text
injection (CTI) to enhance contextual ASR. CTI leverages not only the paired
speech-text data, but also a much larger corpus of unpaired text to optimize
the ASR model and its biasing component. Unpaired text is converted into
speech-like representations and used to guide the model's attention towards
relevant bias phrases. Moreover, we introduce contextual text-injected
minimum word error rate (CTI-MWER) training, which minimizes the expected WER
caused by contextual biasing when unpaired text is injected into the model.
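For reference, a standard MWER objective over an N-best list, here conditioned
on the bias context and the injected text representation (the notation is ours,
not the paper's), can be written as

\mathcal{L}_{\mathrm{MWER}} = \sum_{i=1}^{N} \hat{P}(y_i \mid x, C)\,\big(W(y_i, y^{*}) - \overline{W}\big),
\qquad
\hat{P}(y_i \mid x, C) = \frac{P(y_i \mid x, C)}{\sum_{j=1}^{N} P(y_j \mid x, C)},

where x denotes the (possibly text-injected) input representation, C the set of
bias phrases, W(y_i, y^{*}) the number of word errors of hypothesis y_i against
the reference y^{*}, and \overline{W} the average word-error count over the
N-best list.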
Experiments show that CTI with 100 billion text sentences can achieve up to
43.3% relative WER reduction from a strong neural biasing model. CTI-MWER
provides a further relative improvement of 23.5%.
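To make the injection mechanism concrete, the sketch below shows one plausible
way to map unpaired text to speech-like representations and attend over a
bias-phrase memory. All module names, shapes, and the residual combination are
illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ContextualTextInjection(nn.Module):
    """Minimal sketch of contextual text injection (hypothetical design)."""

    def __init__(self, vocab_size: int, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Maps unpaired text tokens to "speech-like" representations.
        self.text_encoder = nn.Sequential(
            nn.Embedding(vocab_size, d_model),
            nn.Linear(d_model, d_model),
        )
        # Encodes candidate bias-phrase tokens into a key/value memory.
        self.phrase_embed = nn.Embedding(vocab_size, d_model)
        # Biasing component: queries attend over the bias-phrase memory.
        self.bias_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, text_tokens, bias_phrase_tokens):
        # (B, T, D): speech-like representations derived from unpaired text.
        queries = self.text_encoder(text_tokens)
        # (B, P, D): one vector per bias-phrase token; pooling per phrase
        # would be an alternative design choice.
        memory = self.phrase_embed(bias_phrase_tokens)
        # Attention weights indicate which bias phrases are relevant.
        biased, attn_weights = self.bias_attn(queries, memory, memory)
        # Residual combination of unbiased and biased representations.
        return queries + biased, attn_weights

# Usage with random token ids standing in for real text and phrases.
model = ContextualTextInjection(vocab_size=1000)
text = torch.randint(0, 1000, (2, 12))     # batch of unpaired text
phrases = torch.randint(0, 1000, (2, 8))   # flattened bias-phrase tokens
out, weights = model(text, phrases)
```

The key design point this sketch illustrates is that the same biasing
attention can be driven by text-derived queries during text injection and by
acoustic-encoder outputs during paired training, so the unpaired corpus
optimizes both the ASR model and its biasing component.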