Fast N-Gram Language Model Look-Ahead for Decoders With Static Pronunciation Prefix Trees

By Marijn Huijbregts and Franciska De Jong

Abstract

Decoders that make use of token passing restrict their search space through various types of token pruning. With the Language Model Look-Ahead (LMLA) technique, the number of tokens that can be pruned increases without loss of decoding precision. Unfortunately, for token-passing decoders that use a single static pronunciation prefix tree, full n-gram LMLA considerably increases the number of language model probability calculations that are needed. In this paper, a method for applying full n-gram LMLA in a decoder with a single static pronunciation prefix tree is introduced. The experiments show that this method improves the speed of the decoder without an increase in search errors.
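
In the standard formulation of LMLA, each node of the pronunciation prefix tree is assigned the best language model probability of any word still reachable from that node, and tokens are pruned against the combined acoustic and look-ahead score. The sketch below illustrates that bottom-up computation for a single LM history; the tree layout, `lm_log_prob`, and all other names are illustrative assumptions for this summary, not the authors' implementation.

```python
# Minimal sketch of n-gram LM look-ahead (LMLA) over a pronunciation prefix
# tree. Names and data are hypothetical; a real decoder would query its own
# language model and tree structures.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Node:
    phone: Optional[str] = None          # phone label on the arc into this node
    word: Optional[str] = None           # word identity if a pronunciation ends here
    children: List["Node"] = field(default_factory=list)

def lm_log_prob(word: str, history: Tuple[str, ...]) -> float:
    """Placeholder n-gram score log P(word | history)."""
    toy_lm = {("the",): {"cat": -0.7, "car": -1.2, "cart": -2.3}}
    return toy_lm.get(history, {}).get(word, -10.0)

def compute_lmla(node: Node, history: Tuple[str, ...],
                 cache: Dict[int, float]) -> float:
    """Bottom-up pass: a node's look-ahead score is the best LM score of any
    word whose pronunciation passes through it."""
    best = lm_log_prob(node.word, history) if node.word else float("-inf")
    for child in node.children:
        best = max(best, compute_lmla(child, history, cache))
    cache[id(node)] = best               # per-node look-ahead score for this history
    return best

# Tiny tree: 'cat' = k ae t, 'car' = k aa r, 'cart' = k aa r t
leaf_t  = Node("t", word="cart")
leaf_r  = Node("r", word="car", children=[leaf_t])
node_aa = Node("aa", children=[leaf_r])
node_ae = Node("ae", children=[Node("t", word="cat")])
root    = Node(children=[Node("k", children=[node_ae, node_aa])])

scores: Dict[int, float] = {}
compute_lmla(root, ("the",), scores)
# During decoding, a token at a node is pruned using its acoustic score plus
# scores[id(node)] for its current LM history; recomputing this table for every
# distinct n-gram history is the cost the paper's method aims to reduce.
```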

Topics: Automatic speech recognition, decoding
Year: 2012
OAI identifier: oai:CiteSeerX.psu:10.1.1.216.1424
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://eprints.eemcs.utwente.n... (external link)