
    Conversion of NNLM to Back-off language model in ASR

    In daily life, automatic speech recognition is widely used, for example in security systems. When converting speech to text with a neural network, the language model is one of the blocks on which the efficiency of speech recognition depends. In this paper we develop an algorithm to convert a Neural Network Language Model (NNLM) into a back-off language model for more efficient decoding. For large-vocabulary systems this conversion gives more efficient results. The efficiency of a language model is assessed by its perplexity and Word Error Rate (WER).
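    The abstract does not spell out the conversion procedure, so the following is only a minimal sketch of the general idea, not the paper's algorithm. It assumes hypothetical placeholder functions `nnlm_prob(history, word)` and `unigram_prob(word)`: for each history, the most probable words predicted by the NNLM are written out as explicit n-grams, and a back-off weight redistributes the remaining probability mass onto a lower-order model, as in standard ARPA back-off LMs.

```python
import math

# Hypothetical NNLM interface: nnlm_prob(history, word) returns P(word | history).
# Both distributions are stubbed as uniform purely so the sketch runs.
VOCAB = ["the", "cat", "sat", "</s>"]

def nnlm_prob(history, word):
    return 1.0 / len(VOCAB)          # placeholder for a real neural LM query

def unigram_prob(word):
    return 1.0 / len(VOCAB)          # placeholder lower-order (back-off) model

def convert_history(history, keep=2):
    """Approximate the NNLM at one history as explicit n-grams plus a back-off weight."""
    # Query the NNLM for every vocabulary word and keep only the most probable entries.
    scored = sorted(((nnlm_prob(history, w), w) for w in VOCAB), reverse=True)
    kept = scored[:keep]

    # Back-off weight alpha(h) redistributes the dropped mass onto the lower-order model:
    #   alpha(h) = (1 - sum_kept P(w|h)) / (1 - sum_kept P_lower(w))
    kept_mass = sum(p for p, _ in kept)
    lower_mass = sum(unigram_prob(w) for _, w in kept)
    alpha = (1.0 - kept_mass) / (1.0 - lower_mass)

    # ARPA files store log10 probabilities.
    ngrams = [(math.log10(p), history, w) for p, w in kept]
    return ngrams, math.log10(alpha)

if __name__ == "__main__":
    ngrams, log_alpha = convert_history("the", keep=2)
    for logp, h, w in ngrams:
        print(f"{logp:.4f}\t{h} {w}")
    print(f"back-off weight for 'the': {log_alpha:.4f}")
```

    In a real conversion the placeholder distributions would be replaced by actual NNLM queries over the decoder's vocabulary, and the explicit n-grams and back-off weights would be written to an ARPA file so a conventional decoder can use them.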

    The use of a linguistically motivated language model in conversational speech recognition

    Structured language models have recently been shown to give significant improvements in large-vocabulary recognition relative to traditional word N-gram models, but typically imply a heavy computational burden and have not been applied to large training sets or complex recognition systems. In previous work, we developed a linguistically motivated and computationally efficient almost-parsing language model using a data structure derived from Constraint Dependency Grammar parses that tightly integrates knowledge of words, lexical features, and syntactic constraints. In this paper we show that such a model can be used effectively and efficiently in all stages of a complex, multi-pass conversational telephone speech recognition system. Compared to a state-of-the-art 4-gram interpolated word- and class-based language model, we obtained a 6.2% relative word error reduction (a 1.6% absolute reduction) on a recent NIST evaluation set.
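    As a quick consistency check of the reported figures (the baseline WER is not stated in the abstract, so this is only an inference): an absolute reduction of 1.6 percentage points that corresponds to a 6.2% relative reduction implies a baseline WER of roughly 1.6 / 0.062 ≈ 25.8%.

```python
# Relating the reported relative and absolute WER reductions.
# The baseline WER is not given in the abstract; it follows from
#   relative = absolute / baseline.
absolute_reduction = 1.6          # percentage points
relative_reduction = 0.062        # 6.2%

baseline_wer = absolute_reduction / relative_reduction   # ~25.8%
new_wer = baseline_wer - absolute_reduction              # ~24.2%

print(f"implied baseline WER ~ {baseline_wer:.1f}%, improved WER ~ {new_wer:.1f}%")
```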