3 research outputs found

    Investigation of back-off based interpolation between recurrent neural network and n-gram language models

    No full text
    Recurrent neural network language models (RNNLMs) have become an increasingly popular choice for speech and language processing tasks, including automatic speech recognition (ASR). As the generalization patterns of RNNLMs and n-gram LMs are inherently different, RNNLMs are usually combined with n-gram LMs via fixed-weight linear interpolation in state-of-the-art ASR systems. However, previous work does not fully exploit the difference in modelling power between RNNLMs and n-gram LMs as the n-gram level changes. In order to fully exploit the detailed n-gram level complementary attributes of the two LMs, a back-off based compact representation of n-gram dependent interpolation weights is proposed in this paper. This approach allows the weight parameters to be robustly estimated on limited data. Experimental results are reported on three tasks with varying amounts of training data. Small and consistent improvements in both perplexity and WER were obtained using the proposed interpolation approach over the baseline fixed-weight linear interpolation.
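    The contrast the abstract draws can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's actual estimation method: the function names, the dictionary-based weight table, and the suffix back-off lookup are all assumptions made for demonstration. It shows a fixed-weight linear interpolation of two LM probabilities next to a context-dependent version where the weight is looked up for the longest matching context suffix, backing off to shorter suffixes, so that fewer weight parameters need to be estimated.

    ```python
    def fixed_interpolate(p_rnn, p_ngram, lam=0.5):
        """Baseline: fixed-weight linear interpolation of two LM probabilities."""
        return lam * p_rnn + (1.0 - lam) * p_ngram


    def backoff_weight(context, weights):
        """Find an interpolation weight for the longest context suffix present
        in the (hypothetical) weight table, backing off to shorter suffixes.
        The empty-tuple key () serves as the global fallback weight."""
        for i in range(len(context)):
            key = tuple(context[i:])
            if key in weights:
                return weights[key]
        return weights[()]


    def backoff_interpolate(p_rnn, p_ngram, context, weights):
        """Context-dependent interpolation: the weight varies with the
        n-gram context instead of being a single global constant."""
        lam = backoff_weight(context, weights)
        return lam * p_rnn + (1.0 - lam) * p_ngram


    # Toy weight table: a global weight plus one context-specific weight.
    weights = {(): 0.5, ("the",): 0.7}

    # After the context ("saw", "the"), the suffix ("the",) matches, so the
    # RNNLM receives a larger share of the interpolation weight.
    p_fixed = fixed_interpolate(0.2, 0.4)
    p_ctx = backoff_interpolate(0.2, 0.4, ("saw", "the"), weights)
    ```

    Compactly tying weights to back-off context classes, rather than to every distinct n-gram, is what lets such weights be estimated robustly on limited data.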

    Data underpinning "Investigation of back-off based interpolation between Recurrent Neural Network and N-Gram Language Models"

    No full text
    Description of the speech recognition training and test data used for the experiments, and its availability. Key speech recognition outputs and detailed scoring results used in the paper. This work was supported by the EPSRC [grant number EP/I031022/1], Toshiba Research Europe Ltd and IARPA.