
    Neural network language models for off-line handwriting recognition

Unconstrained off-line continuous handwritten text recognition is a very challenging task that has recently been addressed by several promising techniques. This work presents our latest contribution to this task, integrating neural network language models into the decoding process of three state-of-the-art systems: one based on bidirectional recurrent neural networks, another based on hybrid hidden Markov models, and, finally, a combination of both. Experimental results obtained on the IAM off-line database demonstrate that consistent word error rate reductions can be achieved with neural network language models compared with statistical N-gram language models on all three tested systems. The best word error rate, 16.1%, obtained with a ROVER combination of systems using neural network language models, significantly outperforms current benchmark results for the IAM database.

The authors wish to acknowledge the anonymous reviewers for their detailed and helpful comments on the paper. We also thank Alex Graves for kindly providing us with the BLSTM neural network source code. This work has been supported by the European project FP7-PEOPLE-2008-IAPP: 230653, the Spanish Government under project TIN2010-18958, and the Swiss National Science Foundation (project CRSI22_125220).

Zamora Martínez, FJ.; Frinken, V.; España Boquera, S.; Castro-Bleda, MJ.; Fischer, A.; Bunke, H. (2014). Neural network language models for off-line handwriting recognition. Pattern Recognition 47(4):1642-1652. https://doi.org/10.1016/j.patcog.2013.10.020
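As a rough illustration of the rescoring idea described in the abstract, the sketch below scores recognition hypotheses with a toy feed-forward neural network language model and log-linearly interpolates it with an n-gram score. The vocabulary, parameters, hypotheses, scores, and interpolation weight are made-up placeholders; this is not the integration scheme or the models used in the paper.

```python
# Minimal sketch (illustrative only): rescoring an N-best list of handwriting
# recognition hypotheses with a toy feed-forward neural network language model,
# log-linearly interpolated with an n-gram language model score.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["<s>", "</s>", "the", "quick", "brown", "fox", "fix"]
word2id = {w: i for i, w in enumerate(vocab)}
V, emb_dim, hid_dim, order = len(vocab), 8, 16, 3  # trigram-style context

# Randomly initialised parameters stand in for a trained model.
E = rng.normal(scale=0.1, size=(V, emb_dim))                   # word embeddings
W1 = rng.normal(scale=0.1, size=((order - 1) * emb_dim, hid_dim))
W2 = rng.normal(scale=0.1, size=(hid_dim, V))

def nnlm_logprob(words):
    """Sum of log P(w_t | w_{t-2}, w_{t-1}) under the toy feed-forward LM."""
    ids = [word2id["<s>"]] * (order - 1) + [word2id[w] for w in words] + [word2id["</s>"]]
    total = 0.0
    for t in range(order - 1, len(ids)):
        # Concatenate the embeddings of the (order-1) previous words, oldest first.
        context = np.concatenate([E[ids[t - k]] for k in range(order - 1, 0, -1)])
        hidden = np.tanh(context @ W1)
        logits = hidden @ W2
        # Numerically stable log-softmax over the vocabulary.
        logprobs = logits - logits.max() - np.log(np.exp(logits - logits.max()).sum())
        total += logprobs[ids[t]]
    return total

def rescore(nbest, ngram_scores, lam=0.5):
    """Rank hypotheses by interpolating n-gram and NNLM log-scores."""
    combined = [lam * g + (1 - lam) * nnlm_logprob(h.split())
                for h, g in zip(nbest, ngram_scores)]
    return max(zip(nbest, combined), key=lambda x: x[1])

# Hypothetical N-best output of a recogniser, with hypothetical n-gram log-scores.
nbest = ["the quick brown fox", "the quick brown fix"]
ngram_scores = [-12.3, -12.1]
print(rescore(nbest, ngram_scores))
```

With a trained neural network language model, the hypothesis preferred by the combined score can differ from the one ranked first by the n-gram model alone, which is the effect the word error rate reductions in the abstract refer to.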