
    Unsupervised Adaptation for Synthetic-to-Real Handwritten Word Recognition

    Handwritten Text Recognition (HTR) is still a challenging problem because it must deal with two important difficulties: the variability among writing styles and the scarcity of labelled data. To alleviate such problems, synthetic data generation and data augmentation are typically used to train HTR systems. However, training with such data produces encouraging but still inaccurate transcriptions on real words. In this paper, we propose an unsupervised writer adaptation approach that automatically adjusts a generic handwritten word recognizer, fully trained on synthetic fonts, towards a new incoming writer. We have experimentally validated our proposal using five different datasets, covering several challenges: (i) the document source, with modern and historic samples that may involve paper degradation; (ii) the handwriting style, with single-writer and multi-writer collections; and (iii) the language, which involves different character combinations. Across these challenging collections, we show that our system maintains its performance, thus providing a practical and generic approach to dealing with new document collections without requiring any expensive and tedious manual annotation step.
    Comment: Accepted to WACV 2020
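    The abstract does not say how the adaptation is performed, so the following is only a minimal sketch of one common way to realize unsupervised adaptation between labelled synthetic word images and unlabelled real ones: adversarial feature alignment with a gradient reversal layer. All module names (Encoder, Recognizer, WriterDiscriminator) and the plain cross-entropy recognition head are assumptions for illustration, not the authors' architecture; a real HTR head would typically use a sequence loss such as CTC.

```python
# Hypothetical sketch of unsupervised feature alignment between synthetic
# (labelled) and real (unlabelled) word images via gradient reversal.
# This illustrates the general idea only; it is not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, flips (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def adaptation_step(encoder, recognizer, discriminator, optimizer,
                    synth_images, synth_labels, real_images, lam=0.1):
    """One training step: supervised loss on synthetic words plus an
    adversarial domain loss that pushes synthetic and real features together."""
    optimizer.zero_grad()

    feat_synth = encoder(synth_images)   # labelled, rendered with synthetic fonts
    feat_real = encoder(real_images)     # unlabelled, from the new writer

    # Supervised recognition loss on synthetic words only.
    rec_loss = F.cross_entropy(recognizer(feat_synth), synth_labels)

    # A domain classifier tries to tell synthetic from real; the reversed
    # gradient trains the encoder to produce writer-invariant features.
    feats = torch.cat([feat_synth, feat_real], dim=0)
    domains = torch.cat([torch.zeros(len(feat_synth)),
                         torch.ones(len(feat_real))]).long().to(feats.device)
    dom_logits = discriminator(GradReverse.apply(feats, lam))
    dom_loss = F.cross_entropy(dom_logits, domains)

    (rec_loss + dom_loss).backward()
    optimizer.step()
    return rec_loss.item(), dom_loss.item()
```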

    Sparse arrays of signatures for online character recognition

    In mathematics, the signature of a path is a collection of iterated integrals, commonly used for solving differential equations. We show that the path signature, used as a set of features for consumption by a convolutional neural network (CNN), improves the accuracy of online character recognition, that is, the task of reading characters represented as a collection of paths. Using datasets of letters, numbers, Assamese and Chinese characters, we show that the first, second, and even the third iterated integrals contain useful information for consumption by a CNN. On the CASIA-OLHWDB1.1 dataset of 3755 Chinese characters, our approach gave a test error of 3.58%, compared with 5.61% for a traditional CNN [Ciresan et al.]. A CNN trained on the CASIA-OLHWDB1.0-1.2 datasets won the ICDAR2013 Online Isolated Chinese Character recognition competition. Computationally, we have developed a sparse CNN implementation that makes it practical to train CNNs with many layers of max-pooling. Extending the MNIST dataset by translations, our sparse CNN achieves a test error of 0.31%.
    Comment: 10 pages, 2 figures
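    To make the feature construction concrete, here is a minimal sketch of computing the path signature truncated at level 3 for a piecewise-linear pen trajectory: the signature of a straight segment with increment delta has level-k term delta^(⊗k)/k!, and segments are combined with Chen's identity. The function names and the numpy-based layout are illustrative and not taken from the paper.

```python
# Sketch: truncated (level-3) path signature of a piecewise-linear 2-D stroke.
import numpy as np


def segment_signature(delta):
    """Signature (levels 1..3) of a single straight-line segment."""
    s1 = delta
    s2 = np.einsum("i,j->ij", delta, delta) / 2.0
    s3 = np.einsum("i,j,k->ijk", delta, delta, delta) / 6.0
    return [s1, s2, s3]


def chen_product(a, b):
    """Combine two truncated signatures with Chen's identity."""
    c1 = a[0] + b[0]
    c2 = a[1] + b[1] + np.einsum("i,j->ij", a[0], b[0])
    c3 = (a[2] + b[2]
          + np.einsum("i,jk->ijk", a[0], b[1])
          + np.einsum("ij,k->ijk", a[1], b[0]))
    return [c1, c2, c3]


def path_signature(points):
    """Level-3 truncated signature of a piecewise-linear path, flattened."""
    points = np.asarray(points, dtype=float)
    d = points.shape[1]
    # Identity signature: all levels >= 1 are zero.
    sig = [np.zeros(d), np.zeros((d, d)), np.zeros((d, d, d))]
    for delta in np.diff(points, axis=0):
        sig = chen_product(sig, segment_signature(delta))
    return np.concatenate([level.ravel() for level in sig])


# Example: a short 2-D pen stroke gives 2 + 4 + 8 = 14 signature features.
stroke = [[0.0, 0.0], [1.0, 0.5], [1.5, 1.5], [2.0, 1.0]]
print(path_signature(stroke).shape)  # (14,)
```

    In practice one would compute such truncated signatures over local windows of the pen trajectory and stack them as per-point input features for the CNN; the windowing scheme is omitted here for brevity.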