
    Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition

    In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data and performs word recognition on the whole image holistically, departing from the character-based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine; this synthetic data is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This abundance of data opens up new possibilities for word recognition models, and here we consider three models, each one "reading" words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language-based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.
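A minimal sketch (not the paper's code) of what the three word-label encodings could look like in practice; the tiny lexicon, alphabet, and maximum word length below are hypothetical placeholders standing in for the 90k-word dictionary and N-gram vocabulary described above.

```python
# Illustrative word-label encodings; all vocabularies here are toy placeholders.
LEXICON = ["house", "mouse", "horse"]        # stand-in for the 90k-word dictionary
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
MAX_LEN = 23                                 # assumed maximum word length

def dictionary_encoding(word):
    """90k-way (here 3-way) classification: the label is the lexicon index."""
    return LEXICON.index(word)

def char_sequence_encoding(word):
    """Per-position character classes, padded with a 'no character' class 0."""
    labels = [ALPHABET.index(c) + 1 for c in word]
    return labels + [0] * (MAX_LEN - len(labels))

def ngram_vocab(words, n_max=3):
    """Collect every N-gram (up to length n_max) occurring in the lexicon."""
    grams = set()
    for w in words:
        for n in range(1, n_max + 1):
            grams.update(w[i:i + n] for i in range(len(w) - n + 1))
    return sorted(grams)

NGRAMS = ngram_vocab(LEXICON)

def bag_of_ngrams_encoding(word, n_max=3):
    """Multi-label target: a binary indicator for each N-gram in the vocabulary."""
    present = {word[i:i + n] for n in range(1, n_max + 1)
               for i in range(len(word) - n + 1)}
    return [1 if g in present else 0 for g in NGRAMS]

print(dictionary_encoding("horse"))                 # -> 2
print(char_sequence_encoding("horse")[:6])          # -> [8, 15, 18, 19, 5, 0]
print(sum(bag_of_ngrams_encoding("horse")), "of", len(NGRAMS), "N-grams present")
```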

    Unsupervised Adaptation for Synthetic-to-Real Handwritten Word Recognition

    Handwritten Text Recognition (HTR) remains a challenging problem because it must deal with two major difficulties: the variability among writing styles and the scarcity of labelled data. To alleviate these problems, synthetic data generation and data augmentation are typically used to train HTR systems. However, training with such data produces encouraging but still inaccurate transcriptions of real words. In this paper, we propose an unsupervised writer adaptation approach that automatically adjusts a generic handwritten word recognizer, fully trained with synthetic fonts, towards a new incoming writer. We experimentally validate our proposal on five different datasets covering several challenges: (i) the document source, with modern and historic samples that may involve paper degradation; (ii) handwriting style, with single- and multiple-writer collections; and (iii) language, which involves different character combinations. Across these challenging collections, we show that our system maintains its performance, thus providing a practical and generic approach to dealing with new document collections without any expensive and tedious manual annotation step. Comment: Accepted to WACV 202
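The abstract above does not specify the adaptation mechanism, so the following is only a generic sketch of one common way to adapt a recognizer trained on synthetic fonts to an unlabelled target writer: adversarial feature alignment with a gradient-reversal layer. The network shapes, loss terms, and training step are assumptions for illustration, not the authors' architecture.

```python
# Generic unsupervised domain-adaptation sketch (gradient reversal), shown on
# randomly generated tensors in place of real word images. Not the paper's code.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips and scales gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class WriterAdaptedRecognizer(nn.Module):
    def __init__(self, feat_dim=256, n_classes=80):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(32 * 128, feat_dim), nn.ReLU())
        self.recognizer = nn.Linear(feat_dim, n_classes)   # word/character classifier
        self.domain_head = nn.Linear(feat_dim, 2)          # synthetic vs. real writer

    def forward(self, x, lam=1.0):
        f = self.features(x.flatten(1))
        return self.recognizer(f), self.domain_head(GradReverse.apply(f, lam))

model = WriterAdaptedRecognizer()
synthetic = torch.randn(8, 32, 128)            # labelled synthetic word images
real = torch.randn(8, 32, 128)                 # unlabelled images from the new writer
synthetic_labels = torch.randint(0, 80, (8,))

logits_syn, dom_syn = model(synthetic)
_, dom_real = model(real)
# Recognition loss uses synthetic labels only; the domain loss, reversed through
# the feature extractor, pushes synthetic and real features to become indistinguishable.
loss = (nn.functional.cross_entropy(logits_syn, synthetic_labels)
        + nn.functional.cross_entropy(dom_syn, torch.zeros(8, dtype=torch.long))
        + nn.functional.cross_entropy(dom_real, torch.ones(8, dtype=torch.long)))
loss.backward()
```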

    Learning with Weak Annotations for Text in the Wild Detection and Recognition

    In this work, we present a method for exploiting weakly annotated images to improve text extraction pipelines. The weak annotation of an image is a list of texts that are likely to appear in the image, without any information about their location. An arbitrary existing end-to-end text recognition system is used to obtain text region proposals together with their, possibly erroneous, transcriptions. A process that includes matching the imprecise transcriptions to the weak annotations and an edit-distance-guided neighbourhood search produces nearly error-free, localised instances of scene text, which we treat as "pseudo ground truth" for training. We apply the method to two weakly annotated datasets and use the obtained pseudo ground truth to re-train the end-to-end system. The process consistently improves the accuracy of a state-of-the-art recognition model across different benchmark datasets (image domains), provides a significant performance boost on the same dataset, and improves further when applied iteratively.
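A minimal sketch of the matching step described above, assuming a plain Levenshtein distance and a fixed acceptance threshold; the threshold value, proposal format, and example strings are illustrative assumptions rather than the paper's exact choices.

```python
# Match recognizer proposals against the image's weak annotation (a list of texts
# known to appear somewhere in the image) and keep near-exact matches as pseudo
# ground truth. Threshold and data layout are illustrative assumptions.
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over a single rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def match_proposals(proposals, weak_annotation, max_dist=1):
    """proposals: (bounding_box, transcription) pairs from the existing end-to-end system."""
    pseudo_gt = []
    for box, text in proposals:
        best = min(weak_annotation, key=lambda w: edit_distance(text, w))
        if edit_distance(text, best) <= max_dist:
            pseudo_gt.append((box, best))    # keep the box, trust the annotated text
    return pseudo_gt

weak = ["STARBUCKS", "COFFEE", "OPEN"]
proposals = [((10, 20, 90, 40), "STARBUCK5"),   # OCR confusion: 5 vs. S
             ((12, 60, 70, 80), "C0FFEE"),      # OCR confusion: 0 vs. O
             ((5, 5, 30, 15), "EXIT")]          # not in the weak annotation -> dropped
print(match_proposals(proposals, weak))
```

The resulting (box, text) pairs serve as the pseudo ground truth used to re-train the end-to-end system; repeating the whole procedure with the re-trained system corresponds to the iterative application mentioned above.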