
    A convolutional neural network based Chinese text detection algorithm via text structure modeling

    Text detection in natural scene environments plays an important role in many computer vision applications. While existing text detection methods focus on English characters, there is strong application demand for text detection in other languages, such as Chinese. As Chinese characters are much more complex than English characters, innovative and more efficient text detection techniques are required for Chinese texts. In this paper, we present a novel text detection algorithm for Chinese characters based on a specifically designed convolutional neural network (CNN). The CNN model contains a text structure component detector layer, a spatial pyramid layer and a multi-input-layer deep belief network (DBN). The CNN is pretrained via a convolutional sparse auto-encoder (CSAE) in an unsupervised way, which is specifically designed for extracting complex features from Chinese characters. In particular, the text structure component detectors enhance the accuracy and uniqueness of feature descriptors by extracting multiple text structure components in various ways. The spatial pyramid layer is then introduced to enhance the scale invariance of the CNN model for detecting texts at multiple scales. Finally, the multi-input-layer DBN is used as the fully connected layers in the CNN model to ensure that features from multiple scales are comparable. A multilingual text detection dataset, in which texts in Chinese, English and digits are labeled separately, is set up to evaluate the proposed text detection algorithm. The proposed algorithm shows a significant 10% performance improvement over the baseline CNN algorithms. In addition, the proposed algorithm is evaluated on a public multilingual image benchmark and achieves state-of-the-art results for text detection in multiple languages. Furthermore, a simplified version of the proposed algorithm with only general components is compared to existing general text detection algorithms on the ICDAR 2011 and 2013 datasets, showing comparable detection performance to the existing algorithms.
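    The abstract above outlines an architecture built from text structure component detectors, a spatial pyramid layer, and a multi-input-layer DBN. As a rough illustration only, the PyTorch sketch below shows how a spatial pyramid pooling layer can produce fixed-length, scale-comparable features for a small text/non-text classifier; the layer sizes, pooling levels, and the class name TextComponentCNN are assumptions and do not reproduce the paper's actual configuration.

```python
import torch
import torch.nn as nn

class SpatialPyramidPool(nn.Module):
    """Pool feature maps at several grid sizes and concatenate,
    yielding a fixed-length vector regardless of input resolution."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveMaxPool2d(k) for k in levels])

    def forward(self, x):
        n = x.size(0)
        return torch.cat([p(x).reshape(n, -1) for p in self.pools], dim=1)

class TextComponentCNN(nn.Module):
    """Hypothetical sketch: convolutional 'component' features followed by
    spatial pyramid pooling and a fully connected text/non-text classifier."""
    def __init__(self, channels=64, levels=(1, 2, 4), n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.spp = SpatialPyramidPool(levels)
        fc_in = channels * sum(k * k for k in levels)
        self.classifier = nn.Sequential(
            nn.Linear(fc_in, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.spp(self.features(x)))

# Patches of different sizes map to the same feature dimension,
# so detections at multiple scales stay comparable.
for size in (32, 48, 64):
    out = TextComponentCNN()(torch.randn(1, 3, size, size))
    print(size, out.shape)  # -> torch.Size([1, 2]) for every input size
```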

    A fine-grained approach to scene text script identification

    This paper focuses on the problem of script identification in unconstrained scenarios. Script identification is an important prerequisite to recognition, and an indispensable condition for automatic text understanding systems designed for multi-language environments. Although widely studied for document images and handwritten documents, it remains an almost unexplored territory for scene text images. We detail a novel method for script identification in natural images that combines convolutional features and the Naive-Bayes Nearest Neighbor classifier. The proposed framework efficiently exploits the discriminative power of small stroke-parts in a fine-grained classification framework. In addition, we propose a new public benchmark dataset for the evaluation of joint text detection and script identification in natural scenes. Experiments on this new dataset demonstrate that the proposed method yields state-of-the-art results, while generalizing well to different datasets and a variable number of scripts. The evidence provided shows that multi-lingual scene text recognition in the wild is a viable proposition. Source code of the proposed method is made available online.
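    The abstract names the Naive-Bayes Nearest Neighbor (NBNN) classifier applied to convolutional features of small stroke-parts. The sketch below implements only the generic NBNN decision rule (sum, over a query image's local descriptors, of the squared distance to the nearest descriptor of each class); the descriptor dimensions and the random toy data are placeholders, not the paper's actual features.

```python
import numpy as np

def nbnn_classify(query_descriptors, class_descriptors):
    """Generic Naive-Bayes Nearest Neighbor decision rule.

    query_descriptors: (n, d) array of local descriptors from one image.
    class_descriptors: dict mapping class label -> (m_c, d) array of
        descriptors pooled from that class's training images.
    Returns the label minimizing the total image-to-class distance.
    """
    best_label, best_cost = None, np.inf
    for label, bank in class_descriptors.items():
        # Squared Euclidean distance from every query descriptor
        # to every descriptor of this class.
        d2 = ((query_descriptors[:, None, :] - bank[None, :, :]) ** 2).sum(-1)
        cost = d2.min(axis=1).sum()  # nearest neighbor per descriptor, then sum
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# Toy usage with random descriptors standing in for two scripts.
rng = np.random.default_rng(0)
banks = {"Latin": rng.normal(0, 1, (200, 64)),
         "Han": rng.normal(3, 1, (200, 64))}
query = rng.normal(3, 1, (40, 64))   # descriptors drawn near the "Han" bank
print(nbnn_classify(query, banks))   # -> "Han"
```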

    WordFences: Text localization and recognition

    In collaboration with the Universitat de Barcelona (UB) and the Universitat Rovira i Virgili (URV). In recent years, text recognition has achieved remarkable success in recognizing scanned document text. However, word recognition in natural images is still an open problem, which generally requires time-consuming post-processing steps. We present a novel architecture for individual word detection in scene images based on semantic segmentation. Our contributions are twofold: the concept of WordFence, which detects border areas surrounding each individual word, and a unique pixelwise weighted softmax loss function which penalizes background and emphasizes small text regions. WordFence ensures that each word is detected individually, and the new loss function provides a strong training signal for both text and word border localization. The proposed technique avoids intensive post-processing by combining semantic word segmentation with a voting scheme for merging segmentations at multiple scales, producing an end-to-end word detection system. We achieve superior localization recall on common benchmark datasets: 92% recall on ICDAR11 and ICDAR13, and 63% recall on SVT. Furthermore, end-to-end word recognition achieves a state-of-the-art 86% F-score on ICDAR13.
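    The pixelwise weighted softmax loss described above penalizes background and emphasizes small text regions. Below is a minimal sketch of one plausible weighting scheme, assuming background pixels receive a fixed low weight and each word's pixels are weighted inversely to the square root of that word's area; the actual weighting used by WordFence may differ.

```python
import torch
import torch.nn.functional as F

def weighted_pixel_softmax_loss(logits, labels, word_ids, bg_weight=0.2):
    """Pixelwise cross-entropy with per-pixel weights.

    logits:   (N, C, H, W) raw class scores.
    labels:   (N, H, W) integer class map (0 = background).
    word_ids: (N, H, W) instance map, 0 for background, k > 0 per word.
    """
    loss = F.cross_entropy(logits, labels, reduction="none")  # (N, H, W)
    weights = torch.full_like(loss, bg_weight)                # down-weight background
    for wid in word_ids.unique():
        if wid == 0:
            continue
        mask = word_ids == wid
        # Smaller words -> larger per-pixel weight (inverse-area weighting).
        weights[mask] = 1.0 / mask.sum().float().sqrt()
    return (weights * loss).sum() / weights.sum()

# Toy usage: 2-class (background / text) prediction on a 1x8x8 map.
logits = torch.randn(1, 2, 8, 8, requires_grad=True)
labels = torch.zeros(1, 8, 8, dtype=torch.long)
word_ids = torch.zeros(1, 8, 8, dtype=torch.long)
labels[0, 2:4, 2:6] = 1; word_ids[0, 2:4, 2:6] = 1   # one small word
weighted_pixel_softmax_loss(logits, labels, word_ids).backward()
```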