10,272 research outputs found

    Enhancing Energy Minimization Framework for Scene Text Recognition with Top-Down Cues

    Get PDF
    Recognizing scene text is a challenging problem, even more so than the recognition of scanned documents. The problem has gained significant attention from the computer vision community in recent years, and several methods based on energy minimization frameworks and deep learning have been proposed. In this work, we focus on the energy minimization framework and propose a model that exploits both bottom-up and top-down cues for recognizing cropped words extracted from street images. The bottom-up cues are derived from individual character detections in an image. We build a conditional random field model on these detections to jointly model the strength of the detections and the interactions between them. These interactions are top-down cues obtained from a lexicon-based prior, i.e., language statistics. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We evaluate our proposed algorithm extensively on a number of cropped scene text benchmark datasets, namely the Street View Text, ICDAR 2003, ICDAR 2011, ICDAR 2013, and IIIT 5K-word datasets, and show better performance than comparable methods. We perform a rigorous analysis of all the steps in our approach and analyze the results. We also show that state-of-the-art convolutional neural network features can be integrated into our framework to further improve recognition performance.
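
    The decoding step lends itself to a short illustration. Below is a minimal sketch, assuming unary costs come from a per-detection character classifier and pairwise costs from lexicon bigram statistics; all names are illustrative, and the chain-structured energy is minimized with a standard Viterbi-style dynamic program rather than the paper's exact inference procedure.

    ```python
    import numpy as np

    # unary[i, c]: cost of assigning character class c to the i-th detection
    #              (bottom-up cue, e.g. from a character classifier)
    # pairwise[p, c]: penalty for the bigram (p, c) derived from lexicon
    #              statistics (top-down cue); rare bigrams cost more
    def min_energy_word(unary, pairwise):
        n, k = unary.shape
        cost = unary[0].copy()              # best energy ending in class c at position 0
        back = np.zeros((n, k), dtype=int)  # backpointers for decoding
        for i in range(1, n):
            # total[p, c] = best energy through class p at i-1, then class c at i
            total = cost[:, None] + pairwise + unary[i][None, :]
            back[i] = total.argmin(axis=0)
            cost = total.min(axis=0)
        # trace back the minimum-energy labelling, i.e. the recognized word
        labels = [int(cost.argmin())]
        for i in range(n - 1, 0, -1):
            labels.append(int(back[i, labels[-1]]))
        return labels[::-1], float(cost.min())
    ```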

    Text Recognition Past, Present and Future

    Get PDF
    Text recognition in images is a research domain that attempts to develop computer programs able to read text from images. This creates a need for character recognition mechanisms, which has given rise to Document Image Analysis (DIA), the conversion of paper documents into computer-generated electronic formats. In this paper we review and analyze various methods for text recognition from different types of text images, such as scene images, text images, born-digital images, and text in videos. Text recognition is an easy task for people who can read, but making a computer perform character recognition is a highly difficult task. The reasons for this include the variability, abstraction, and absence of hard-and-fast rules that govern the appearance of a visual character in text images; the rules that are applied therefore need to be deduced heuristically from sample data. This paper reviews various existing methods, with the objective of giving a summary of the well-known approaches.

    Text Extraction From Natural Scene: Methodology And Application

    Full text link
    With the popularity of the Internet and smart mobile devices, there is an increasing demand for techniques and applications for image/video-based analytics and information retrieval. Most of these applications can benefit from text information extraction in natural scenes. However, scene text extraction remains a challenging problem, due to the cluttered backgrounds of natural scenes and the varied patterns of scene text itself. To address these problems, this dissertation proposes a scene text extraction framework divided into two components: detection and recognition. Scene text detection finds the regions containing text in camera-captured images/videos. Text layout analysis based on gradient and color analysis is performed to extract candidate text strings from the cluttered background of a natural scene. Text structural analysis is then performed to design effective text structural features for distinguishing text from non-text outliers among the candidate text strings. Scene text recognition transforms image-based text in the detected regions into readable text codes. The most basic and significant step in text recognition is scene text character (STC) prediction, which is multi-class classification over a set of text character categories; a sketch of this step appears below. We design robust and discriminative feature representations for STC structure by integrating multiple feature descriptors, coding/pooling schemes, and learning models. Experimental results on benchmark datasets demonstrate the effectiveness and robustness of the proposed framework, which obtains better performance than previously published methods. The framework is applied to four scenarios: 1) reading printed labels on grocery packages for hand-held object recognition; 2) combining with car detection to localize license plates in camera-captured natural scene images; 3) reading indicative signage for assisted navigation in indoor environments; and 4) combining with object tracking to perform scene text extraction in video of natural scenes. The prototype systems and associated evaluation results show that the framework is able to address these challenges in real applications.
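
    As a concrete illustration of the STC prediction step, here is a minimal sketch in which a single HOG descriptor and a linear SVM stand in for the dissertation's combined feature descriptors, coding/pooling schemes, and learning models; the function names and the 32x32 patch size are assumptions, not the author's.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def stc_features(patch):
        # a single HOG descriptor stands in for the combined
        # descriptor/coding/pooling representation used in the dissertation
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    # patches: 32x32 grayscale character images; labels: character classes
    def train_stc_classifier(patches, labels):
        X = np.array([stc_features(p) for p in patches])
        clf = LinearSVC()  # one-vs-rest multi-class classification
        clf.fit(X, labels)
        return clf
    ```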

    Text Extraction System From High As Well As Low Resolution Natural Scene Images

    Get PDF
    In this paper, we propose an efficient and robust technique for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates, using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by a single-link clustering algorithm, where distance weights and the clustering threshold are learned by a distinctive self-training distance metric learning algorithm. The probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated, and texts are identified with a text classifier. Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. Experiments on multilingual, street-view, multi-orientation, and even born-digital databases demonstrate the effectiveness of the proposed technique.
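
    A minimal sketch of the character-candidate stage, using OpenCV's stock MSER detector: the paper's regularized-variation pruning is replaced here by simple geometric checks, and all thresholds are illustrative assumptions.

    ```python
    import cv2

    def character_candidates(gray):
        """Extract MSER bounding boxes and prune obvious non-characters."""
        mser = cv2.MSER_create()
        regions, boxes = mser.detectRegions(gray)
        candidates = []
        for (x, y, w, h) in boxes:
            aspect = w / float(h)
            # crude stand-in for the paper's pruning: keep plausible glyph shapes
            if 8 <= h <= 300 and 0.1 <= aspect <= 1.2:
                candidates.append((x, y, w, h))
        return candidates
    ```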

    A Heuristic Baseline Method for Metadata Extraction from Scanned Electronic Theses and Dissertations

    Get PDF
    Extracting metadata from scholarly papers is an important text mining problem. Widely used open-source tools such as GROBID are designed for born-digital scholarly papers but often fail on scanned documents, such as Electronic Theses and Dissertations (ETDs). Here we present preliminary baseline work with a heuristic model to extract metadata from the cover pages of scanned ETDs. The process starts by converting scanned pages into images and then into text files with OCR tools. A series of carefully designed regular expressions is then applied, capturing patterns for seven metadata fields: titles, authors, years, degrees, academic programs, institutions, and advisors. The method is evaluated on a ground truth dataset consisting of rectified metadata provided by the Virginia Tech and MIT libraries. Our heuristic method achieves an accuracy of up to 97% on the fields of the ETD text files and provides a strong baseline for machine-learning-based methods. To the best of our knowledge, this is the first work attempting to extract metadata from non-born-digital ETDs.
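
    To illustrate the approach, here is a minimal sketch of the regular-expression stage, assuming OCR text of a cover page as input; the patterns below are simplified stand-ins for the paper's carefully designed, field-specific expressions.

    ```python
    import re

    # Illustrative patterns for three of the seven fields; the actual
    # expressions are tuned per field and per institution.
    FIELD_PATTERNS = {
        "year": re.compile(r"\b(?:19|20)\d{2}\b"),
        "degree": re.compile(r"Doctor of Philosophy|Master of (?:Science|Arts)"),
        "advisor": re.compile(r"(?:Advisor|Chair|Supervisor)[:\s]+([A-Z][\w.\- ]+)",
                              re.IGNORECASE),
    }

    def extract_metadata(cover_text):
        metadata = {}
        for field, pattern in FIELD_PATTERNS.items():
            match = pattern.search(cover_text)
            if match:
                # use the captured group when the pattern has one, else the match
                metadata[field] = match.group(1) if match.groups() else match.group(0)
        return metadata
    ```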

    An Efficient and Robust Method for Text Detection in Low and High Resolution Natural Scene Images

    Get PDF
    In this paper, we propose an efficient and robust technique for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates, using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by a single-link clustering algorithm, where distance weights and the clustering threshold are learned by a distinctive self-training distance metric learning algorithm. The probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated, and texts are identified with a text classifier. Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. Experiments on multilingual, street-view, multi-orientation, and even born-digital databases demonstrate the effectiveness of the proposed technique. DOI: 10.17762/ijritcc2321-8169.15078
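
    Complementing the candidate-extraction sketch shown earlier, here is a minimal sketch of the text-candidate grouping stage: the learned distance weights and clustering threshold are replaced by a plain Euclidean distance on box centres and a hand-set cut, both assumptions made for illustration.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def text_candidates(boxes, threshold=40.0):
        """Group character boxes into text candidates by single-link clustering."""
        centres = np.array([(x + w / 2.0, y + h / 2.0) for x, y, w, h in boxes])
        Z = linkage(centres, method='single')   # single-link dendrogram
        labels = fcluster(Z, t=threshold, criterion='distance')
        groups = {}
        for box, lab in zip(boxes, labels):
            groups.setdefault(lab, []).append(box)
        return list(groups.values())
    ```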

    Automatic Metadata Extraction Incorporating Visual Features from Scanned Electronic Theses and Dissertations

    Get PDF
    Electronic Theses and Dissertations (ETDs) contain domain knowledge that can be used for many digital library tasks, such as analyzing citation networks and predicting research trends. Automatic metadata extraction is important for building scalable digital library search engines. Most existing methods are designed for born-digital documents, so they often fail to extract metadata from scanned documents such as ETDs. Traditional sequence tagging methods rely mainly on text-based features. In this paper, we propose a conditional random field (CRF) model that combines text-based and visual features. To verify the robustness of our model, we extended an existing corpus and created a new ground truth corpus consisting of 500 ETD cover pages with human-validated metadata. Our experiments show that the CRF with visual features outperformed both a heuristic model and a CRF model with only text-based features. The proposed model achieved an 81.3%-96% F1 measure on seven metadata fields. The data and source code are publicly available on Google Drive (https://tinyurl.com/y8kxzwrp) and in a GitHub repository (https://github.com/lamps-lab/ETDMiner/tree/master/etd_crf), respectively.
    Comment: 7 pages, 4 figures, 1 table. Accepted by JCDL '21 as a short paper.
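
    A minimal sketch of how such a model might be set up with the sklearn-crfsuite library: each OCR token becomes a feature dictionary mixing text cues with visual/layout cues. The specific feature names and token fields below are illustrative assumptions, not the paper's feature set.

    ```python
    import sklearn_crfsuite

    def token_features(token):
        # token is assumed to carry OCR text plus its bounding box and page size
        return {
            "lower": token["text"].lower(),
            "istitle": token["text"].istitle(),
            "isdigit": token["text"].isdigit(),
            # visual features derived from the page layout
            "font_size": token["height"],
            "x_rel": round(token["x"] / token["page_width"], 1),
            "y_rel": round(token["y"] / token["page_height"], 1),
        }

    # pages: list of token sequences; tags: BIO labels per token,
    # e.g. "B-title", "I-title", "B-author", "O"
    def train_crf(pages, tags):
        X = [[token_features(t) for t in page] for page in pages]
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
        crf.fit(X, tags)
        return crf
    ```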