
    A Fast Alignment Scheme for Automatic OCR Evaluation of Books

    This paper evaluates the accuracy of optical character recognition (OCR) systems on real scanned books. The ground-truth e-texts are obtained from the Project Gutenberg website and aligned with the corresponding OCR output using a fast recursive text alignment scheme (RETAS). First, unique words in the vocabulary of the book are aligned with unique words in the OCR output. This process is applied recursively to each text segment between matching unique words until the segments become very small. In the final stage, an edit-distance-based alignment algorithm aligns these short chunks of text to produce the final alignment. The proposed approach effectively divides the alignment problem into small subproblems, which yields dramatic time savings even when there are large pieces of inserted or deleted text and the OCR accuracy is poor. The approach is used to evaluate the OCR accuracy of real scanned books in English, French, German, and Spanish.
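    The recursive unique-word anchoring idea can be illustrated with a short sketch. The following is a minimal, illustrative Python implementation under assumed details: the segment-size threshold, the use of difflib's SequenceMatcher for the base-case edit-distance alignment, and all function names are assumptions rather than specifics from the paper.

```python
# Minimal sketch of recursive unique-word anchoring; names and the difflib
# fallback are illustrative assumptions, not details from the paper.
from collections import Counter
from difflib import SequenceMatcher

def unique_words(tokens):
    """Words that occur exactly once in the token list."""
    counts = Counter(tokens)
    return {w for w, c in counts.items() if c == 1}

def align(gt, ocr, min_len=20):
    """Recursively align ground-truth and OCR token lists.

    Returns (gt_index, ocr_index) pairs for matched tokens.
    """
    shared = unique_words(gt) & unique_words(ocr)
    if len(gt) <= min_len or len(ocr) <= min_len or not shared:
        # Base case: edit-distance style alignment of the short chunk.
        matcher = SequenceMatcher(a=gt, b=ocr, autojunk=False)
        return [(m.a + k, m.b + k)
                for m in matcher.get_matching_blocks()
                for k in range(m.size)]

    # Anchor on words unique to both texts whose positions are order-consistent.
    gt_pos = {w: i for i, w in enumerate(gt) if w in shared}
    ocr_pos = {w: i for i, w in enumerate(ocr) if w in shared}
    anchors, last = [], -1
    for w in sorted(shared, key=lambda w: gt_pos[w]):
        if ocr_pos[w] > last:
            anchors.append(w)
            last = ocr_pos[w]

    pairs, g0, o0 = [], 0, 0
    for w in anchors:
        g1, o1 = gt_pos[w], ocr_pos[w]
        # Recurse on the segment between the previous anchor and this one.
        pairs += [(gi + g0, oi + o0) for gi, oi in align(gt[g0:g1], ocr[o0:o1], min_len)]
        pairs.append((g1, o1))
        g0, o0 = g1 + 1, o1 + 1
    pairs += [(gi + g0, oi + o0) for gi, oi in align(gt[g0:], ocr[o0:], min_len)]
    return pairs

# Example: token-level alignment of a tiny ground-truth/OCR pair.
gt = "the quick brown fox jumps over the lazy dog".split()
ocr = "tbe quick brown fox jumps ovcr the lazy dog".split()
print(align(gt, ocr, min_len=3))
```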

    Improving OCR Post Processing with Machine Learning Tools

    Optical Character Recognition (OCR) post-processing involves data cleaning steps for digitized documents, such as books or newspaper articles. One step in this process is the identification and correction of spelling and grammar errors generated by flaws in the OCR system. This work reports on our efforts to enhance post-processing for large repositories of documents. The main contributions of this work are:
    • Development of tools and methodologies to build OCR and ground-truth text correspondences for training and testing the techniques proposed in our experiments. In particular, we explain the alignment problem and tackle it with our de novo algorithm, which has shown a high success rate.
    • Exploration of the Google Web 1T corpus to correct errors using context. We show that over half of the errors in the OCR text can be detected and corrected.
    • Application of machine learning tools to generalize past ad hoc approaches to OCR error correction. As an example, we investigate the use of logistic regression to select the correct replacement for misspellings in the OCR text (a sketch of this idea follows below).
    • Use of container technology to address the state of reproducible research in OCR and computer science as a whole. Many past experiments in the field of OCR are not considered reproducible research, raising the question of whether the original results were outliers or finessed.
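    As a companion to the machine learning contribution above, here is a minimal sketch of how logistic regression might be used to choose among replacement candidates for an OCR misspelling. The feature set (edit distance, length difference, a context count standing in for a Google Web 1T lookup, first-character agreement), the toy training data, and all names are illustrative assumptions, not the thesis's actual features or data.

```python
# Hypothetical sketch: logistic regression ranks replacement candidates for an
# OCR misspelling. Features and data are illustrative, not from the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

def edit_distance(a, b):
    """Plain Levenshtein distance (rolling single-row dynamic program)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def features(error, candidate, context_count):
    """Feature vector for one (OCR error, candidate correction) pair.

    context_count would come from an n-gram resource such as Google Web 1T.
    """
    return [
        edit_distance(error, candidate),
        abs(len(error) - len(candidate)),
        np.log1p(context_count),          # how well the candidate fits the context
        float(error[0] == candidate[0]),  # OCR rarely corrupts the first character
    ]

# Toy training data: (error, candidate, context_count, is_correct) tuples.
train = [
    ("tbe", "the", 120000, 1), ("tbe", "tube", 300, 0),
    ("hcuse", "house", 90000, 1), ("hcuse", "horse", 8000, 0),
]
X = np.array([features(e, c, n) for e, c, n, _ in train])
y = np.array([label for *_, label in train])

model = LogisticRegression().fit(X, y)

# At correction time, score every candidate and pick the most probable one.
candidates = [("the", 120000), ("tube", 300)]
scores = model.predict_proba(
    np.array([features("tbe", c, n) for c, n in candidates])
)[:, 1]
print(candidates[int(np.argmax(scores))][0])
```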

    Text Detection in Natural Scenes and Technical Diagrams with Convolutional Feature Learning and Cascaded Classification

    An enormous number of digital images are generated and stored every day. Understanding text in these images is an important challenge with large impacts for academic, industrial, and domestic applications. Recent studies address the difficulty of separating text targets from noise and background, all of which vary greatly in natural scenes. To tackle this problem, we develop a text detection system that analyzes and utilizes visual information in a data-driven, automatic, and intelligent way. The proposed method incorporates features learned from data, including patch-based coarse-to-fine detection (Text-Conv), connected component extraction using region growing, and graph-based word segmentation (Word-Graph). Text-Conv is a sliding-window detector, with convolution masks learned using the Convolutional k-means algorithm (Coates et al., 2011). Unlike convolutional neural networks (CNNs), a single vector/layer of convolution mask responses is used to classify patches. An initial coarse detection considers both local and neighboring patch responses, followed by refinement using varying aspect ratios and rotations for a smaller local detection window. Different levels of visual detail from the ground truth are utilized in each step, first using constraints on bounding box intersections, and then a combination of bounding box and pixel intersections. Combining masks from different Convolutional k-means initializations (e.g., seeded using random vectors and then support vectors) improves performance. The Word-Graph algorithm uses contextual information to improve word segmentation and prune false character detections based on visual features and spatial context. Our system obtains pixel, character, and word detection f-measures of 93.14%, 90.26%, and 86.77%, respectively, on the ICDAR 2015 Robust Reading Focused Scene Text dataset, outperforming state-of-the-art systems and producing highly accurate text detection masks at the pixel level. To investigate the utility of our feature learning approach for other image types, we perform tests on 8-bit greyscale USPTO patent drawing diagram images. An ensemble of AdaBoost classifiers with different convolutional features (MetaBoost) is used to classify patches as text or background. The Tesseract OCR system is used to recognize characters in detected labels and enhance performance. With appropriate pre-processing and post-processing, f-measures of 82% for part label location and 73% for valid part label locations and strings are obtained, the best to date for the USPTO patent diagram dataset used in our experiments. To sum up, an intelligent refinement of convolutional k-means-based feature learning and novel automatic classification methods are proposed for text detection, obtaining state-of-the-art results without the need for strong prior knowledge. Different ground truth representations, along with features including edges, color, shape, and spatial relationships, are used coherently to improve accuracy. Different variations of feature learning are explored, e.g., support-vector-seeded clustering and MetaBoost, with results suggesting that increased diversity in learned features benefits convolution-based text detectors.
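    A rough sketch of the convolutional k-means feature-learning step named above: spherical k-means over normalized image patches, with the learned centroids acting as convolution masks and a single layer of rectified mask responses serving as the patch feature vector. The normalization, random-patch initialization, and rectification choices are assumptions for illustration, not the exact pipeline of the thesis.

```python
# Hypothetical sketch of learning convolution masks with (spherical) k-means
# in the spirit of Coates et al. (2011); details are illustrative only.
import numpy as np

def learn_masks(patches, n_masks=64, n_iter=10, rng=np.random.default_rng(0)):
    """Cluster normalized image patches; the centroids act as convolution masks.

    patches: array of shape (n_patches, patch_dim), e.g. flattened 8x8 windows.
    """
    X = patches - patches.mean(axis=1, keepdims=True)
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-8

    # Initialize centroids from random patches (support-vector seeding is one
    # alternative mentioned in the abstract).
    D = X[rng.choice(len(X), n_masks, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each patch to its most similar mask (dot-product similarity).
        assign = (X @ D.T).argmax(axis=1)
        for k in range(n_masks):
            members = X[assign == k]
            if len(members):
                c = members.sum(axis=0)
                D[k] = c / (np.linalg.norm(c) + 1e-8)
    return D

def patch_features(patch, masks):
    """Single layer of rectified mask responses used as the patch feature vector."""
    v = patch - patch.mean()
    v /= np.linalg.norm(v) + 1e-8
    return np.maximum(masks @ v, 0.0)

# Example: learn 64 masks from 10,000 random 8x8 patches, then featurize one patch.
patches = np.random.default_rng(1).random((10000, 64))
masks = learn_masks(patches)
feat = patch_features(patches[0], masks)   # feed this vector to a patch classifier
```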

    Reconnaissance de documents assistée: architecture logicielle et intégration de savoir-faire (Assisted document recognition: software architecture and integration of know-how)

    This thesis addresses document recognition from an assisted perspective, advocating an appropriate combination of human and machine capabilities. Our contributions tackle various aspects of the underlying software architecture. Both a study of existing systems and a projection on future applications of document recognition illustrate the need for cooperative environments. Several mechanisms are proposed to drive the human-machine dialogue and to allow recognition systems to improve with use. The various data involved in a recognition system are organized in a modular and homogeneous way, and all of this information is represented using the DAFS standard format. In our proposal, control is decentralized according to a multi-agent model. This conceptual scheme is then simulated on our development platform, using concurrent, distributed, and multi-language programming. An expressive solution is proposed for the coupling between the application kernel and a graphical user interface. A prototype is implemented to validate the whole architecture.
    Our software architecture takes advantage of typographical know-how through the use of standardized font support. This integrated approach lets us enhance ergonomics, extend the possible uses of the recognition results, and redefine some recognition techniques. A few innovative analyzers are described for optical character recognition, font identification, and segmentation. The first experiments show that our simple methods perform surprisingly well relative to the state of the art. In addition, we contribute to the problem of measuring the performance of cooperative recognition systems by introducing a new cost model. Our notation can describe assisted recognition scenarios in which the user takes part in the process and in which accuracy changes dynamically thanks to incremental learning. The cost model is used both in simulations and in experiments involving existing analyzers, allowing the dynamic behavior of assisted systems, compared with fully automatic approaches, to be observed.
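    The cost model itself is not specified in the abstract, so the following is a purely illustrative sketch of the general idea: total cost combines the user's verification effort with the correction of residual machine errors, and the error rate decays as corrections accumulate (incremental learning). The functional form, parameters, and numbers are assumptions, not the model defined in the thesis.

```python
# Purely illustrative cost model for assisted recognition: user verification cost
# plus expected correction cost, with the error rate decaying as the system
# learns from accumulated corrections. The form of the model is an assumption.

def session_cost(n_items, base_error_rate, cost_per_correction,
                 cost_per_verification, learning_rate=0.01):
    """Accumulate expected cost over a stream of items processed with user assistance."""
    total, corrected = 0.0, 0.0
    for _ in range(n_items):
        # Error rate decays as the system learns from past corrections.
        error_rate = base_error_rate / (1.0 + learning_rate * corrected)
        total += cost_per_verification          # the user always checks the item
        total += error_rate * cost_per_correction
        corrected += error_rate                 # corrections feed incremental learning
    return total

# Example: 1,000 items, 10% initial error rate, corrections cost 5x a verification.
print(session_cost(1000, 0.10, cost_per_correction=5.0, cost_per_verification=1.0))
```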

    Adaptive Analysis and Processing of Structured Multilingual Documents

    Digital document processing is becoming popular for applications in office and library automation, banking and postal services, publishing houses, and communication management. In recent years, the demand for tools capable of searching written and spoken sources of multilingual information has increased tremendously, and the bilingual dictionary is one of the important resources for providing the required information. Processing and analyzing bilingual dictionaries raises the challenge of dealing with many different scripts, some of which are unknown to the designer. A framework is presented to adaptively analyze and process structured multilingual documents, where adaptability is applied to every step. The proposed framework involves: (1) general word-level script identification using Gabor filters; (2) font classification using the grating cell operator; (3) general word-level style identification using a Gaussian mixture model; (4) an adaptable Hindi OCR based on generalized Hausdorff image comparison; (5) a retargetable OCR with automatic training sample creation and its applications to different scripts; and (6) bootstrapping entry segmentation, which segments each page into functional entries for parsing. Experimental results on different scripts, such as Chinese, Korean, Arabic, Devanagari, and Khmer, demonstrate that the proposed framework can significantly reduce human effort by making each phase adaptive.
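    A small sketch of word-level script identification from Gabor filter responses, as named in step (1): a bank of Gabor filters at a few frequencies and orientations yields a response-energy feature vector per word image, which is classified by nearest script prototype. The filter-bank parameters, the nearest-prototype classifier, and the use of skimage's gabor function are illustrative assumptions, not the dissertation's exact configuration.

```python
# Hypothetical sketch of Gabor-filter-based script identification; feature
# design and classifier are illustrative, not the dissertation's configuration.
import numpy as np
from skimage.filters import gabor

def gabor_features(word_image, frequencies=(0.1, 0.2, 0.3),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean filter-response energy for each (frequency, orientation) pair."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(word_image, frequency=f, theta=t)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(feats)

def identify_script(word_image, prototypes):
    """Assign the script whose mean feature vector is closest (nearest prototype)."""
    feat = gabor_features(word_image)
    return min(prototypes, key=lambda s: np.linalg.norm(feat - prototypes[s]))

# Toy usage with a random "word image" and made-up prototypes; in practice the
# prototypes would be mean feature vectors of labeled training word images.
rng = np.random.default_rng(0)
img = rng.random((32, 96))
protos = {"Latin": gabor_features(rng.random((32, 96))),
          "Devanagari": gabor_features(rng.random((32, 96)))}
print(identify_script(img, protos))
```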