
    Ground Truth for Layout Analysis Performance Evaluation

    Over the past two decades a significant number of layout analysis (page segmentation and region classification) approaches have been proposed in the literature. Each approach has been devised for and/or evaluated using (usually small) application-specific datasets. While the need for objective performance evaluation of layout analysis algorithms is evident, no suitable dataset exists with ground truth that reflects the realities of everyday documents (widely varying layouts, complex entities, colour, noise etc.). The most significant impediment is the creation of accurate and flexibly represented ground truth, a task that is costly and must be carefully designed. This paper discusses the issues related to the design, representation and creation of ground truth in the context of a realistic dataset developed by the authors. The effectiveness of this ground truth has been demonstrated through its use in two international page segmentation competitions (ICDAR2003 and ICDAR2005).
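
    To make the representation question concrete: layout ground truth of this kind is typically a set of typed regions with polygonal outlines. The sketch below is a minimal Python model of that idea; the class and field names are illustrative assumptions, not the dataset's actual schema, which the abstract does not specify.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """One layout region: a polygonal outline plus a region-type label."""
    region_id: str
    region_type: str                  # e.g. "text", "image", "graphic", "table"
    polygon: list[tuple[int, int]]    # outline vertices in page pixel coordinates

@dataclass
class PageGroundTruth:
    """Ground truth for a single page image."""
    image_file: str
    width: int
    height: int
    regions: list[Region] = field(default_factory=list)

# Example: a page with one text region and one image region.
page = PageGroundTruth(
    image_file="page_001.png", width=2480, height=3508,
    regions=[
        Region("r1", "text",  [(100, 120), (1200, 120), (1200, 600), (100, 600)]),
        Region("r2", "image", [(100, 700), (1200, 700), (1200, 1500), (100, 1500)]),
    ],
)
```

    Polygons rather than plain rectangles are what makes such a representation flexible enough for the complex, non-rectangular layouts the paper emphasizes.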

    On-the-fly Historical Handwritten Text Annotation

    The performance of information retrieval algorithms depends upon the availability of ground truth labels annotated by experts. This is an important prerequisite, and difficulties arise when the annotated ground truth labels are incorrect or incomplete due to high levels of degradation. To address this problem, this paper presents a simple method for on-the-fly annotation of degraded historical handwritten text in ancient manuscripts. The proposed method aims at quick generation of ground truth and correction of inaccurate annotations such that the bounding box perfectly encapsulates the word and contains no added noise from the background or surroundings. This method will potentially help historians and researchers generate and correct word labels in a document dynamically. The effectiveness of the annotation method is empirically evaluated on an archival manuscript collection from well-known publicly available datasets.
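
    The abstract does not spell out how a loose box is corrected; one plausible core step is shrinking the box to the extent of the ink pixels it contains. The sketch below assumes a grayscale page and a fixed ink threshold, both simplifying assumptions (a heavily degraded manuscript would more likely need adaptive binarization).

```python
import numpy as np

def tighten_box(gray_page: np.ndarray,
                box: tuple[int, int, int, int],
                ink_threshold: int = 128) -> tuple[int, int, int, int]:
    """Shrink a loose (x0, y0, x1, y1) box so it tightly encloses the
    dark (ink) pixels inside it, dropping background margins."""
    x0, y0, x1, y1 = box
    crop = gray_page[y0:y1, x0:x1]
    ys, xs = np.nonzero(crop < ink_threshold)   # ink = darker than threshold
    if ys.size == 0:                            # no ink found: keep the box
        return box
    return (x0 + int(xs.min()), y0 + int(ys.min()),
            x0 + int(xs.max()) + 1, y0 + int(ys.max()) + 1)
```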

    The Robust Reading Competition Annotation and Evaluation Platform

    The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become a de facto evaluation standard for robust reading systems and algorithms. Concurrent with its second incarnation in 2011, a continuous effort started to develop an on-line framework to facilitate the hosting and management of competitions. This paper outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the competitions. The RRC Annotation and Evaluation Platform is a modular framework, fully accessible through on-line interfaces. It comprises a collection of tools and services for managing all processes involved in defining and evaluating a research task, from dataset definition to annotation management, evaluation specification and results analysis. Although the framework has been designed with robust reading research in mind, many of the provided tools are generic by design. All aspects of the RRC Annotation and Evaluation Framework are available for research use.
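
    The abstract does not describe the platform's evaluation protocols. As one hedged example of the kind of evaluation service such a platform hosts, the sketch below computes detection precision and recall under greedy IoU matching; the 0.5 threshold and the greedy one-to-one strategy are illustrative assumptions, not the RRC's actual protocol.

```python
Box = tuple[float, float, float, float]  # (x0, y0, x1, y1)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(detections: list[Box], ground_truth: list[Box],
                     thr: float = 0.5) -> tuple[float, float]:
    """Match each detection to its best unmatched ground-truth box,
    counting a true positive when IoU clears the threshold."""
    unmatched = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= thr:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```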

    Automatic Ground-truth Generation for Document Image Analysis and Understanding


    Development of a tool for the construction of "ground truth" for complex color images with text

    This report summarizes a final-year project for the Computer Engineering degree. It explains the main motivations behind the project and gives examples illustrating the resulting application. The software addresses the current need for ground-truth data for text segmentation algorithms operating on complex color images. All stages of the work are described in successive chapters, from the problem definition, planning, requirements and design through to a complete illustration of the resulting software and the ground-truth datasets it produces.

    Adaptive Methods for Robust Document Image Understanding

    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both the computational-complexity and threshold-selection points of view, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
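
    The thesis's binarization method itself is not given in the abstract. As a hedged illustration of what histogram-based threshold selection involves, the sketch below implements the standard Otsu criterion (maximizing between-class variance), a classical baseline rather than the author's algorithm.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Global threshold maximizing between-class variance (Otsu),
    computed from the 256-bin histogram of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    probs = hist / hist.sum()
    omega = np.cumsum(probs)                    # class-0 (background) mass
    mu = np.cumsum(probs * np.arange(256))      # cumulative intensity mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)            # boundary bins: 0/0 -> 0
    return int(np.argmax(sigma_b))

# Usage: ink is darker than the selected threshold.
# binary = gray <= otsu_threshold(gray)
```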

    Degradation Model for Ancient Document Images for the Generation of Semi-synthetic Data

    In the last two decades, the increase in document image digitization projects has led to intense research activity around document image processing and analysis algorithms (handwriting recognition, document structure analysis, spotting and indexing / retrieval of graphical elements, etc.). Many successful algorithms are based on learning (supervised, semi-supervised or unsupervised). In order to train such algorithms and to compare their performance, the document image analysis community needs publicly available annotated document image databases whose contents are exhaustive enough to be representative of the variations in the documents to be processed or analyzed. Building databases of real document images requires an automatic or manual annotation process. Since the performance of automatic annotation approaches is closely tied to the quality and completeness of the training data, annotation remains largely manual, a process that is complicated, subjective and tedious. To mitigate these difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to be more attractive. Such initiatives significantly reduce the cost and subjectivity of annotation, but difficulties remain; for example, transcriptions and text lines still have to be aligned manually. Since the 1990s, an alternative to systematically building manually labelled databases has been to generate semi-synthetic document images that mimic real ones. Semi-synthetic document image generation makes it possible to create, rapidly and cheaply, benchmarking databases for evaluating and training document processing and analysis algorithms. In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability), funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we propose new degradation models, or adapt existing ones, for ancient documents: a bleed-through model, a distortion model, a character degradation model, etc. Second, we apply these degradation models to generate semi-synthetic document image databases used for performance evaluation (e.g. the ICDAR2013 and GREC2013 competitions) and for performance improvement (by re-training handwriting recognition, segmentation and binarisation systems). This work has led to several national and international collaborations and joint publications, helping us to validate our degradation models and to demonstrate the usefulness of semi-synthetic document images for performance evaluation and re-training.
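
    The abstract names the degradation models without their formulations. As a hedged, minimal sketch of the bleed-through idea alone, the function below blends the mirrored verso side into the recto with a linear attenuation factor; this is an illustrative simplification, not the thesis's actual model.

```python
import numpy as np

def add_bleed_through(recto: np.ndarray, verso: np.ndarray,
                      strength: float = 0.3) -> np.ndarray:
    """Simulate ink bleed-through on 8-bit grayscale pages: the mirrored
    verso shows through the recto, attenuated by `strength` in [0, 1]."""
    mirrored = verso[:, ::-1].astype(np.float64)  # verso as seen through the page
    page = recto.astype(np.float64)
    # Darken the recto wherever the mirrored verso carries ink (low values).
    seen_through = (1.0 - strength) * 255.0 + strength * mirrored
    return np.minimum(page, seen_through).astype(np.uint8)
```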