1,215 research outputs found

    Automatic estimation of the readability of handwritten text

    Get PDF
    Publication in the conference proceedings of EUSIPCO, Lausanne, Switzerland, 2008

    Learning Surrogate Models of Document Image Quality Metrics for Automated Document Image Processing

    Full text link
    Computation of document image quality metrics often depends upon the availability of a ground truth image corresponding to the document. This limits the applicability of quality metrics in applications such as hyperparameter optimization of image processing algorithms that operate on-the-fly on unseen documents. This work proposes the use of surrogate models to learn the behavior of a given document quality metric on existing datasets where ground truth images are available. The trained surrogate model can later be used to predict the metric value on previously unseen document images without requiring access to ground truth images. The surrogate model is empirically evaluated on the Document Image Binarization Competition (DIBCO) and the Handwritten Document Image Binarization Competition (H-DIBCO) datasets.
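    A minimal sketch of the surrogate-model idea follows, assuming a scalar reference metric such as PSNR; the features, regressor and metric here are illustrative stand-ins, not the paper's actual pipeline:

    ```python
    # Illustrative sketch: learn to predict a ground-truth-based quality
    # metric from features of the degraded image alone.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def image_features(img: np.ndarray) -> np.ndarray:
        """Cheap global features computed without any ground truth."""
        g = img.astype(np.float64) / 255.0
        hist, _ = np.histogram(g, bins=16, range=(0.0, 1.0), density=True)
        return np.concatenate([[g.mean(), g.std()], hist])

    def psnr(img: np.ndarray, gt: np.ndarray) -> float:
        """Reference metric, only computable where ground truth exists."""
        mse = np.mean((img.astype(np.float64) - gt.astype(np.float64)) ** 2)
        return 99.0 if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

    def train_surrogate(images, ground_truths):
        """Training uses (image, ground truth) pairs, e.g. from DIBCO."""
        X = np.stack([image_features(im) for im in images])
        y = np.array([psnr(im, gt) for im, gt in zip(images, ground_truths)])
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X, y)
        return model

    def predict_quality(model, img):
        """Inference needs no ground truth, so the surrogate can score
        unseen documents, e.g. inside a hyperparameter-search loop."""
        return float(model.predict(image_features(img)[None, :])[0])
    ```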

    XDOCS: An Application to Index Historical Documents

    Get PDF
    Dematerialization and digitization of historical documents are key elements for their availability, preservation and diffusion. Unfortunately, the conversion from handwritten to digitized documents presents several technical challenges. The XDOCS project was created with the main goal of making historical documents available, and extending their usability, to a wide variety of audiences, such as scholars, institutions and libraries. In this paper the core elements of XDOCS, i.e. the page dewarping and word spotting techniques, are described, and two new applications, i.e. an annotation/indexing tool and a search tool, are presented.

    READ-BAD: A New Dataset and Evaluation Scheme for Baseline Detection in Archival Documents

    Full text link
    Text line detection is crucial for any application associated with Automatic Text Recognition or Keyword Spotting. Modern algorithms perform well on well-established datasets, since these comprise either clean data or simple/homogeneous page layouts. We have collected and annotated 2036 archival document images from different locations and time periods. The dataset contains varying page layouts and degradations that challenge text line segmentation methods. Well-established text line segmentation evaluation schemes such as Detection Rate or Recognition Accuracy require binarized data annotated at the pixel level. Producing ground truth by these means is laborious and not needed to determine a method's quality. In this paper we propose a new evaluation scheme that is based on baselines. The proposed scheme has no need for binarization and can handle skewed as well as rotated text lines. The ICDAR 2017 Competition on Baseline Detection and the ICDAR 2017 Competition on Layout Analysis for Challenging Medieval Manuscripts used this evaluation scheme. Finally, we present results achieved by a recently published text line detection algorithm.
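    The following is a simplified illustration of baseline-based evaluation, not the exact READ-BAD measure: predicted and ground-truth baselines are treated as polylines, resampled, and matched within a pixel tolerance, so no binarization or pixel-level ground truth is needed:

    ```python
    # Simplified baseline evaluation sketch (not the READ-BAD scheme).
    import numpy as np

    def sample_polyline(points, step=5.0):
        """Resample a polyline [(x, y), ...] at roughly uniform spacing."""
        pts = np.asarray(points, dtype=np.float64)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])
        s = np.arange(0.0, t[-1] + step, step)
        x = np.interp(s, t, pts[:, 0])
        y = np.interp(s, t, pts[:, 1])
        return np.stack([x, y], axis=1)

    def precision_recall(pred_lines, gt_lines, tol=10.0):
        """P/R over sampled baseline points. Skew and rotation need no
        special handling because polylines are compared point-to-point."""
        pred = np.concatenate([sample_polyline(l) for l in pred_lines])
        gt = np.concatenate([sample_polyline(l) for l in gt_lines])
        d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)
        precision = float(np.mean(d.min(axis=1) <= tol))
        recall = float(np.mean(d.min(axis=0) <= tol))
        return precision, recall
    ```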

    Design and Implementation Recognition System for Handwritten Hindi/Marathi Document

    Get PDF
    With the growing emphasis on the “paperless office”, more and more communication and storage of documents is performed digitally. Documents and files in the Hindi and Marathi languages that were once stored physically on paper are now being converted into electronic form in order to facilitate quicker additions, searches and modifications, as well as to prolong the life of such records. Because of this, there is great demand for software that automatically extracts, analyzes, recognizes and stores information from physical documents for later retrieval. Skew detection is used to determine text line positions in digitized documents, for automated page orientation, for skew angle detection in binary document images and in handwritten scripts, in skew compensation for Internet audio applications, and in the correction of scanned documents.
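    As a concrete illustration, here is a sketch of one classic skew-detection technique, projection-profile maximization; the abstract surveys application areas rather than prescribing a method, so this is an assumption about a representative approach:

    ```python
    # Classic skew detection via projection profiles (illustrative):
    # rotate a binarized page over candidate angles and pick the angle
    # that maximizes the variance of the horizontal projection, since
    # well-aligned text lines produce sharp ink/gap peaks.
    import numpy as np
    from scipy.ndimage import rotate

    def estimate_skew(binary_img: np.ndarray,
                      angle_range=5.0, step=0.25) -> float:
        """binary_img: 2-D array, text pixels 1, background 0.
        Returns the estimated skew angle in degrees."""
        best_angle, best_score = 0.0, -1.0
        for angle in np.arange(-angle_range, angle_range + step, step):
            rotated = rotate(binary_img, angle, reshape=False, order=0)
            profile = rotated.sum(axis=1)   # row-wise ink counts
            score = profile.var()           # sharp peaks <=> aligned lines
            if score > best_score:
                best_angle, best_score = angle, score
        return best_angle

    # Deskewing is then a rotation by the negative of the estimated angle.
    ```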

    Effective balancing error and user effort in interactive handwriting recognition

    Full text link
    This is the author’s accepted version of a work published in Pattern Recognition Letters, Volume 37, 1 February 2014, Pages 135-142 (DOI: 10.1016/j.patrec.2013.03.010).

    Transcription of handwritten text documents is an expensive and time-consuming task. Unfortunately, the accuracy of current state-of-the-art handwriting recognition systems cannot guarantee fully-automatic high-quality transcriptions, so we need to revert to the computer-assisted approach. Although this approach reduces the user effort needed to transcribe a given document, the transcription of handwritten text documents still requires complete manual supervision. An especially appealing scenario is the interactive transcription of handwritten documents, in which the user defines the amount of error that can be tolerated in the final transcribed document. Under this scenario, the transcription of a handwritten text document can be obtained efficiently by supervising only a certain number of incorrectly recognised words. In this work, we develop a new method for predicting the error rate in a block of automatically recognised words, and estimate how much effort is required to correct a transcription to a certain user-defined error rate. The proposed method is included in an interactive approach to transcribing handwritten text documents, which efficiently employs user interactions by means of active and semi-supervised learning techniques, along with a hypothesis recomputation algorithm based on constrained Viterbi search. Transcription results, in terms of the trade-off between user effort and transcription accuracy, are reported for two real handwritten documents, and prove the effectiveness of the proposed approach.

    The research leading to these results received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 287755 (transLectures). It was also supported by the EC (FEDER, FSE), the Spanish Government (MICINN, MITyC, “Plan E”; grants MIPRCV “Consolider Ingenio 2010”, MITTRAL (TIN2009-14633-C03-01), iTrans2 (TIN2009-14511) and FPU (AP2007-02867)) and the Generalitat Valenciana (grants Prometeo/2009/014 and GV/2010/067). Special thanks to Jesús Andrés for his fruitful discussions.

    Serrano Martínez-Santos, N.; Civera Saiz, J.; Sanchis Navarro, J.A.; Juan Císcar, A. (2014). Effective balancing error and user effort in interactive handwriting recognition. Pattern Recognition Letters 37(1):135-142. https://doi.org/10.1016/j.patrec.2013.03.010
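    A hedged sketch of the supervision-budget idea follows, using per-word confidence scores as a stand-in for the paper's actual error-rate predictor; all names and thresholds are illustrative, and the constrained-Viterbi recomputation is omitted:

    ```python
    # Illustrative sketch: estimate the expected error rate of a block of
    # recognised words from per-word confidences, and decide which words
    # the user must supervise to reach a target error rate.

    def expected_error_rate(confidences):
        """Treat 1 - confidence as the probability a word is wrong."""
        return sum(1.0 - c for c in confidences) / len(confidences)

    def words_to_supervise(confidences, target_error_rate):
        """Supervise the least-confident words first until the expected
        residual error rate of the unsupervised words meets the target."""
        order = sorted(range(len(confidences)), key=lambda i: confidences[i])
        supervised = set()
        for i in order:
            residual = sum(1.0 - confidences[j]
                           for j in range(len(confidences))
                           if j not in supervised) / len(confidences)
            if residual <= target_error_rate:
                break
            supervised.add(i)
        return supervised

    # e.g. words_to_supervise([0.95, 0.4, 0.8, 0.99], 0.05) returns the
    # indices of the words the user should correct, least confident first.
    ```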

    Indexing of Historical Document Images: Ad Hoc Dewarping Technique for Handwritten Text

    Get PDF
    This work presents a research project, named XDOCS, aimed at extending to a much wider audience the possibility of accessing a variety of historical documents published on the web. The paper presents an overview of the indexing process that will be used to achieve this goal, focusing on the adopted dewarping technique. The proposed dewarping approach performs its task with the help of a transformation model which maps the projection of a curved surface to a 2D rectangular area. The novelty introduced with this work is the possibility of applying dewarping to document images that contain both handwritten and typewritten text.
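    A much-simplified sketch of dewarping via a transformation model is given below, assuming text lines sag parabolically towards the page centre; the actual XDOCS model, which maps the projection of a curved surface to a 2D rectangle, is more elaborate:

    ```python
    # Toy dewarping sketch (not the XDOCS model): remap each target pixel
    # back to its assumed curved source position with OpenCV.
    import cv2
    import numpy as np

    def parabolic_dewarp(img: np.ndarray, bulge: float = 0.05) -> np.ndarray:
        """bulge controls how strongly rows are assumed to sag; the map
        sends each output pixel to the source pixel it pulls from."""
        h, w = img.shape[:2]
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        # Assumed sag is largest at the horizontal centre of the page and
        # vanishes at the left/right edges.
        norm_x = (xs - w / 2.0) / (w / 2.0)
        map_y = ys + bulge * h * (1.0 - norm_x ** 2)
        return cv2.remap(img, xs, map_y, interpolation=cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_REPLICATE)
    ```

    A real model would estimate the surface parameters from the image (e.g. from traced text lines) instead of fixing a single bulge value.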

    Readability Enhancement and Palimpsest Decipherment of Historical Manuscripts

    Get PDF
    This paper presents image acquisition and readability enhancement techniques for historical manuscripts developed in the interdisciplinary project “The Enigma of the Sinaitic Glagolitic Tradition” (Sinai II Project). We are mainly dealing with parchment documents originating from the 10th to the 12th centuries from St. Catherine’s Monastery on Mount Sinai. Their contents are being analyzed, fully or partly transcribed, and edited in the course of the project; other manuscripts are also taken into consideration for comparison. The main challenge derives from the fact that some of the manuscripts are in bad condition due to various kinds of damage, e.g. mold or washed-out or faded text, or contain palimpsest (i.e. overwritten) parts. Therefore, the manuscripts investigated are imaged with a portable multispectral imaging system. This non-invasive conservation technique has proven extremely useful for the examination and reconstruction of vanished text areas and erased or washed-off palimpsest texts. Compared to regular white light, illumination with specific wavelengths highlights particular details of the documents, i.e. the writing and writing material, ruling, and underwritten text. In order to further enhance the contrast of the degraded writings, several Blind Source Separation techniques are applied to the multispectral images, including Principal Component Analysis (PCA), Independent Component Analysis (ICA) and others. Furthermore, this paper reports on other recent developments in the Sinai II Project, i.e. document image dewarping and automatic layout analysis, on the image processing tool Paleo Toolbar (a recent result of another project related to our work), and on the launch of the series Glagolitica Sinaitica.
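    A minimal sketch of applying Blind Source Separation to a multispectral stack, with illustrative settings: each pixel's spectral response is treated as a mixed signal and unmixed with PCA or FastICA:

    ```python
    # Illustrative BSS on multispectral manuscript images: unmix the
    # per-pixel spectral responses to enhance faded or erased writing.
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    def unmix(bands: np.ndarray, method: str = "pca", n_components: int = 3):
        """bands: array of shape (n_bands, height, width), one image per
        illumination wavelength. Returns n_components source images."""
        n, h, w = bands.shape
        X = bands.reshape(n, h * w).T          # pixels x bands
        if method == "pca":
            model = PCA(n_components=n_components)
        else:
            model = FastICA(n_components=n_components, max_iter=1000)
        S = model.fit_transform(X)             # pixels x components
        return S.T.reshape(n_components, h, w)

    # The underwritten text of a palimpsest often concentrates in one of
    # the resulting components; which one must be judged visually.
    ```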

    DSS: Synthesizing long Digital Ink using Data augmentation, Style encoding and Split generation

    Full text link
    As text generative models can give increasingly long answers, we tackle the problem of synthesizing long text in digital ink. We show that the commonly used models for this task fail to generalize to long-form data, and how this problem can be solved by augmenting the training data, changing the model architecture and changing the inference procedure. These methods use a contrastive learning technique and are tailored specifically to the handwriting domain. They can be applied to any encoder-decoder model that works with digital ink. We demonstrate that our method halves the character error rate on long-form English data compared to a baseline RNN, and reduces it by 16% compared to the previous approach aimed at the same problem. We show that all three parts of the method improve the recognizability of the generated inks. In addition, we evaluate the synthesized data in a human study and find that people perceive most of the generated data as real.
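    A hedged sketch of the split-generation idea alone follows (the data augmentation and contrastive style encoder are omitted); `style_encoder` and `generator` are hypothetical objects, not the paper's API:

    ```python
    # Hypothetical sketch: synthesize long text as a sequence of short
    # chunks, each conditioned on the same style embedding so the
    # handwriting stays consistent across the whole ink.
    from typing import List

    def synthesize_long_ink(text: str, style_encoder, generator,
                            max_chunk_chars: int = 40) -> List:
        """Split at word boundaries so chunks stay within the length range
        the generator saw during training, then concatenate the strokes."""
        style = style_encoder.encode_reference_sample()  # fixed per writer
        chunks, current = [], ""
        for word in text.split():
            if len(current) + len(word) + 1 > max_chunk_chars and current:
                chunks.append(current)
                current = word
            else:
                current = f"{current} {word}".strip()
        if current:
            chunks.append(current)
        strokes = []
        for chunk in chunks:
            strokes.extend(generator.generate(chunk, style=style))
        return strokes
    ```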