126 research outputs found

    DeepOtsu: Document Enhancement and Binarization using Iterative Deep Learning

    This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike traditional methods, which predict the binary label of each pixel of the input image, we train the neural network to learn the degradations in document images and to produce a uniform, enhanced version of the degraded input, which allows the network to refine its output iteratively. Two different iterative methods are studied in this paper: recurrent refinement (RR), which uses the same trained neural network in each iteration for document enhancement, and stacked refinement (SR), which uses a stack of different neural networks for iterative output refinement. Given the learned uniform and enhanced image, the binarization map can easily be obtained by a global or local threshold. The experimental results on several public benchmark data sets show that the proposed methods provide a new, clean version of the degraded image which is suitable for visualization, and promising binarization results when the global Otsu threshold is applied to the enhanced images learned iteratively by the neural network. Comment: Accepted by Pattern Recognition
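
    As a rough illustration of the recurrent refinement (RR) idea described above, the sketch below repeatedly feeds the current enhanced estimate back through one trained image-to-image model and then binarizes the result with a global Otsu threshold. The `enhance_net` callable and the iteration count are placeholders, not the authors' released implementation.

```python
# Minimal sketch of recurrent refinement (RR) followed by global Otsu
# binarization. `enhance_net` stands for any trained image-to-image model
# (e.g. a U-Net) that predicts a cleaned version of its input; it is a
# placeholder, not the paper's released code.
import numpy as np
from skimage.filters import threshold_otsu

def recurrent_refinement(image, enhance_net, n_iters=3):
    """Iteratively refine a degraded document image with one trained model."""
    x = image.astype(np.float32)
    for _ in range(n_iters):
        x = enhance_net(x)          # the same network is reused in every iteration
    return x

def binarize(enhanced):
    """Global Otsu threshold applied to the learned, more uniform image."""
    t = threshold_otsu(enhanced)
    return (enhanced < t).astype(np.uint8)   # dark (text) pixels mapped to 1
```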

    Bleed-through removal in degraded documents


    Image speech combination for interactive computer assisted transcription of handwritten documents

    [EN] Handwritten document transcription aims to obtain the contents of a document in order to provide efficient information access to, among others, digitised historical documents. The increasing number of historical documents published by libraries and archives makes this an important task. In this context, the use of image processing and understanding techniques in conjunction with assistive technologies reduces the time and human effort required to obtain the final, perfect transcription. The assistive transcription system proposes a hypothesis, usually derived from a recognition process on the handwritten text image. The professional transcriber's feedback can then be used to obtain an improved hypothesis and speed up the final transcription. In this framework, a speech signal corresponding to the dictation of the handwritten text can be used as an additional source of information. This multimodal approach, which combines the image of the handwritten text with the speech of the dictation of its contents, can improve the hypotheses (initial and improved) offered to the transcriber. In this paper we study the feasibility of a multimodal interactive transcription system for an assistive paradigm known as Computer Assisted Transcription of Text Images. Different techniques for obtaining the multimodal combination are tested in this framework. The proposed multimodal approach yields a significant reduction of transcription effort with some multimodal combination techniques, allowing for a faster transcription process. Work partially supported by projects READ-674943 (European Union's H2020), SmartWays-RTC-2014-1466-4 (MINECO, Spain), and CoMUN-HaT-TIN2015-70924-C2-1-R (MINECO/FEDER), and by Generalitat Valenciana (GVA), Spain, under reference PROMETEOII/2014/030. Granell, E.; Romero, V.; Martínez-Hinarejos, C. (2019). Image speech combination for interactive computer assisted transcription of handwritten documents. Computer Vision and Image Understanding. 180:74-83. https://doi.org/10.1016/j.cviu.2019.01.009
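
    The paper evaluates several multimodal combination techniques; one simple illustrative possibility (not necessarily one of the techniques tested) is to re-rank the handwriting recognizer's n-best hypotheses with a log-linear combination of HTR and ASR scores, as sketched below. The function names and the interpolation weight are assumptions.

```python
# Illustrative log-linear re-ranking of n-best hypotheses from the handwriting
# recognizer (HTR) and the speech recognizer (ASR). This is only one simple way
# to combine the two modalities, not the specific methods evaluated in the paper.
def combine_nbest(htr_nbest, asr_scores, alpha=0.7):
    """htr_nbest: list of (transcription, log_score) pairs from the image model.
    asr_scores: dict mapping transcription -> log score from the speech model.
    alpha: interpolation weight (an assumed value, to be tuned on a dev set)."""
    rescored = []
    for hyp, htr_logp in htr_nbest:
        asr_logp = asr_scores.get(hyp, float("-inf"))
        rescored.append((alpha * htr_logp + (1 - alpha) * asr_logp, hyp))
    return max(rescored)[1]  # best hypothesis after combination
```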

    Text Line Segmentation of Historical Documents: a Survey

    There are huge numbers of historical documents in libraries and in various national archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade, and dedicated to documents of historical interest. Comment: 25 pages, submitted version. To appear in International Journal on Document Analysis and Recognition. Online version available at http://www.springerlink.com/content/k2813176280456k3
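
    Among the method families such a survey covers, projection-profile analysis is one of the classic baselines for text line segmentation; the sketch below shows that idea on an already-binarized page. The thresholds and parameter names are illustrative assumptions, not any method from the survey.

```python
# Minimal projection-profile baseline for text line segmentation.
# Assumes `binary` is a 2-D numpy array with text pixels equal to 1;
# `min_density` and `min_gap` are illustrative values, not tuned settings.
import numpy as np

def segment_lines(binary, min_density=2, min_gap=5):
    """Return (top, bottom) row ranges of detected text lines."""
    profile = binary.sum(axis=1)                 # amount of ink in each row
    lines, in_line, start = [], False, 0
    for row, count in enumerate(profile):
        if count >= min_density and not in_line:
            in_line, start = True, row           # a text line starts here
        elif count < min_density and in_line:
            in_line = False
            lines.append((start, row))           # the text line ends here
    if in_line:
        lines.append((start, binary.shape[0]))
    # merge segments separated by very small gaps (e.g. broken strokes)
    merged = []
    for top, bottom in lines:
        if merged and top - merged[-1][1] < min_gap:
            merged[-1] = (merged[-1][0], bottom)
        else:
            merged.append((top, bottom))
    return merged
```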

    Occode: an end-to-end machine learning pipeline for transcription of historical population censuses

    Machine learning approaches achieve high accuracy for text recognition and are therefore increasingly used for the transcription of handwritten historical sources. However, using machine learning in production requires a streamlined end-to-end machine learning pipeline that scales to the dataset size, and a model that achieves high accuracy with few manual transcriptions. In addition, the correctness of the model's results must be verified. This paper describes our lessons learned developing, tuning, and using the Occode end-to-end machine learning pipeline for transcribing 7.3 million rows of handwritten occupation codes in the Norwegian 1950 population census. We achieve an accuracy of 97% for the automatically transcribed codes, and we send 3% of the codes for manual verification. We verify that the occupation code distribution found in our results matches the distribution found in our training data, which should be representative of the census as a whole. We believe our approach and lessons learned are useful for other transcription projects that plan to use machine learning in production. The source code is available at: https://github.com/uit-hdl/rhd-code
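
    Two of the production concerns mentioned above, routing low-confidence predictions to manual verification and checking that the predicted code distribution matches the training data, can be sketched roughly as follows. The function names, the confidence threshold, and the drift measure are assumptions, not the Occode source code.

```python
# Simplified sketch of two production checks: routing low-confidence codes to
# manual verification, and measuring how far the predicted code distribution
# drifts from the training-data distribution. Illustrative only.
from collections import Counter

def route_predictions(predictions, threshold=0.9):
    """predictions: list of (row_id, code, confidence) tuples."""
    auto, manual = [], []
    for row_id, code, conf in predictions:
        (auto if conf >= threshold else manual).append((row_id, code))
    return auto, manual

def distribution_drift(predicted_codes, training_codes):
    """Total variation distance between the two code frequency distributions."""
    p, q = Counter(predicted_codes), Counter(training_codes)
    n_p, n_q = sum(p.values()), sum(q.values())
    codes = set(p) | set(q)
    return 0.5 * sum(abs(p[c] / n_p - q[c] / n_q) for c in codes)
```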

    Handwritten Text Recognition for Historical Documents in the tranScriptorium Project

    ""© Owner/Author 2014. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM, In Proceedings of the First International Conference on Digital Access to Textual Cultural Heritage (pp. 111-117) http://dx.doi.org/10.1145/2595188.2595193Transcription of historical handwritten documents is a crucial problem for making easier the access to these documents to the general public. Currently, huge amount of historical handwritten documents are being made available by on-line portals worldwide. It is not realistic to obtain the transcription of these documents manually, and therefore automatic techniques has to be used. tranScriptorium is a project that aims at researching on modern Handwritten Text Recognition (HTR) technology for transcribing historical handwritten documents. The HTR technology used in tranScriptorium is based on models that are learnt automatically from examples. This HTR technology has been used on a Dutch collection from 15th century selected for the tranScriptorium project. This paper provides preliminary HTR results on this Dutch collection that are very encouraging, taken into account that minimal resources have been deployed to develop the transcription system.The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 600707 - tranScriptorium and the Spanish MEC under the STraDa (TIN2012-37475-C02-01) research project.Sánchez Peiró, JA.; Bosch Campos, V.; Romero Gómez, V.; Depuydt, K.; De Does, J. (2014). Handwritten Text Recognition for Historical Documents in the tranScriptorium Project. ACM. https://doi.org/10.1145/2595188.2595193

    Handwritten Text Line Detection and Classification based on HMMs

    [EN] In this paper we present an approach to text line analysis and detection in handwritten documents based on Hidden Markov Models, a technique widely used in other handwriting and speech recognition tasks. It is shown that text line analysis and detection can be solved using a more formal methodology, in contrast to most of the heuristic approaches found in the literature. Our approach not only provides the best position coordinates for each of the vertical page regions but also labels them, thereby surpassing the traditional heuristic methods. In our experiments we demonstrate the performance of the approach (both in line analysis and detection) and study the impact of increasingly constrained "vertical layout language models" and morphologic models on text line detection and classification accuracy. Through this experimentation we also show the improvement in the quality of the baselines yielded by our approach in comparison with a state-of-the-art heuristic method based on vertical projection profiles. Bosch Campos, V. (2012). Handwritten Text Line Detection and Classification based on HMMs. http://hdl.handle.net/10251/17964
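
    A toy version of the HMM idea described above is sketched below: each image row is labelled "text" or "gap" by Viterbi decoding over a crude projection-profile observation. The real system uses richer morphologic models and a vertical layout language model; all probabilities here are invented for illustration.

```python
# Toy two-state Viterbi labelling of image rows (gap vs. text) from the row
# projection profile. Emission and transition probabilities are assumptions
# chosen only to make the example self-contained and runnable.
import numpy as np

def viterbi_rows(profile, switch_prob=0.05):
    """profile: 1-D array of ink counts per row. Returns 0/1 (gap/text) per row."""
    obs = (profile > profile.mean()).astype(int)        # crude binary observation
    log_e = np.log(np.array([[0.9, 0.1],                # gap rows emit mostly 0
                             [0.1, 0.9]]))              # text rows emit mostly 1
    log_t = np.log(np.array([[1 - switch_prob, switch_prob],
                             [switch_prob, 1 - switch_prob]]))
    n, k = len(obs), 2
    dp = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    dp[0] = np.log([0.5, 0.5]) + log_e[:, obs[0]]
    for i in range(1, n):
        for s in range(k):
            scores = dp[i - 1] + log_t[:, s]
            back[i, s] = scores.argmax()
            dp[i, s] = scores.max() + log_e[s, obs[i]]
    labels = np.zeros(n, dtype=int)
    labels[-1] = dp[-1].argmax()
    for i in range(n - 1, 0, -1):                        # backtrack the best path
        labels[i - 1] = back[i, labels[i]]
    return labels
```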

    Information Preserving Processing of Noisy Handwritten Document Images

    Many pre-processing techniques that normalize artifacts and clean noise induce anomalies due to discretization of the document image. Important information that could be used at later stages may be lost. A proposed composite-model framework takes into account pre-printed information, user-added data, and digitization characteristics. Its benefits are demonstrated by experiments with statistically significant results. Separating pre-printed ruling lines from user-added handwriting shows how ruling lines impact people's handwriting and how they can be exploited for identifying writers. Ruling line detection based on multi-line linear regression reduces the mean error of counting them from 0.10 to 0.03, 6.70 to 0.06, and 0.13 to 0.02, compared to an HMM-based approach on three standard test datasets, thereby reducing human correction time by 50%, 83%, and 72% on average. On 61 page images from 16 rule-form templates, the precision and recall of form cell recognition are increased by 2.7% and 3.7%, compared to a cross-matrix approach. Compensating for and exploiting ruling lines during feature extraction rather than pre-processing raises the writer identification accuracy from 61.2% to 67.7% on a 61-writer noisy Arabic dataset. Similarly, counteracting page-wise skew by subtracting it or transforming contours in a continuous coordinate system during feature extraction improves the writer identification accuracy. An implementation study of contour-hinge features reveals that utilizing the full probability distribution function matrix improves the writer identification accuracy from 74.9% to 79.5%.
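
    The last result above concerns contour-hinge features; a minimal sketch of building such a joint angle histogram (the "full PDF matrix") from an ordered contour is shown below. The leg length and the number of angle bins are illustrative choices, not the settings used in the thesis.

```python
# Minimal contour-hinge descriptor: at every contour point take the angles of
# two short "legs" along the contour and accumulate them in a joint histogram.
# The normalized matrix serves as a writer descriptor; parameters are assumed.
import numpy as np

def contour_hinge_pdf(contour, leg=5, bins=12):
    """contour: (N, 2) array of ordered (x, y) contour points."""
    hist = np.zeros((bins, bins))
    n = len(contour)
    for i in range(n):
        p = contour[i]
        a = contour[(i - leg) % n] - p        # backward leg
        b = contour[(i + leg) % n] - p        # forward leg
        phi1 = np.arctan2(a[1], a[0]) % (2 * np.pi)
        phi2 = np.arctan2(b[1], b[0]) % (2 * np.pi)
        i1 = int(phi1 / (2 * np.pi) * bins) % bins
        i2 = int(phi2 / (2 * np.pi) * bins) % bins
        hist[i1, i2] += 1
    return hist / hist.sum()                  # full joint PDF matrix
```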

    Multimodal Interactive Transcription of Handwritten Text Images

    This thesis presents a new interactive and multimodal framework for the transcription of handwritten documents. This approach, far from providing the complete transcription automatically, aims to assist the expert in the hard task of transcribing. To date, the available handwritten text recognition systems do not provide transcriptions that are acceptable to users, and human intervention is generally required to correct the obtained transcriptions. These systems have proven to be really useful in restricted applications with limited vocabularies (such as the recognition of postal addresses or of numerical amounts on bank cheques), achieving acceptable results in this kind of task. However, when working with unconstrained handwritten documents (such as old manuscripts or spontaneous text), current technology only achieves unacceptable results. The interactive scenario studied in this thesis allows a more effective solution. In this scenario, the recognition system and the user cooperate to generate the final transcription of the text image. The system uses the text image and a previously validated part of the transcription (the prefix) to propose a possible continuation. Then, the user finds and corrects the next error produced by the system, thereby generating a new, longer prefix. This new prefix is used by the system to suggest a new hypothesis. The technology used is based on hidden Markov models and n-grams. These models are used here in the same way as in automatic speech recognition. Some modifications to the conventional definition of n-grams have been necessary to take the user feedback into account in this system. Romero Gómez, V. (2010). Multimodal Interactive Transcription of Handwritten Text Images [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8541
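
    The interaction protocol described in the abstract (the system completes a validated prefix, the user corrects the first error, and the corrected prefix is fed back) can be simulated roughly as follows; `recognize_suffix` is a placeholder for the prefix-conditioned HMM/n-gram decoder, not the thesis implementation.

```python
# Simulation of the interactive prefix-based protocol: the system completes a
# validated prefix, the user corrects the first wrong word, and the corrected
# word is appended to the prefix before the next decoding round.
def interactive_transcription(image, reference_words, recognize_suffix):
    prefix, corrections = [], 0
    while len(prefix) < len(reference_words):
        suffix = recognize_suffix(image, prefix)      # system hypothesis
        if not suffix:                                # nothing proposed: user types next word
            prefix.append(reference_words[len(prefix)])
            corrections += 1
            continue
        for word in suffix:
            target = reference_words[len(prefix)]
            if word == target:
                prefix.append(word)                   # validated at no cost
                if len(prefix) == len(reference_words):
                    break
            else:
                prefix.append(target)                 # single-word correction
                corrections += 1
                break                                 # re-decode with the longer prefix
    return " ".join(prefix), corrections
```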

    Visual image processing in various representation spaces for documentary preservation

    This thesis establishes an advanced image processing framework for the enhancement and restoration of historical document images (HDI) in both intensity (gray-scale or color) and multispectral (MS) representation spaces. It provides three major contributions: 1) the binarization of gray-scale HDI; 2) the visual quality restoration of MS HDI; and 3) automatic reference data (RD) estimation for HDI binarization. HDI binarization is an enhancement technique that produces bi-level information which is easy to handle with analysis methods (OCR, for instance) and is less computationally costly to process than 256 levels of grey or color images. Restoring the visual quality of HDI in an MS representation space enhances their legibility, which is not possible with conventional intensity-based restoration methods, and HDI legibility is the main concern of historians and librarians wishing to transfer knowledge and revive ancient cultural heritage. The use of MS imaging systems is a new and attractive research trend in the field of numerical processing of cultural heritage documents. In this thesis, these systems are also used to automatically estimate more accurate RD for the evaluation of HDI binarization algorithms, in order to track the level of human performance. Our first contribution, a new adaptive method of intensity-based binarization, is defined at the outset. Since degradation is present across document images, binarization methods must be adapted to handle degradation phenomena locally. Unfortunately, existing adaptive methods are not effective, as they are not able to capture weak text strokes, which degrades the performance of character recognition engines. The proposed approach first detects a subset of the most probable text pixels, which are used to locally estimate the parameters of the two classes of pixels (text and background), and then performs a simple maximum likelihood (ML) classification to locally assign the remaining pixels based on their class membership. To the best of our knowledge, this is the first time local parameter estimation and classification in an ML framework have been introduced for HDI binarization, with promising results. A limitation of this method, as with other intensity-based enhancement methods, is that it is not effective in dealing with severely degraded HDI. Developing more advanced methods based on MS information is therefore a promising alternative avenue of research. In the second contribution, a novel approach to the visual restoration of HDI is defined. The approach is aimed at providing end users (historians, librarians, etc.) with better HDI visualization; specifically, it aims to remove degradations while keeping the original appearance of the HDI intact. Practically, this problem cannot be solved by conventional intensity-based restoration methods. To cope with these limitations, MS imaging is used to produce additional spectral images in the invisible light (infrared and ultraviolet) range, which gives greater contrast to objects in the documents. The inpainting-based variational framework proposed here for HDI restoration involves isolating the degradation phenomena in the infrared spectral images and then inpainting them in the visible spectral images. The final color image to visualize is therefore reconstructed from the restored visible spectral images. To the best of our knowledge, this is the first time the inpainting technique has been introduced for MS HDI. The experimental results are promising, and our objective, in collaboration with the BAnQ (Bibliothèque et Archives nationales de Québec), is to push heritage documents into the public domain and build an intelligent engine for accessing them. It is useful to note that the proposed model can be extended to other MS-based image processing tasks.
Our third contribution considers a new problem, RD estimation, in order to show the importance of working with MS images rather than gray-scale or color images. RD are mandatory for comparing different binarization algorithms, and they are usually generated by an expert. However, an expert's RD is always subject to mislabeling and judgment errors, especially in the case of degraded data in restricted representation spaces (gray-scale or color images). In the proposed method, multiple RD generated by several experts are used in combination with MS HDI to estimate new, more accurate RD. The idea is to include the agreement of experts about labels and the multivariate data fidelity in a single Bayesian classification framework to estimate the a posteriori probability of the new labels forming the final estimated RD. Our experiments show that the estimated RD are more accurate than an expert's RD. To the best of our knowledge, no similar work combining binary data and multivariate data for the estimation of RD has been conducted.
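
    A much-simplified, whole-image version of the maximum-likelihood binarization idea in the first contribution is sketched below: seed probable text and background pixels, fit a Gaussian to each class, and assign every pixel to the more likely class. The thesis estimates the parameters locally per region; the percentile choices here are assumptions.

```python
# Simplified illustration of maximum-likelihood binarization: seed "sure"
# text/background pixels, estimate Gaussian parameters for both classes from
# the seeds, then label every pixel with the more likely class. The thesis
# does this locally; this sketch does it globally for brevity.
import numpy as np

def ml_binarize(gray):
    """gray: 2-D float array in [0, 255]; returns 1 for text, 0 for background."""
    lo, hi = np.percentile(gray, 10), np.percentile(gray, 90)
    text_seed = gray[gray <= lo]            # darkest pixels: probable text
    bg_seed = gray[gray >= hi]              # brightest pixels: probable background
    mu_t, sd_t = text_seed.mean(), text_seed.std() + 1e-6
    mu_b, sd_b = bg_seed.mean(), bg_seed.std() + 1e-6

    def log_gauss(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

    return (log_gauss(gray, mu_t, sd_t) > log_gauss(gray, mu_b, sd_b)).astype(np.uint8)
```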