
    Binarisation Algorithms Analysis on Document and Natural Scene Images

    Binarisation plays an important role in systems for text extraction from images, a prominent area in digital image processing. The primary goal of binarisation techniques is to convert colour and grayscale images into black-and-white images so that the overall computational overhead can be minimised. Binarisation has a great impact on the performance of a text extraction system, and such systems have a number of applications, such as navigation aids for visually impaired persons, automatic text extraction from document images, and number plate detection for enforcing traffic rules. The present study analysed the performance of well-known binarisation algorithms on degraded documents and camera-captured images. The statistical parameters Precision, Recall, F-measure, and PSNR are used to evaluate the performance. To judge the suitability of each binarisation method for text preservation in natural scene images, we have also considered visual observation. DOI: 10.17762/ijritcc2321-8169.15083
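    As a rough illustration of the evaluation protocol described above, the sketch below computes Precision, Recall, F-measure, and PSNR for a binarised image against a ground-truth mask, using Otsu thresholding as a stand-in binarisation method. The file names and the ink-equals-black convention are assumptions for the example, not details from the paper.

```python
# Minimal sketch: evaluating a binarisation result against a ground-truth mask.
# Assumes text pixels are black (0) on a white (255) background in both images.
import numpy as np
import cv2

def evaluate_binarisation(pred, gt):
    """pred, gt: uint8 binary images, text = 0, background = 255."""
    pred_text = pred == 0
    gt_text = gt == 0
    tp = np.logical_and(pred_text, gt_text).sum()
    fp = np.logical_and(pred_text, ~gt_text).sum()
    fn = np.logical_and(~pred_text, gt_text).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f_measure = 2 * precision * recall / (precision + recall + 1e-9)
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / (mse + 1e-9))
    return precision, recall, f_measure, psnr

# Hypothetical file names; Otsu stands in for the binarisation method under test.
gray = cv2.imread("degraded_page.png", cv2.IMREAD_GRAYSCALE)
gt = cv2.imread("ground_truth.png", cv2.IMREAD_GRAYSCALE)
_, pred = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(evaluate_binarisation(pred, gt))
```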

    Subjective and objective quality assessment of ancient degraded documents

    Archiving, restoration, and analysis of damaged manuscripts have increased greatly in recent decades. Usually, these documents are physically degraded because of aging and improper handling. They also cannot be processed manually, because a massive volume of such documents exists in libraries and archives around the world. Therefore, automatic methodologies are needed to preserve and to process their content. These documents are usually processed through their images. Degraded document image processing is a difficult task, mainly because of the existing physical degradations. While it can be very difficult to accurately locate and remove such distortions, analyzing the severity and type(s) of these distortions is feasible. Such analysis provides useful information on the type and severity of degradations and has a number of applications.

    The main contributions of this thesis are models for objectively assessing the physical condition of document images and for classifying their degradations. In this thesis, three datasets of degraded document images, along with subjective ratings for each image, are developed. In addition, three no-reference document image quality assessment (NR-DIQA) metrics are proposed for historical and medieval document images. It should be noted that degraded medieval document images are a subset of historical document images and may contain both graphical and textual content. Finally, we propose a degradation classification model to identify common distortion types in old document images. Existing no-reference image quality assessment (NR-IQA) metrics are not designed to assess physical document distortions.

    In the first contribution, we propose the first dataset of degraded document images along with human opinion scores for each document image. This dataset is introduced to evaluate the quality of historical document images. We also propose an objective NR-DIQA metric based on the statistics of the mean subtracted contrast normalized (MSCN) coefficients computed from segmented layers of each document image. The segmentation into four layers of foreground and background is based on an analysis of log-Gabor filter responses, under the assumption that the sensitivity of the human visual system (HVS) differs at text and non-text locations. Experimental results show that the proposed metric has comparable or better performance than state-of-the-art metrics while maintaining moderate complexity.

    Degradation identification and quality assessment can complement each other by providing information on both the type and the severity of degradations in document images. Therefore, in the second contribution, we introduce a multi-distortion historical document image database that can be used for research on quality assessment of degraded documents as well as degradation classification. The developed dataset contains historical document images classified into four categories based on their distortion types, namely paper translucency, stain, readers’ annotations, and worn holes. An efficient NR-DIQA metric is then proposed based on three sets of spatial and frequency image features extracted from two layers of text and non-text. In addition, these features are used to estimate the probability of the four aforementioned physical distortions for the first time in the literature. Both the proposed quality assessment and degradation classification models deliver very promising performance.
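    The sketch below shows how MSCN coefficients of the kind mentioned above can be computed for a single segmented layer. The 7x7 Gaussian window and the constant C are common defaults assumed here, not necessarily the thesis's exact settings.

```python
# Minimal sketch of MSCN (mean subtracted contrast normalized) coefficients
# for one image layer, as used by BRISQUE-style no-reference metrics.
import numpy as np
import cv2

def mscn_coefficients(layer, sigma=7.0 / 6.0, C=1.0):
    layer = layer.astype(np.float64)
    mu = cv2.GaussianBlur(layer, (7, 7), sigma)                 # local mean
    mu_sq = mu * mu
    var = cv2.GaussianBlur(layer * layer, (7, 7), sigma) - mu_sq
    sigma_map = np.sqrt(np.abs(var))                            # local standard deviation
    return (layer - mu) / (sigma_map + C)

# Statistics of these coefficients (e.g. variance, skewness, kurtosis) per
# segmented layer would then feed the quality model.
```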
    Finally, in the third contribution, we develop a dataset and a quality assessment metric for degraded medieval document (DMD) images. This type of degraded image contains both textual and pictorial information. The introduced DMD dataset is the first in its category that also provides human ratings. We also propose a new no-reference metric to evaluate the quality of the DMD images in the developed dataset. The proposed metric is based on the extraction of several statistical features from three layers of text, non-text, and graphics. The segmentation is based on color saliency, with the assumption that pictorial parts are colorful, and it follows the HVS by giving different weights to each layer. The experimental results validate the effectiveness of the proposed NR-DIQA strategy for DMD images.
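    As a loose illustration of the layer-wise idea only (not the thesis's actual model), the sketch below separates a colorful "graphics" layer by saturation, splits the rest into text and non-text with Otsu thresholding, and pools per-layer scores with HVS-style weights. The saturation threshold, the weights, and score_fn are all hypothetical placeholders.

```python
# Illustrative layer split and weighted pooling; parameters are placeholders.
import numpy as np
import cv2

def layered_quality(bgr, score_fn, weights=(0.5, 0.2, 0.3)):
    """score_fn(gray, mask) is any per-layer quality statistic supplied by the caller."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    graphics = hsv[..., 1] > 80                       # saturated pixels -> pictorial layer
    _, ink = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    text = (ink > 0) & ~graphics                      # dark, non-colorful pixels -> text layer
    non_text = ~text & ~graphics                      # remaining background layer
    scores = [score_fn(gray, mask) for mask in (text, non_text, graphics)]
    return float(np.dot(weights, scores))             # HVS-style weighted combination
```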

    Deep Unrestricted Document Image Rectification

    In recent years, tremendous efforts have been made on document image rectification, but existing advanced algorithms are limited to processing restricted document images, i.e., the input images must contain a complete document. When the captured image involves only a local text region, rectification quality degrades and becomes unsatisfactory. Our previously proposed DocTr, a transformer-assisted network for document image rectification, also suffers from this limitation. In this work, we present DocTr++, a novel unified framework for document image rectification, without any restrictions on the input distorted images. Our major technical improvements can be summarized in three aspects. Firstly, we upgrade the original architecture by adopting a hierarchical encoder-decoder structure for multi-scale representation extraction and parsing. Secondly, we reformulate the pixel-wise mapping relationship between unrestricted distorted document images and their distortion-free counterparts; the obtained data are used to train our DocTr++ for unrestricted document image rectification. Thirdly, we contribute a real-world test set and metrics applicable for evaluating rectification quality. To the best of our knowledge, this is the first learning-based method for the rectification of unrestricted document images. Extensive experiments are conducted, and the results demonstrate the effectiveness and superiority of our method. We hope our DocTr++ will serve as a strong baseline for generic document image rectification, prompting the further advancement and application of learning-based algorithms. The source code and the proposed dataset are publicly available at https://github.com/fh2019ustc/DocTr-Plus
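    The network itself is not reproduced here, but the final unwarping step implied by a pixel-wise mapping can be sketched as follows. backward_map is an assumed H x W x 2 array giving, for each output pixel, its source coordinates in the distorted photo; this is an illustrative convention, not an output format confirmed by the paper.

```python
# Sketch of applying a predicted backward map to unwarp a distorted photo.
import numpy as np
import cv2

def apply_backward_map(distorted, backward_map):
    map_x = backward_map[..., 0].astype(np.float32)   # source x-coordinate per output pixel
    map_y = backward_map[..., 1].astype(np.float32)   # source y-coordinate per output pixel
    return cv2.remap(distorted, map_x, map_y, cv2.INTER_LINEAR)
```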

    Restoration and segmentation of machine printed documents.

    OCR (Optical Character Recognition) has long been confronted with the problem of recognizing degraded document images, such as text overlapping with non-text symbols, touching characters, etc. The recognition rate for such degraded document images becomes unacceptable, or recognition fails completely, if pre-processing algorithms are not applied before segmentation and recognition. Therefore, the principal objective of this thesis is to develop effective algorithms for tackling these problems in the field of document analysis. We focus our efforts on the following aspects:
    1. A morphological approach has been developed to extract text strings from regular, periodic overlapping text/background images, since most OCR systems can only read traditional characters: black characters on a uniform white background, or vice versa. The proposed text character extraction algorithms accommodate document images that contain various kinds of periodically distributed background symbols. The underlying strategy is to maximize background component removal while minimizing the shape distortion of text characters by using appropriate morphological operations.
    2. Real-world images, which are frequently degraded by human-induced interference strokes, are inadequate for processing by document analysis systems. To process document images containing handwritten interference marks, which do not possess the periodic property, a new algorithm combining a thinning technique and orientation attributes of connected components has been developed to effectively segment handwritten interference strokes. Morphological operations based on the orientation map and skeleton images are used to prevent the flooding-water effect of conventional morphological operations when removing interference strokes.
    3. Segmenting a word into its character components is one of the most critical steps in document recognition systems. Any failure or error in this segmentation step can lead to a critical loss of information from documents. In this thesis, we propose new algorithms for resolving the ambiguities in segmenting touching characters. A modified segmentation discrimination function is presented for segmenting touching characters based on the pixel projection and profile projection. A dynamic recursive segmentation algorithm has been developed to effectively search for correct cutting points in touching character components (a simplified sketch of the projection cue follows this abstract).
    Based on 12 pages of NEWSLINE, the University of Windsor's publication, a 99.6% character recognition accuracy has been achieved.
    Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1996 .L52. Source: Dissertation Abstracts International, Volume: 59-08, Section: B, page: 4336. Advisers: M. Ahmadi; M. Shridhar. Thesis (Ph.D.)--University of Windsor (Canada), 1996
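    A simplified sketch of the projection-profile cue mentioned in point 3: columns containing little ink are candidate cut points between touching characters. The max_ink threshold is a placeholder, and the thesis's full discrimination function and recursive search are not reproduced.

```python
# Candidate cut columns from a vertical pixel projection of a word image.
import numpy as np

def candidate_cut_columns(word_bin, max_ink=2):
    """word_bin: 2-D binary array (1 = ink) for a single word image."""
    column_profile = word_bin.sum(axis=0)              # vertical pixel projection
    interior = np.arange(1, word_bin.shape[1] - 1)     # ignore the outer border columns
    return [c for c in interior if column_profile[c] <= max_ink]
```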

    Recognizing Degraded Handwritten Characters

    In this paper, Slavonic manuscripts from the 11th century written in Glagolitic script are investigated. State-of-the-art optical character recognition methods produce poor results for degraded handwritten document images, largely due to a lack of suitable results from basic pre-processing steps such as binarization and image segmentation. Therefore, a new, binarization-free approach is presented that is independent of pre-processing deficiencies. It additionally incorporates local information in order to recognize fragmented or faded characters as well. The proposed algorithm consists of two steps: character classification and character localization. First, scale-invariant feature transform (SIFT) features are extracted and classified using support vector machines (SVMs). On this basis, interest points are clustered according to their spatial information. Then, characters are localized and eventually recognized by a weighted voting scheme over the pre-classified local descriptors. Preliminary results show that the proposed system can handle highly degraded manuscript images with background noise, e.g. stains, tears, and faded characters.
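    A minimal sketch of the descriptor-classification step described above: SIFT descriptors are extracted and each is labelled by a pre-trained SVM. The spatial clustering and weighted voting for localization are omitted, and svm is assumed to be a scikit-learn classifier trained beforehand; this is an illustration, not the authors' implementation.

```python
# Classify SIFT descriptors of a manuscript image with a pre-trained SVM.
import cv2

def classify_local_descriptors(gray, svm):
    """gray: uint8 grayscale image; svm: pre-trained classifier with a predict() method."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return [], []
    labels = svm.predict(descriptors)                  # one character class per descriptor
    return keypoints, labels
```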

    On-the-fly Historical Handwritten Text Annotation

    The performance of information retrieval algorithms depends upon the availability of ground truth labels annotated by experts. This is an important prerequisite, and difficulties arise when the annotated ground truth labels are incorrect or incomplete due to high levels of degradation. To address this problem, this paper presents a simple method to perform on-the-fly annotation of degraded historical handwritten text in ancient manuscripts. The proposed method aims at quick generation of ground truth and correction of inaccurate annotations, such that the bounding box perfectly encapsulates the word and contains no added noise from the background or surroundings. This method will potentially help historians and researchers generate and correct word labels in a document dynamically. The effectiveness of the annotation method is empirically evaluated on an archival manuscript collection from well-known publicly available datasets.
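    As a rough illustration of the box-tightening idea (not the paper's actual algorithm), the sketch below shrinks a loose word box to the bounding box of its foreground ink, so the corrected annotation excludes background noise. Otsu thresholding is an assumed stand-in for whatever foreground estimate the method uses.

```python
# Tighten a loose (x, y, w, h) word box to the extent of its foreground ink.
import numpy as np
import cv2

def tighten_box(gray, box):
    x, y, w, h = box
    crop = gray[y:y + h, x:x + w]
    _, ink = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(ink)
    if len(xs) == 0:
        return box                                     # no ink detected: keep the original box
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
```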

    Unsupervised Text Extraction from G-Maps

    This paper presents a text extraction method for Google Maps and GIS maps/images. Because the approach is unsupervised, no prior knowledge or training set about the textual and non-textual parts is required. Fuzzy C-Means clustering is used for image segmentation, and the Prewitt method is used to detect edges. Connected component analysis and a gridding technique enhance the correctness of the results. The proposed method reaches a 98.5% accuracy level on the experimental data sets.
    Comment: Proc. IEEE Conf. #30853, International Conference on Human Computer Interactions (ICHCI'13), Chennai, India, 23-24 Aug., 2013
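    The pipeline can be sketched roughly as below, with a small hand-rolled fuzzy C-means on pixel intensities, a Prewitt edge magnitude, and connected-component labelling. The cluster count, fuzziness m, iteration count, and the "text is the sparser cluster" heuristic are assumptions, and the gridding step is omitted.

```python
# Rough pipeline sketch: fuzzy C-means segmentation, Prewitt edges, connected components.
import numpy as np
from scipy import ndimage

def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, iters=30):
    """Cluster 1-D intensity values; return the fuzzy membership matrix (N x c)."""
    rng = np.random.default_rng(0)
    u = rng.random((values.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))             # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u

def extract_text_components(gray):
    """gray: 2-D uint8 map image. Returns labelled text-like components and edge magnitude."""
    u = fuzzy_cmeans_1d(gray.ravel().astype(np.float64))
    labels = u.argmax(axis=1).reshape(gray.shape)        # hard assignment per pixel
    text_label = np.argmin(np.bincount(labels.ravel()))  # heuristic: text is the sparser cluster
    edges = np.hypot(ndimage.prewitt(gray.astype(np.float64), axis=0),
                     ndimage.prewitt(gray.astype(np.float64), axis=1))
    components, n = ndimage.label(labels == text_label)  # connected component analysis
    return components, n, edges
```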