
    Automatic Document Image Binarization using Bayesian Optimization

    Document image binarization is often a challenging task due to various forms of degradation. Although several binarization techniques exist in the literature, the binarized image is typically sensitive to the control parameter settings of the employed technique. This paper presents an automatic document image binarization algorithm to segment the text from heavily degraded document images. The proposed technique uses a two band-pass filtering approach for background noise removal, and Bayesian optimization for automatic hyperparameter selection for optimal results. The effectiveness of the proposed binarization technique is empirically demonstrated on the Document Image Binarization Competition (DIBCO) and the Handwritten Document Image Binarization Competition (H-DIBCO) datasets.
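    The combination described above, band-pass filtering for noise removal followed by Bayesian optimization over the free parameters, might be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the difference-of-Gaussians band-pass, the (sigma_low, sigma_high, offset) parameterization, the F-measure objective against a DIBCO-style ground-truth mask, and the use of scikit-optimize's gp_minimize are all assumptions made for the sake of a runnable example.

```python
# Hedged sketch: tune binarization parameters with Bayesian optimization.
# Assumes SciPy and scikit-optimize are installed and a ground-truth mask
# (as provided by DIBCO/H-DIBCO) is available for scoring.
import numpy as np
from scipy.ndimage import gaussian_filter
from skopt import gp_minimize
from skopt.space import Real

def bandpass_binarize(gray, sigma_low, sigma_high, offset):
    """Difference-of-Gaussians band-pass followed by a global threshold."""
    g = gray.astype(np.float64)
    band = gaussian_filter(g, sigma_low) - gaussian_filter(g, sigma_high)
    return (band < offset).astype(np.uint8)  # 1 = text (dark strokes)

def f_measure(pred, gt):
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)

def tune(gray, gt):
    """Search the parameter space with a Gaussian-process surrogate."""
    def objective(params):
        sigma_low, sigma_high, offset = params
        pred = bandpass_binarize(gray, sigma_low, sigma_high, offset)
        return -f_measure(pred, gt)  # gp_minimize minimizes, so negate

    space = [Real(0.5, 3.0, name="sigma_low"),     # illustrative ranges
             Real(5.0, 40.0, name="sigma_high"),
             Real(-30.0, 30.0, name="offset")]
    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    return result.x  # best (sigma_low, sigma_high, offset) found
```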

    Image Enhancement with Statistical Estimation

    Contrast enhancement is an important area of research in image analysis, and over the past decade researchers have worked on this domain to develop efficient and adequate algorithms. The proposed method enhances image contrast using a binarization method supported by Maximum Likelihood Estimation (MLE). The paper aims to enhance the contrast of bimodal and multi-modal images. The proposed methodology collects statistical information retrieved from the image. In this paper, we use a binarization method that generates the desired histogram by separating the image modes; the enhanced image is then produced through histogram specification combined with binarization. The proposed method shows an improvement in image contrast enhancement compared with other approaches. Comment: 9 pages, 6 figures; ISSN: 0975-5578 (Online), 0975-5934 (Print)
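    One way to read this pipeline is: fit a two-component model to the grey-level histogram by maximum likelihood, split the histogram at the resulting decision boundary, and re-map each mode separately. The sketch below is a hedged illustration of that idea for a bimodal image, not the paper's algorithm; the use of scikit-learn's GaussianMixture (EM-based MLE) and the simple per-mode stretch in place of full histogram specification are assumptions.

```python
# Illustrative sketch (not the paper's algorithm): MLE-driven split of a
# bimodal grey-level distribution, followed by a per-mode contrast stretch.
import numpy as np
from sklearn.mixture import GaussianMixture

def mle_binarize_and_stretch(gray):
    """gray: uint8 image assumed to contain both a dark and a bright mode."""
    pixels = gray.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)

    # Threshold: grey level where the predicted component label first changes.
    levels = np.arange(256).reshape(-1, 1)
    labels = gmm.predict(levels).ravel()
    threshold = int(np.argmax(labels != labels[0]))

    # Stretch each mode to its own output range (a crude stand-in for
    # histogram specification).
    out = np.empty_like(gray, dtype=np.float64)
    dark = gray <= threshold
    out[dark] = 127.0 * (gray[dark] - gray[dark].min()) / max(threshold - int(gray[dark].min()), 1)
    out[~dark] = 128.0 + 127.0 * (gray[~dark] - threshold) / max(int(gray[~dark].max()) - threshold, 1)
    return np.clip(out, 0, 255).astype(np.uint8), threshold
```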

    Recognizing Degraded Handwritten Characters

    In this paper, Slavonic manuscripts from the 11th century written in Glagolitic script are investigated. State-of-the-art optical character recognition methods produce poor results for degraded handwritten document images, largely due to unsuitable results from basic pre-processing steps such as binarization and image segmentation. Therefore, a new, binarization-free approach is presented that is independent of pre-processing deficiencies. It additionally incorporates local information in order to also recognize fragmented or faded characters. The proposed algorithm consists of two steps: character classification and character localization. First, scale-invariant feature transform features are extracted and classified using support vector machines. On this basis, interest points are clustered according to their spatial information. Characters are then localized and eventually recognized by a weighted voting scheme over the pre-classified local descriptors. Preliminary results show that the proposed system can handle highly degraded manuscript images with background noise, e.g. stains, tears, and faded characters.
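    A compressed sketch of the two-step pipeline (descriptor classification, then spatial clustering and weighted voting) is given below. It is illustrative only: OpenCV's SIFT, a pre-trained scikit-learn SVM, DBSCAN for the spatial clustering, and the keypoint response as the voting weight are assumptions about tooling and weighting that the abstract does not prescribe.

```python
# Hedged sketch of a binarization-free recognition step:
# SIFT descriptors -> SVM labels -> spatial clustering -> weighted vote.
import numpy as np
import cv2
from sklearn.cluster import DBSCAN

def recognize_characters(gray, svm, cluster_eps=25.0):
    """svm: classifier already trained on 128-dim SIFT descriptors."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return []

    labels = svm.predict(descriptors)                       # class per descriptor
    points = np.array([kp.pt for kp in keypoints])
    weights = np.array([kp.response for kp in keypoints])   # illustrative vote weight

    groups = DBSCAN(eps=cluster_eps, min_samples=3).fit_predict(points)
    results = []
    for g in set(groups) - {-1}:                            # -1 marks noise points
        member = groups == g
        votes = {}
        for lab, w in zip(labels[member], weights[member]):
            votes[lab] = votes.get(lab, 0.0) + float(w)
        results.append((points[member].mean(axis=0), max(votes, key=votes.get)))
    return results  # list of (centroid, predicted character class)
```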

    Unsupervised ensemble of experts (EoE) framework for automatic binarization of document images

    In recent years, a large number of binarization methods have been developed, with varying generalization performance and strength on different benchmarks. In this work, to leverage these methods, an ensemble of experts (EoE) framework is introduced to efficiently combine the outputs of various methods. The proposed framework offers a new selection process for the binarization methods, which act as the experts in the ensemble, by introducing three concepts: confidentness, endorsement, and schools of experts. The framework, which is highly objective, is built on two general principles: (i) consolidation of saturated opinions and (ii) identification of schools of experts. After building the endorsement graph of the ensemble for an input document image based on the confidentness of the experts, the saturated opinions are consolidated, and the schools of experts are then identified by thresholding the consolidated endorsement graph. A variation of the framework in which no selection is made is also introduced; it combines the outputs of all experts using endorsement-dependent weights. The EoE framework is evaluated on the set of methods that participated in the H-DIBCO'12 contest and on an ensemble generated from various instances of the grid-based Sauvola method, with promising performance. Comment: 6-page version, accepted to be presented in ICDAR'1
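    The final fusion step, combining the experts' outputs with endorsement-dependent weights, reduces to a weighted pixel-wise vote. The sketch below shows only that step; the endorsement-graph construction and the identification of schools of experts are omitted, and the weights are simply assumed to have been produced by that analysis.

```python
# Minimal sketch of the combination step: weighted pixel-wise voting over
# several experts' binarized outputs (1 = text, 0 = background).
import numpy as np

def combine_experts(binarizations, weights):
    """binarizations: list of equally sized 0/1 arrays; weights: one per expert."""
    stack = np.stack([b.astype(np.float64) for b in binarizations])   # (k, H, W)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    score = (stack * w).sum(axis=0) / w.sum()
    return (score >= 0.5).astype(np.uint8)
```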

    Binarization Technique on Historical Documents using Edge Width Detection

    Document images often suffer from different types of degradation that render document image binarization a challenging task. Document image binarization is of great significance in document image analysis and recognition because it affects the subsequent steps of the recognition process. The local image contrast, estimated from the local maximum and minimum, is compared with the image gradient; this makes the method more tolerant to uneven illumination and other types of document degradation such as low contrast and partially visible text. Distinguishing the foreground text from the background in different document images is a difficult task. This paper presents a new document image binarization technique that addresses these issues using adaptive image contrast. The adaptive image contrast is a combination of the local image contrast and the local image gradient, so as to tolerate the text-background variation caused by different types of text degradation. In the binarization technique, an adaptive contrast map is constructed for an input degraded document image; the map is then adaptively binarized and combined with Canny's edge detector to identify the text stroke edge pixels. The document text is further segmented by means of a local threshold. We also apply the self-training adaptive binarization approach on top of existing binarization methods, which improves not only their performance but also their robustness on different kinds of degraded document images. DOI: 10.17762/ijritcc2321-8169.15066
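    The main stages described above (adaptive contrast map, combination with Canny edges, local thresholding) might be sketched as follows. This is an illustrative reconstruction rather than the paper's method: the window size, the Canny thresholds, the weight alpha used to mix contrast and gradient, and the box-filter local threshold are all assumptions.

```python
# Hedged sketch: adaptive contrast map + Canny edges + local threshold.
import numpy as np
import cv2

def binarize_adaptive_contrast(gray, win=15, alpha=0.5):
    g = gray.astype(np.float64)

    # Local contrast from the local maximum/minimum; gradient from Sobel.
    kernel = np.ones((win, win), np.uint8)
    local_max = cv2.dilate(gray, kernel).astype(np.float64)
    local_min = cv2.erode(gray, kernel).astype(np.float64)
    contrast = (local_max - local_min) / (local_max + local_min + 1e-6)
    grad = cv2.magnitude(cv2.Sobel(g, cv2.CV_64F, 1, 0), cv2.Sobel(g, cv2.CV_64F, 0, 1))
    grad /= grad.max() + 1e-6

    adaptive = alpha * contrast + (1.0 - alpha) * grad   # adaptive contrast map
    edges = cv2.Canny(gray, 50, 150) > 0                 # stroke-edge candidates
    stroke = np.logical_and(adaptive > adaptive.mean(), edges)

    # Local threshold: mean intensity of detected stroke-edge pixels in each
    # window, computed with box filters.
    edge_sum = cv2.blur(g * stroke, (win, win)) * win * win
    edge_cnt = cv2.blur(stroke.astype(np.float64), (win, win)) * win * win
    local_thr = edge_sum / np.maximum(edge_cnt, 1.0)
    return np.logical_and(g <= local_thr, edge_cnt > 0).astype(np.uint8)
```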

    Enhancement of Historical Printed Document Images by Combining Total Variation Regularization and Non-Local Means Filtering

    This paper proposes a novel method for document enhancement which combines two recent, powerful noise-reduction steps. The first step is based on the total variation framework: it flattens background grey-levels and produces an intermediate image where background noise is considerably reduced. This image is used as a mask to produce an image with a cleaner background while keeping character details. The second step is applied to the cleaner image and consists of a filter based on non-local means: character edges are smoothed by searching for similar image patches in pixel neighbourhoods. The document images to be enhanced are real historical printed documents from several periods which include several defects in their background and on character edges. These defects result from scanning, paper ageing and bleed-through. The proposed method enhances document images by combining the total variation and non-local means techniques in order to improve OCR recognition. The method is shown to be more powerful than either technique used alone and than other enhancement methods.
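    A minimal sketch of the two-stage enhancement, assuming scikit-image's implementations of total-variation and non-local means denoising, is given below. The TV weight, the crude mean-based background mask, and the non-local means patch parameters are illustrative choices, not the values used in the paper.

```python
# Hedged sketch of the two-stage enhancement: TV regularization to flatten the
# background, then non-local means to smooth character edges.
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_nl_means, estimate_sigma

def enhance_document(gray):
    """gray: float image in [0, 1]."""
    # Step 1: total-variation regularization flattens background grey-levels.
    flat = denoise_tv_chambolle(gray, weight=0.1)
    background = flat > flat.mean()             # crude background mask
    cleaned = np.where(background, flat, gray)  # clean background, keep character detail

    # Step 2: non-local means smooths character edges via similar patches.
    sigma = float(np.mean(estimate_sigma(cleaned)))
    return denoise_nl_means(cleaned, h=0.8 * sigma, patch_size=5,
                            patch_distance=6, fast_mode=True)
```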