
    DeepOtsu: Document Enhancement and Binarization using Iterative Deep Learning

    This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike traditional methods, which predict the binary label of each pixel of the input image, we train the neural network to learn the degradations in document images and to produce uniform versions of the degraded inputs, which allows the network to refine its output iteratively. Two different iterative methods are studied in this paper: recurrent refinement (RR), which uses the same trained neural network in each iteration for document enhancement, and stacked refinement (SR), which uses a stack of different neural networks for iterative output refinement. Given the learned uniform and enhanced image, the binarization map is easy to obtain with a global or local threshold. Experimental results on several public benchmark data sets show that the proposed methods produce a new, clean version of the degraded image that is suitable for visualization, and that binarizing the iteratively enhanced images with the global Otsu threshold yields promising results. Comment: Accepted by Pattern Recognition
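
    The recurrent refinement (RR) scheme described above can be sketched in a few lines: a trained enhancement network is applied repeatedly to its own output, and the final enhanced image is binarized with a global Otsu threshold. This is not the authors' model; `toy_enhancer` below is a hypothetical stand-in for the trained network, and only the control flow mirrors the RR idea.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Global Otsu threshold for a uint8 grayscale image."""
    prob = np.bincount(img.ravel(), minlength=256) / img.size
    total_mean = np.dot(np.arange(256), prob)
    best_t, best_var, cum_p, cum_mean = 0, 0.0, 0.0, 0.0
    for t in range(256):
        cum_p += prob[t]
        cum_mean += t * prob[t]
        if cum_p < 1e-12 or 1.0 - cum_p < 1e-12:
            continue  # all mass on one side: between-class variance undefined
        between = (total_mean * cum_p - cum_mean) ** 2 / (cum_p * (1.0 - cum_p))
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def recurrent_refine(img, enhance_net, iterations=3):
    """RR scheme: feed the network's own output back as its next input."""
    out = img.astype(np.float32) / 255.0
    for _ in range(iterations):
        out = enhance_net(out)              # hypothetical trained enhancer
    return (out * 255).clip(0, 255).astype(np.uint8)

def toy_enhancer(x):
    """Stand-in for the trained network: a 3x3 box-filter smoother."""
    pad = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

degraded = (np.random.rand(64, 64) * 255).astype(np.uint8)
enhanced = recurrent_refine(degraded, toy_enhancer)
binary = (enhanced > otsu_threshold(enhanced)).astype(np.uint8)
```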

    Image Enhancement with Statistical Estimation

    Contrast enhancement is an important area of research in image analysis. Over the past decade, researchers have worked in this domain to develop efficient and adequate algorithms. The proposed method enhances image contrast using a binarization method aided by Maximum Likelihood Estimation (MLE), and the paper aims to enhance the contrast of bimodal and multi-modal images. The methodology collects statistical information retrieved from the image: a binarization method generates the desired histogram by separating the image's modes, and the enhanced image is then produced by histogram specification combined with that binarization method. The proposed method shows an improvement in contrast enhancement compared with other methods. Comment: 9 pages, 6 figures; ISSN: 0975-5578 (Online); 0975-5934 (Print)
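
    The abstract leaves the estimation step underspecified. One standard way to realize MLE-based separation of a bimodal histogram, sketched below under that assumption, is to fit a two-component Gaussian mixture by EM and split pixels at the posterior decision boundary; scikit-learn's GaussianMixture stands in for the paper's estimator.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mle_binarize(img: np.ndarray) -> np.ndarray:
    """Fit a two-component Gaussian mixture to the pixel intensities
    (maximum likelihood via EM) and split at the posterior boundary."""
    pixels = img.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    labels = gmm.predict(pixels).reshape(img.shape)
    # Make the brighter component the foreground "1" class.
    bright = int(np.argmax(gmm.means_.ravel()))
    return (labels == bright).astype(np.uint8)

img = (np.random.rand(32, 32) * 255).astype(np.uint8)
mask = mle_binarize(img)
```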

    Historical Document Enhancement Using LUT Classification

    The fast evolution of scanning and computing technologies in recent years has led to the creation of large collections of scanned historical documents. It is almost always the case that these scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to learn local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system, we have labeled a subset of the Frieder diaries collection (The diaries of Rabbi Dr. Avraham Abba Frieder. http://ir.iit.edu/collections/). This labeled subset was then used to train classifiers based on lookup tables in conjunction with the approximated nearest neighbor algorithm. The resulting algorithm is highly efficient and effective. Experimental evaluation results are provided using the Frieder diaries collection (The diaries of Rabbi Dr. Avraham Abba Frieder. http://ir.iit.edu/collections/). © Springer-Verlag 2009
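
    A minimal sketch of the lookup-table idea under stated assumptions: encode each pixel's 3x3 binary neighborhood as a 9-bit key, learn a majority-vote label per key from a labeled (degraded, clean) pair, and look keys up at enhancement time. The fallback for unseen patterns is a crude simplification of the paper's approximated nearest neighbor step, and all names here are hypothetical.

```python
import numpy as np

def patch_keys(img_bin: np.ndarray) -> np.ndarray:
    """Encode each pixel's 3x3 binary neighborhood as a 9-bit integer key."""
    pad = np.pad(img_bin, 1, mode="edge")
    h, w = img_bin.shape
    keys = np.zeros((h, w), dtype=np.int32)
    bit = 0
    for di in range(3):
        for dj in range(3):
            keys |= pad[di:di + h, dj:dj + w].astype(np.int32) << bit
            bit += 1
    return keys

def train_lut(degraded_bin, clean_bin):
    """Majority-vote clean label per neighborhood pattern (the lookup table)."""
    keys = patch_keys(degraded_bin).ravel()
    ones = np.bincount(keys, weights=clean_bin.ravel(), minlength=512)
    total = np.bincount(keys, minlength=512)
    lut = np.full(512, -1, dtype=np.int8)       # -1 = pattern never seen
    seen = total > 0
    lut[seen] = (ones[seen] * 2 >= total[seen]).astype(np.int8)
    return lut

def enhance(degraded_bin, lut):
    out = lut[patch_keys(degraded_bin)]
    # Unseen patterns keep the pixel's own value: a crude stand-in for the
    # paper's approximate-nearest-neighbor lookup.
    return np.where(out < 0, degraded_bin, out).astype(np.uint8)

# Hypothetical usage with a synthetic (degraded, clean) training pair.
rng = np.random.default_rng(0)
clean = (rng.random((64, 64)) > 0.5).astype(np.uint8)
degraded = clean ^ (rng.random((64, 64)) > 0.9)   # ~10% flipped pixels
restored = enhance(degraded, train_lut(degraded, clean))
```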

    Medical image enhancement using threshold decomposition driven adaptive morphological filter

    One of the most common degradations in medical images is their poor contrast quality. This suggests the use of contrast enhancement methods as an attempt to modify the intensity distribution of the image. In this paper, a new edge-detected morphological filter is proposed to sharpen digital medical images. This is done by detecting the positions of the edges and then applying a class of morphological filtering. Motivated by the success of threshold decomposition, gradient-based operators are used to detect the locations of the edges, and a morphological filter is used to sharpen these detected edges. Experimental results demonstrate that the detected-edge deblurring filter improves the visibility and perceptibility of various embedded structures in digital medical images. Moreover, the performance of the proposed filter is superior to that of other sharpener-type filters.
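
    One plausible reading of this filter, sketched below under assumptions rather than taken from the paper: detect edges with a gradient (Sobel) magnitude, then apply a morphological toggle at edge pixels, snapping each to whichever of its local dilation or erosion it is closer to. The edge threshold and structuring element are hypothetical choices.

```python
import numpy as np
from scipy import ndimage

def morphological_sharpen(img: np.ndarray, edge_thresh: float = 20.0):
    """Sharpen only at detected edges with a morphological toggle:
    each edge pixel snaps to its nearer local max or min."""
    f = img.astype(np.float32)
    # Gradient-based edge detection (Sobel magnitude).
    gx = ndimage.sobel(f, axis=1)
    gy = ndimage.sobel(f, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    dil = ndimage.grey_dilation(f, size=(3, 3))   # local max
    ero = ndimage.grey_erosion(f, size=(3, 3))    # local min
    toggled = np.where(dil - f < f - ero, dil, ero)
    out = np.where(edges, toggled, f)
    return out.clip(0, 255).astype(np.uint8)
```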

    Multi-State Image Restoration by Transmission of Bit-Decomposed Data

    We report on the restoration of a gray-scale image when it is decomposed into binary form before transmission. We assume that a gray-scale image expressed by a set of Q-Ising spins is first decomposed into an expression using Ising (binary) spins by means of threshold division: we produce (Q-1) binary Ising spins from a Q-Ising spin by the function $F(\sigma_i - m) = 1$ if the input datum $\sigma_i \in \{0, \ldots, Q-1\}$ satisfies $\sigma_i \geq m$, and $0$ otherwise, where $m \in \{1, \ldots, Q-1\}$ is the threshold value. The effects of noise differ from the case where the raw Q-Ising values are sent, and we investigate which is more effective: transmitting the binary data or sending the raw Q-Ising values. Using the mean-field model, we first analyze the performance of our method quantitatively, and then obtain the static and dynamical properties of restoration using the bit-decomposed data. In order to investigate what kind of original picture is efficiently restored by our method, a standard two-dimensional image is simulated by mean-field annealing, and we compare the performance of our method with that using the Q-Ising form. We show that our method is more efficient than the one using the Q-Ising form when the original picture has large regions in which nearest-neighboring pixels take close values. Comment: LaTeX, 24 pages using REVTeX, 10 figures, 4 tables
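
    The threshold decomposition itself is fully specified by the abstract and is easy to express directly: plane m of a Q-level image is 1 wherever $\sigma_i \geq m$. A short sketch follows; the reconstruction-by-summing check is our addition, not a claim about the paper's transmission protocol.

```python
import numpy as np

def bit_decompose(sigma: np.ndarray, Q: int) -> np.ndarray:
    """Decompose Q-level spins into (Q-1) binary planes:
    plane m is F(sigma - m) = 1 where sigma >= m, else 0 (m = 1..Q-1)."""
    thresholds = np.arange(1, Q).reshape(-1, 1, 1)
    return (sigma[None, :, :] >= thresholds).astype(np.uint8)

def recompose(planes: np.ndarray) -> np.ndarray:
    """Summing the binary planes recovers the original Q-Ising value."""
    return planes.sum(axis=0)

sigma = np.random.randint(0, 8, size=(4, 4))    # Q = 8 gray levels
planes = bit_decompose(sigma, Q=8)              # shape (7, 4, 4)
assert np.array_equal(recompose(planes), sigma)
```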

    Recognizing Degraded Handwritten Characters

    In this paper, Slavonic manuscripts from the 11th century written in Glagolitic script are investigated. State-of-the-art optical character recognition methods produce poor results for degraded handwritten document images, largely due to a lack of suitable results from basic pre-processing steps such as binarization and image segmentation. Therefore, a new, binarization-free approach is presented that is independent of pre-processing deficiencies and additionally incorporates local information in order to recognize even fragmented or faded characters. The proposed algorithm consists of two steps: character classification and character localization. First, scale-invariant feature transform (SIFT) features are extracted and classified using support vector machines (SVMs). On this basis, interest points are clustered according to their spatial information. Characters are then localized and eventually recognized by a weighted voting scheme over the pre-classified local descriptors. Preliminary results show that the proposed system can handle highly degraded manuscript images with background noise, e.g. stains, tears, and faded characters.
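
    The two-step pipeline can be sketched with off-the-shelf components: OpenCV SIFT descriptors, an SVM classifier, spatial clustering of interest points, and a confidence-weighted vote per cluster. DBSCAN stands in here for the unspecified clustering step; all parameters and the synthetic demo data are hypothetical, not the authors' setup.

```python
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import DBSCAN

sift = cv2.SIFT_create()

def descriptors(img):
    """SIFT keypoint locations and descriptors for a grayscale page image."""
    kps, desc = sift.detectAndCompute(img, None)
    pts = np.array([kp.pt for kp in kps], dtype=np.float32)
    return pts, desc

def train_classifier(train_desc, train_labels):
    """SVM over descriptors from labeled character crops (labels assumed given)."""
    return SVC(kernel="rbf", probability=True).fit(train_desc, train_labels)

def recognize(img, clf, cluster_eps=15.0):
    """Classify each descriptor, cluster keypoints spatially, then let each
    cluster vote (weighted by classifier confidence) on a character."""
    pts, desc = descriptors(img)
    if desc is None:
        return []
    proba = clf.predict_proba(desc)
    cluster_ids = DBSCAN(eps=cluster_eps, min_samples=3).fit_predict(pts)
    results = []
    for cid in set(cluster_ids) - {-1}:          # -1 = DBSCAN noise
        members = cluster_ids == cid
        votes = proba[members].sum(axis=0)        # confidence-weighted vote
        results.append((pts[members].mean(axis=0), clf.classes_[votes.argmax()]))
    return results                                # [(center_xy, label), ...]

# Hypothetical demo with synthetic data (real use needs labeled crops).
rng = np.random.default_rng(0)
clf = train_classifier(rng.random((40, 128)), np.repeat([0, 1], 20))
page = (rng.random((128, 128)) * 255).astype(np.uint8)
print(recognize(page, clf))
```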