Three-stage binarization of color document images based on discrete wavelet transform and generative adversarial networks
Efficiently segmenting foreground text from the background of degraded color
document images is an active research topic. Because ancient documents are often
imperfectly preserved over long periods, various types of degradation, including
staining, yellowing, and ink seepage, seriously affect the results of image
binarization. In this paper, a three-stage method is proposed for the
enhancement and binarization of degraded color document images using the
discrete wavelet transform (DWT) and generative adversarial networks (GANs). In
Stage-1, we apply the DWT and retain the LL subband images to achieve image
enhancement. In Stage-2, the original input image is split into four
single-channel images (Red, Green, Blue, and Gray), each of which is used to
train an independent adversarial network. The trained adversarial network
models are used to extract the color foreground information from the images. In
Stage-3, to combine global and local features, the output image from Stage-2
and the original input image are used to train independent adversarial networks
for document binarization. The experimental results demonstrate that our
proposed method outperforms many classical and state-of-the-art (SOTA) methods
on the Document Image Binarization Contest (DIBCO) datasets. We release our
implementation code at
https://github.com/abcpp12383/ThreeStageBinarization
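The Stage-1 idea above (keep the LL subband, discard the detail subbands) can be sketched in a few lines of NumPy. This is an illustration, not the authors' code; the Haar wavelet is an assumption (the abstract does not name the wavelet). For Haar, reconstructing from LL alone amounts to replacing each 2x2 block by its mean:

```python
import numpy as np

def haar_ll_enhance(img):
    """One-level Haar DWT: keep only the LL subband, zero the detail
    subbands, and inverse-transform.  For the Haar wavelet this is
    equivalent to replacing each 2x2 pixel block by its mean, which
    suppresses high-frequency degradation noise."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
    x = img[:h, :w].astype(np.float64)
    # LL subband (up to a scale factor) is the 2x2 block average.
    ll = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
    # Inverse transform with zeroed details: upsample LL back to full size.
    return np.repeat(np.repeat(ll, 2, axis=0), 2, axis=1)
```

In practice one would run this per channel before feeding the result to the Stage-2 networks.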
Persian Heritage Image Binarization Competition (PHIBC 2012)
The first competition on the binarization of historical Persian documents and
manuscripts (PHIBC 2012) was organized in conjunction with the first Iranian
conference on pattern recognition and image analysis (PRIA 2013). The main
objective of PHIBC 2012 is to evaluate the performance of binarization
methodologies when applied to Persian heritage images. This paper reports on
the methodology and performance of the three submitted algorithms, based on the
evaluation measures used. Comment: 4 pages, 2 figures, conference
DeepOtsu: Document Enhancement and Binarization using Iterative Deep Learning
This paper presents a novel iterative deep learning framework and applies it
to document enhancement and binarization. Unlike traditional methods, which
predict the binary label of each pixel of the input image, we train a neural
network to learn the degradations in document images and to produce uniform
versions of the degraded inputs, which allows the network to refine its output
iteratively. Two different iterative methods are studied in this paper:
recurrent refinement (RR), which uses the same trained neural network in each
iteration for document enhancement, and stacked refinement (SR), which uses a
stack of different neural networks for iterative output refinement. Given the
learned uniform, enhanced image, the binarization map is easy to obtain with a
global or local threshold. Experimental results on several public benchmark
data sets show that our proposed methods provide a new, clean version of the
degraded image that is suitable for visualization, as well as promising
binarization results using the global Otsu threshold on the enhanced images
learned iteratively by the neural network. Comment: Accepted by Pattern Recognition
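The final step above, global Otsu thresholding of the enhanced image, can be sketched from scratch. This is an illustrative implementation under the usual 8-bit-intensity assumption, not the paper's code:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu: pick the intensity threshold that maximizes the
    between-class variance of the resulting two pixel classes.
    Assumes intensities in [0, 255]."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))      # cumulative intensity mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def binarize(enhanced):
    """Binarization map from the enhanced image: 1 = background, 0 = ink."""
    t = otsu_threshold(enhanced)
    return (enhanced > t).astype(np.uint8)
```

On a well-enhanced (near-uniform) image the intensity histogram is close to bimodal, which is exactly the regime where a single global threshold like this works well.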
A Multiple-Expert Binarization Framework for Multispectral Images
In this work, a multiple-expert binarization framework for multispectral
images is proposed. The framework is based on a constrained subspace selection
limited to the spectral bands, combined with state-of-the-art gray-level
binarization methods, and uses a binarization wrapper to enhance the
performance of the gray-level binarization. Nonlinear preprocessing of the
individual spectral bands is used to enhance the textual information. An
evolutionary optimizer is used to obtain the optimal and some suboptimal
3-band subspaces, from which an ensemble of experts is then formed. The
framework is applied to a ground-truth multispectral dataset with promising
results. In addition, a generalization of the cross-validation approach is
developed that not only evaluates the generalizability of the framework but
also provides a practical instance of the selected experts that could then be
applied to unseen inputs, despite the small size of the given ground-truth
dataset. Comment: 12 pages, 8 figures, 6 tables. Presented at ICDAR'1
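The 3-band subspace selection above can be illustrated with a toy sketch. For the handful of bands here, exhaustive search stands in for the paper's evolutionary optimizer, and the scoring function (accuracy of a global-threshold binarization on the band average) is a hypothetical stand-in for the paper's evaluation:

```python
import numpy as np
from itertools import combinations

def score_band_subset(cube, bands, labels):
    # Hypothetical scoring stand-in: accuracy of a simple global-threshold
    # binarization computed on the mean of the chosen spectral bands.
    gray = cube[..., list(bands)].mean(axis=-1)
    pred = (gray > gray.mean()).astype(np.uint8)
    return (pred == labels).mean()

def select_3band_experts(cube, labels, k=3):
    """Rank all 3-band subspaces of a (H, W, B) cube and keep the top k
    as the ensemble of experts.  Exhaustive search is feasible only for
    small B; the paper uses an evolutionary optimizer instead."""
    n_bands = cube.shape[-1]
    scored = sorted(((score_band_subset(cube, b, labels), b)
                     for b in combinations(range(n_bands), 3)),
                    reverse=True)
    return [b for _, b in scored[:k]]  # k best 3-band subspaces
```

The returned subspaces would each feed a gray-level binarizer, whose outputs are then combined (e.g. by voting) into the final map.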
Automatic Palaeographic Exploration of Genizah Manuscripts
The Cairo Genizah is a collection of hand-written documents containing approximately
350,000 fragments of mainly Jewish texts discovered in the late 19th
century. The
fragments are today spread out in some 75 libraries and private collections worldwide,
but there is an ongoing effort to document and catalogue all extant fragments.
Palaeographic information plays a key role in the study of the Genizah collection.
Script style and, more specifically, handwriting can be used to identify fragments that
might originate from the same original work. Such matched fragments, commonly
referred to as “joins”, are currently identified manually by experts, and presumably only
a small fraction of existing joins have been discovered to date. In this work, we show
that automatic handwriting matching functions, obtained from non-specific features
using a corpus of writing samples, can perform this task quite reliably. In addition, we
explore the problem of grouping various Genizah documents by script style, without
being provided any prior information about the relevant styles. The automatically
obtained grouping agrees, for the most part, with the palaeographic taxonomy. In cases
where the method fails, it is due to apparent similarities between related scripts
- …
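The join-detection idea described above, matching fragments by similarity of handwriting descriptors, can be sketched generically. All names here are hypothetical, and cosine similarity over per-fragment feature vectors is an assumed stand-in for the paper's learned matching function:

```python
import numpy as np

def find_joins(features, names, threshold=0.9):
    """Candidate "joins": pairs of fragments whose handwriting feature
    vectors have cosine similarity at or above the threshold."""
    f = np.array(features, dtype=np.float64)
    f /= np.linalg.norm(f, axis=1, keepdims=True)  # unit-normalize rows
    sim = f @ f.T                                  # pairwise cosine similarity
    return [(names[i], names[j], sim[i, j])
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if sim[i, j] >= threshold]
```

In a real pipeline the feature vectors would come from the writing-sample corpus mentioned above, and the threshold would be tuned so that flagged pairs are worth an expert's manual review.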