Persian Heritage Image Binarization Competition (PHIBC 2012)
The first competition on the binarization of historical Persian documents and
manuscripts (PHIBC 2012) was organized in conjunction with the first Iranian
Conference on Pattern Recognition and Image Analysis (PRIA 2013). The
main objective of PHIBC 2012 is to evaluate the performance of binarization
methodologies when applied to Persian heritage images. This paper provides
a report on the methodology and performance of the three submitted algorithms
based on the evaluation measures used. Comment: 4 pages, 2 figures, conference
A Multiple-Expert Binarization Framework for Multispectral Images
In this work, a multiple-expert binarization framework for multispectral
images is proposed. The framework is based on a constrained subspace selection
limited to the spectral bands combined with state-of-the-art gray-level
binarization methods. The framework uses a binarization wrapper to enhance the
performance of the gray-level binarization. Nonlinear preprocessing of the
individual spectral bands is used to enhance the textual information. An
evolutionary optimizer is considered to obtain the optimal and some suboptimal
3-band subspaces from which an ensemble of experts is then formed. The
framework is applied to a ground truth multispectral dataset with promising
results. In addition, a generalization to the cross-validation approach is
developed that not only evaluates the generalizability of the framework but also
provides a practical instance of the selected experts that could be then
applied to unseen inputs despite the small size of the given ground truth
dataset. Comment: 12 pages, 8 figures, 6 tables. Presented at ICDAR'1
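The band-subspace selection and expert ensemble described above can be sketched in a few lines. In this illustrative sketch (not the authors' implementation), an exhaustive search over 3-band combinations stands in for the evolutionary optimizer, a mean projection stands in for the subspace combination step, and plain Otsu thresholding stands in for the state-of-the-art gray-level binarizers:

```python
import itertools
import numpy as np

def otsu_threshold(img):
    """Classical Otsu: pick the gray level maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    w = np.cumsum(hist).astype(float)        # cumulative class-0 weight
    m = np.cumsum(hist * np.arange(256.0))   # cumulative class-0 intensity mass
    w0, w1 = w[:-1], w[-1] - w[:-1]
    valid = (w0 > 0) & (w1 > 0)
    mu0 = m[:-1][valid] / w0[valid]
    mu1 = (m[-1] - m[:-1][valid]) / w1[valid]
    between = w0[valid] * w1[valid] * (mu0 - mu1) ** 2
    return np.arange(255)[valid][np.argmax(between)]

def binarize(gray):
    """Foreground = 1 for pixels above the Otsu threshold."""
    return (gray > otsu_threshold(gray)).astype(np.uint8)

def f_measure(pred, gt):
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    return 2.0 * tp / (2.0 * tp + fp + fn + 1e-9)

def select_experts(bands, gt, k=3, n_experts=3):
    """Score every k-band subspace by binarizing its mean projection
    against the ground truth; keep the best few as the expert ensemble."""
    scored = sorted(
        (f_measure(binarize(bands[list(c)].mean(axis=0)), gt), c)
        for c in itertools.combinations(range(bands.shape[0]), k)
    )
    return [c for _, c in scored[-n_experts:]]

def ensemble_binarize(bands, experts):
    """Majority vote over the selected experts' binarizations."""
    votes = sum(binarize(bands[list(c)].mean(axis=0)) for c in experts)
    return (2 * votes > len(experts)).astype(np.uint8)
```

On a synthetic multispectral stack where only some bands carry the text signal, the selected subspaces concentrate on the informative bands and the majority vote recovers the ground truth.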
A generalization of Otsu method for linear separation of two unbalanced classes in document image binarization
The classical Otsu method is a common tool in document image binarization. Often the two classes, text and background, are imbalanced, which means that the assumption of the classical Otsu method is not met. In this work, we considered imbalanced pixel classes of background and text: the weights of the two classes differ, but the variances are the same. We experimentally demonstrated that employing a criterion that takes the imbalance of the class weights into account allows attaining higher binarization accuracy. We described a generalization of the criterion to a two-parametric model, for which an algorithm for optimal linear separation search via fast linear clustering was proposed. We also demonstrated that the two-parametric model with the proposed separation increases binarization accuracy for documents with a complex background or spots. We are grateful for the insightful comments offered by D.P. Nikolaev. This research was partially supported by the Russian Foundation for Basic Research, grants No. 19-29-09066 and No. 18-07-01387.
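To make the model concrete: for two Gaussian classes with a shared variance but unequal weights, a minimum-error-style criterion can be minimized instead of Otsu's between-class variance. The sketch below uses J(t) = ½·ln(pooled within-class variance) − [w₀·ln w₀ + w₁·ln w₁]; this is an illustrative stand-in in the spirit of the paper, not its exact criterion:

```python
import numpy as np

def imbalance_aware_threshold(img):
    """Threshold for two classes with different weights but a shared
    variance: minimize J(t) = 0.5*ln(pooled within-class variance)
    - [w0*ln(w0) + w1*ln(w1)] over all candidate thresholds t.
    Illustrative criterion only; the paper's exact one may differ."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    x = np.arange(256.0)
    best_t, best_j = 0, np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 < 1e-9 or w1 < 1e-9:
            continue  # degenerate split: one class is empty
        mu0 = (p[:t] * x[:t]).sum() / w0
        mu1 = (p[t:] * x[t:]).sum() / w1
        pooled = ((p[:t] * (x[:t] - mu0) ** 2).sum()
                  + (p[t:] * (x[t:] - mu1) ** 2).sum())
        j = 0.5 * np.log(pooled + 1e-12) - (w0 * np.log(w0) + w1 * np.log(w1))
        if j < best_j:
            best_j, best_t = j, t
    return best_t  # pixels below best_t form class 0 (e.g. dark text)
```

Unlike the classical criterion, the w·ln w term keeps a heavily imbalanced split (e.g. 5% text, 95% background) from being penalized merely for its small foreground weight.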
A selectional auto-encoder approach for document image binarization
Binarization plays a key role in automatic information retrieval from document images. This process is usually performed in the first stages of document analysis systems and serves as a basis for subsequent steps, so it has to be robust for the full analysis workflow to succeed. Several methods for document image binarization have been proposed so far, most of which are based on hand-crafted image processing strategies. Recently, Convolutional Neural Networks have shown remarkable performance in many disparate tasks related to computer vision. In this paper we discuss the use of convolutional auto-encoders devoted to learning an end-to-end map from an input image to its selectional output, in which activations indicate the likelihood of each pixel being either foreground or background. Once trained, documents can therefore be binarized by passing them through the model and applying a global threshold. This approach has proven to outperform existing binarization strategies on a number of document types. This work was partially supported by the Social Sciences and Humanities Research Council of Canada, the Spanish Ministerio de Ciencia, Innovación y Universidades through the Juan de la Cierva - Formación grant (Ref. FJCI-2016-27873), and the Universidad de Alicante through grant GRE-16-04.
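The inference step (pass the document through the trained model, then apply one global threshold to the per-pixel activations) can be sketched as follows. The patch size, edge-padding scheme, and 0.5 threshold are illustrative assumptions, and `model` is any callable mapping a gray patch to activations in [0, 1]:

```python
import numpy as np

def binarize_document(img, model, patch=256, threshold=0.5):
    """Run a trained selectional auto-encoder over a document in
    non-overlapping patches, then binarize the predicted foreground
    likelihoods with a single global threshold (1 = foreground ink)."""
    h, w = img.shape
    ph = (h + patch - 1) // patch * patch   # pad to a multiple of the
    pw = (w + patch - 1) // patch * patch   # patch size on each axis
    padded = np.pad(img, ((0, ph - h), (0, pw - w)), mode="edge")
    act = np.zeros_like(padded, dtype=float)
    for y in range(0, ph, patch):
        for x in range(0, pw, patch):
            act[y:y+patch, x:x+patch] = model(padded[y:y+patch, x:x+patch])
    return (act[:h, :w] >= threshold).astype(np.uint8)
```

With a trivial stand-in model that marks dark pixels as foreground, the routine reduces exactly to global thresholding, which makes the padding and stitching logic easy to check.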
BiNet: Degraded-Manuscript Binarization in Diverse Document Textures and Layouts using Deep Encoder-Decoder Networks
Handwritten document-image binarization is a semantic segmentation process to
differentiate ink pixels from background pixels. It is one of the essential
steps towards character recognition, writer identification, and script-style
evolution analysis. The binarization task itself is challenging due to the vast
diversity of writing styles, inks, and paper materials. It is even more
difficult for historical manuscripts due to the aging and degradation of the
documents over time. One such collection is the Dead Sea Scrolls (DSS)
image collection, which poses extreme challenges for existing binarization
techniques. This article proposes a new binarization technique for the DSS
images using deep encoder-decoder networks. Although the artificial neural
network proposed here is primarily designed to binarize the DSS images, it can
be trained on different manuscript collections as well. Additionally, the use
of transfer learning makes the network readily usable for a wide range of
handwritten documents, making it a versatile, multi-purpose binarization tool.
Qualitative results and several quantitative comparisons using both historical
manuscripts and datasets from the handwritten document image binarization
competitions (H-DIBCO and DIBCO) demonstrate the robustness and effectiveness
of the system. The best-performing network architecture proposed here is a
variant of the U-Net encoder-decoders. Comment: 26 pages, 15 figures, 11 tables
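At a shape level, the U-Net-style encoder-decoder behind such systems can be sketched as follows. Convolutions and learned weights are elided (a mean stands in for the 1x1 output head), so this only illustrates the data flow of one encoder-decoder level with its skip connection, not BiNet itself:

```python
import numpy as np

def down(x):
    """2x2 max pooling over (C, H, W) feature maps."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def up(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_sketch(img):
    """One U-Net encoder-decoder level: encode by pooling, decode by
    upsampling, and concatenate the encoder features back in (the skip
    connection), ending in a per-pixel ink probability."""
    x = img[None].astype(float)                  # (1, H, W) feature map
    skip = x                                     # saved for the skip path
    bottleneck = down(x)                         # (1, H/2, W/2)
    decoded = up(bottleneck)                     # back to (1, H, W)
    fused = np.concatenate([decoded, skip], 0)   # (2, H, W): skip connection
    logits = fused.mean(axis=0)                  # stand-in for a 1x1 conv head
    return 1.0 / (1.0 + np.exp(-logits))         # sigmoid: ink probability
```

The skip connection is what lets the decoder recover sharp stroke boundaries that pooling would otherwise blur away, which is why U-Net variants do well on pixel-accurate tasks like binarization.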