OCR for TIFF Compressed Document Images Directly in Compressed Domain Using Text Segmentation and Hidden Markov Model
In today's technological era, document images play an important and integral
part in our day-to-day life, and specifically with the surge of Covid-19,
digitally scanned documents have become a key source of communication, thus
avoiding any sort of infection through physical contact. Storage and
transmission of scanned document images is a very memory intensive task, hence
compression techniques are being used to reduce the image size before archival
and transmission. There are two ways to extract information from, or operate
on, compressed images. The first is to decompress the image, operate on it,
and then re-compress it for efficient storage and transmission. The other is
to exploit the characteristics of the underlying compression algorithm and
process the images directly in their compressed form, without decompression
and re-compression. In this
paper, we propose a novel idea of developing an OCR for CCITT (The
International Telegraph and Telephone Consultative Committee) compressed
machine printed TIFF document images directly in the compressed domain. After
segmenting text regions into lines and words, an HMM is applied for
recognition using the three CCITT coding modes: horizontal, vertical, and
pass. Experimental results show that OCR on the pass mode gives promising
results.
Comment: The paper has 14 figures and 1 table.
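As an illustrative sketch (not the paper's decoder), the CCITT Group 4 mode decision underlying the three coding modes can be expressed as a few comparisons between changing elements on the coding and reference lines. The names a1, b1, b2 follow the T.4/T.6 convention: a1 is the next changing element on the coding line, b1 and b2 are changing elements on the reference line.

```python
def ccitt_mode(a1: int, b1: int, b2: int) -> str:
    """Classify the next CCITT Group 4 (T.6) coding step.

    Pass mode: the reference-line run ends before the coding-line run.
    Vertical mode: a1 lies within 3 pixels of b1 (coded as V(0), VR/VL 1-3).
    Horizontal mode: everything else; run lengths are coded explicitly.
    """
    if b2 < a1:
        return "pass"
    if abs(a1 - b1) <= 3:
        return f"vertical({a1 - b1})"
    return "horizontal"

# toy usage
print(ccitt_mode(a1=10, b1=10, b2=15))  # vertical(0)
print(ccitt_mode(a1=20, b1=5, b2=8))    # pass
print(ccitt_mode(a1=30, b1=5, b2=40))   # horizontal
```

Because this decision depends only on positions of changing elements, which the compressed stream encodes directly, it can be evaluated without reconstructing pixels, which is what makes compressed-domain processing attractive.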
A Bottom-Up Procedure for Text Line Segmentation of Latin Script
In this paper we present a bottom-up procedure for segmentation of text lines
written or printed in the Latin script. The proposed method uses a combination
of image morphology, feature extraction, and a Gaussian mixture model to perform
this task. The experimental results show the validity of the procedure.
Comment: Accepted and presented at the IEEE conference "International
Conference on Advances in Computing, Communications and Informatics (ICACCI)
2017".
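A minimal sketch of the Gaussian-mixture step, under two assumptions not stated in the abstract: the number of text lines k is known, and the 1-D observations are the vertical centroids of connected components found after morphology. This is a toy EM fit, not the paper's implementation.

```python
import numpy as np

def fit_gmm_1d(y, k, iters=50):
    """Tiny EM fit of a 1-D Gaussian mixture.

    y: vertical centroids of connected components; k: assumed number of
    text lines. Returns the sorted component means, one per line.
    """
    y = np.asarray(y, dtype=float)
    mu = np.quantile(y, np.linspace(0.1, 0.9, k))  # spread-out deterministic init
    var = np.full(k, y.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each centroid
        d = y[:, None] - mu[None, :]
        logp = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        pi = n / len(y)
        mu = (r * y[:, None]).sum(axis=0) / n
        var = (r * (y[:, None] - mu[None, :])**2).sum(axis=0) / n + 1e-6
    return np.sort(mu)
```

In practice a library implementation (e.g. scikit-learn's GaussianMixture) would be used, possibly with model selection over k; the sketch only shows why mixture means serve as text-line baselines.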
Entropy Computation of Document Images in Run-Length Compressed Domain
Compression of documents, images, audio, and video has traditionally been
practiced to increase the efficiency of data storage and transfer. However, in
order to process or carry out any analytical computations, decompression has
become an unavoidable prerequisite. In this research work, we have attempted
to compute entropy, an important document analytic, directly from
the compressed documents. We use Conventional Entropy Quantifier (CEQ) and
Spatial Entropy Quantifiers (SEQ) for entropy computations [1]. The entropies
obtained are useful in applications like establishing equivalence, word
spotting and document retrieval. Experiments have been performed with all the
data sets of [1], at character, word and line levels taking compressed
documents in run-length compressed domain. The algorithms developed are
computational and space efficient, and results obtained match 100% with the
results reported in [1].
Comment: Published in IEEE Proceedings 2014 Fifth International Conference on
Signals and Image Processing.
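A minimal sketch of the idea of computing entropy without decompression: a plain Shannon entropy over the run-length histogram, operating on the run-length stream itself rather than on reconstructed pixels. This is a generic illustration, not the paper's exact CEQ/SEQ quantifiers from [1].

```python
import math
from collections import Counter

def run_length_entropy(runs):
    """Shannon entropy (bits per run) computed straight from run-length
    data, e.g. the alternating white/black run lengths of a scanned line,
    without reconstructing the pixel image."""
    counts = Counter(runs)
    total = len(runs)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# toy usage: two equally likely run lengths carry 1 bit of information per run
print(run_length_entropy([5, 12, 5, 12]))  # 1.0
```

Because the run-length stream is exactly what the compressed file stores, such a measure is both computation- and space-efficient to evaluate in the compressed domain.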
Character Recognition
Character recognition is one of the pattern recognition technologies most widely used in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.
Information Preserving Processing of Noisy Handwritten Document Images
Many pre-processing techniques that normalize artifacts and clean noise induce anomalies due to discretization of the document image. Important information that could be used at later stages may be lost. A proposed composite-model framework takes into account pre-printed information, user-added data, and digitization characteristics. Its benefits are demonstrated by experiments with statistically significant results. Separating pre-printed ruling lines from user-added handwriting shows how ruling lines impact people's handwriting and how they can be exploited for identifying writers. Ruling line detection based on multi-line linear regression reduces the mean error of counting them from 0.10 to 0.03, 6.70 to 0.06, and 0.13 to 0.02, compared to an HMM-based approach on three standard test datasets, thereby reducing human correction time by 50%, 83%, and 72% on average. On 61 page images from 16 rule-form templates, the precision and recall of form cell recognition are increased by 2.7% and 3.7%, compared to a cross-matrix approach. Compensating for and exploiting ruling lines during feature extraction rather than pre-processing raises the writer identification accuracy from 61.2% to 67.7% on a 61-writer noisy Arabic dataset. Similarly, counteracting page-wise skew by subtracting it or transforming contours in a continuous coordinate system during feature extraction improves the writer identification accuracy. An implementation study of contour-hinge features reveals that utilizing the full probability distribution function matrix improves the writer identification accuracy from 74.9% to 79.5%.
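The multi-line regression idea, fitting all ruling lines of a page jointly so they share one slope while each keeps its own intercept, can be sketched as a single least-squares problem. The function name and the per-point line labels below are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def fit_parallel_lines(points, labels, k):
    """Joint least-squares fit of k parallel ruling lines y = m*x + c_j.

    points: (N, 2) array-like of (x, y) edge samples; labels: index of the
    ruling line each sample belongs to. Returns the shared slope m and the
    k per-line intercepts c_j.
    """
    pts = np.asarray(points, dtype=float)
    lab = np.asarray(labels, dtype=int)
    x, y = pts[:, 0], pts[:, 1]
    # design matrix: one column for the shared slope, one indicator per line
    A = np.zeros((len(x), k + 1))
    A[:, 0] = x
    A[np.arange(len(x)), lab + 1] = 1.0
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1:]
```

Sharing the slope pools evidence from every ruling line on the page, which is what makes the joint fit more robust to noise on any single line than fitting each line independently.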