194 research outputs found

    Image Enhancement with Statistical Estimation

    Full text link
    Contrast enhancement is an important area of research in image analysis. Over the past decade, researchers have worked in this domain to develop efficient and adequate algorithms. The proposed method enhances image contrast using a binarization method with the help of Maximum Likelihood Estimation (MLE). The paper aims to enhance the contrast of bimodal and multi-modal images. The proposed methodology collects statistical information retrieved from the image. In this paper, we use a binarization method that generates the desired histogram by separating the image modes; the enhanced image is then generated by histogram specification combined with the binarization method. The proposed method shows an improvement in contrast enhancement compared with the original image. Comment: 9 pages, 6 figures; ISSN: 0975-5578 (Online); 0975-5934 (Print)
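    As an illustrative sketch only (the abstract does not specify the MLE-based mode separation), the Python snippet below splits a bimodal grayscale image at an Otsu threshold as a stand-in for the MLE step and stretches each mode over its own intensity range; the input file name and threshold choice are assumptions.

    import numpy as np
    from skimage import io, exposure
    from skimage.filters import threshold_otsu

    def enhance_bimodal(gray):
        """gray: 2-D uint8 image. Returns a contrast-enhanced uint8 image."""
        t = threshold_otsu(gray)                      # stand-in for the MLE-based mode split
        out = np.zeros_like(gray, dtype=float)
        low, high = gray <= t, gray > t
        # Stretch each mode over its own portion of the intensity range.
        out[low] = exposure.rescale_intensity(gray[low].astype(float), out_range=(0, int(t)))
        out[high] = exposure.rescale_intensity(gray[high].astype(float), out_range=(int(t) + 1, 255))
        return out.astype(np.uint8)

    if __name__ == "__main__":
        img = (io.imread("input.png", as_gray=True) * 255).astype(np.uint8)  # hypothetical input file
        io.imsave("enhanced.png", enhance_bimodal(img))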

    Automated framework for robust content-based verification of print-scan degraded text documents

    Get PDF
    Fraudulent documents frequently cause severe financial damage and security breaches for civil and government organizations. The rapid advances in technology and the widespread availability of personal computers have not reduced the use of printed documents. While digital documents can be verified by many robust and secure methods such as digital signatures and digital watermarks, verification of printed documents still relies on manual inspection of embedded physical security mechanisms. The objective of this thesis is to propose an efficient automated framework for robust content-based verification of printed documents. The principal issue is to achieve robustness with respect to the degradations and increased levels of noise that occur from multiple cycles of printing and scanning. It is shown that classic OCR systems fail under such conditions; moreover, OCR systems typically rely heavily on high-level linguistic structures to improve recognition rates. However, inferring knowledge about the contents of the document image from a priori statistics is contrary to the nature of document verification. Instead, a system is proposed that utilizes specific knowledge of the document to perform highly accurate content verification based on a Print-Scan degradation model and character shape recognition. Such specific knowledge of the document is a reasonable choice for the verification domain, since the document contents are already known in order to verify them. The system analyses digital multi-font PDF documents to generate a descriptive summary of the document, referred to as the "Document Description Map" (DDM). The DDM is later used for verifying the content of printed and scanned copies of the original documents. The system utilizes 2-D Discrete Cosine Transform based features and an adaptive hierarchical classifier trained with synthetic data generated by a Print-Scan degradation model. The system is tested with varying degrees of Print-Scan Channel corruption on a variety of documents, with corruption produced by repetitive printing and scanning of the test documents. Results show the approach achieves excellent accuracy and robustness despite the high level of noise.
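    As a minimal sketch of the kind of 2-D DCT features mentioned above (the patch size, coefficient count, and normalization are assumptions, not the thesis's exact configuration), one could extract a low-frequency coefficient block per character image:

    import numpy as np
    from scipy.fft import dctn

    def dct_features(patch, size=32, n_coeffs=36):
        """patch: 2-D grayscale character image. Returns a low-frequency DCT feature vector."""
        # Place the character on a fixed-size canvas (placeholder for proper size normalization).
        canvas = np.zeros((size, size), dtype=float)
        h, w = min(patch.shape[0], size), min(patch.shape[1], size)
        canvas[:h, :w] = patch[:h, :w]
        coeffs = dctn(canvas, norm="ortho")           # 2-D Discrete Cosine Transform
        k = int(np.ceil(np.sqrt(n_coeffs)))
        # Keep the top-left (low-frequency) block, which is less affected by high-frequency print-scan noise.
        return coeffs[:k, :k].ravel()[:n_coeffs]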

    Detection of counterfeit coins based on 3D Height-Map Image Analysis

    Get PDF
    Analyzing 3D height-map images leads to the discovery of a new set of features that cannot be extracted, or even seen, in 2D images. To the best of our knowledge, there was no research in the literature analyzing height-map images to detect counterfeit coins or to classify coins. The main goal of this thesis is to propose a new comprehensive method for analyzing 3D height-map images to detect counterfeits of any type of coin regardless of country of origin, language, shape, and quality. We therefore applied a precise 3D scanner to produce coin height-map images, since detecting a counterfeit coin using 2D image processing is nearly impossible in some cases, especially when the coin is damaged, corroded or worn out. In this research, we propose several 3D approaches to model and analyze several large datasets. In our first and second methods, we aimed to solve the degradation problem of shiny coin images caused by the scanning process. To solve this problem, the characters of the coin images were first straightened by a proposed straightening algorithm. The height-map image was then decomposed row-wise into a set of 1-D signals, which were analyzed separately and restored by two different proposed methods. These approaches produced remarkable results. We also proposed a 3D approach to detect and analyze the precipice borders of the coin surface and extract significant features that are unaffected by the degradation problem. To extract the features, we also proposed Binned Borders in Spherical Coordinates (BBSC) to analyze different parts of the precipice borders at different polar and azimuthal angles. We also took advantage of stacked generalization to classify the coins and added a reject option to increase the reliability of the system. The results illustrate that the proposed method outperforms other counterfeit coin detectors. Since deep learning appears in most recent research related to image processing, it is worthwhile to benefit from deep learning approaches in our study as well. In another proposed method of this thesis, we applied deep learning algorithms in two steps to detect counterfeit coins. As Generative Adversarial Networks are used for generating fake images in image processing applications, we proposed a novel method based on this network to augment our fake-coin class and compensate for the lack of fake coins for training the classifier. We also decomposed the coin height-map image into three types of slope: Steep, Moderate, and Gentle. The grayscale height-map image is thereby turned into the proposed SMG height-map channels. We then proposed a hybrid CNN-based deep neural network to train on and classify these new SMG images. The results illustrated that a deep neural network trained with the proposed SMG images outperforms the system trained on grayscale images. In this research, the proposed methods were trained and tested with four types of Danish and two types of Chinese coins, with encouraging results.
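    A minimal sketch of the SMG idea described above, assuming gradient-magnitude thresholds as the slope criterion (the actual slope definition and threshold values used in the thesis are not given here):

    import numpy as np

    def smg_channels(height_map, t_steep=0.6, t_moderate=0.2):
        """height_map: 2-D float array of coin heights. Returns an (H, W, 3) Steep/Moderate/Gentle image."""
        gy, gx = np.gradient(height_map.astype(float))
        slope = np.hypot(gx, gy)                      # local slope magnitude
        slope /= slope.max() + 1e-12                  # normalize to [0, 1]
        steep = (slope >= t_steep).astype(np.float32)
        moderate = ((slope >= t_moderate) & (slope < t_steep)).astype(np.float32)
        gentle = (slope < t_moderate).astype(np.float32)
        return np.stack([steep, moderate, gentle], axis=-1)   # 3-channel input for a CNN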

    Character Recognition

    Get PDF
    Character recognition is one of the most widely used pattern recognition technologies in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction or classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.

    Adaptive Methods for Robust Document Image Understanding

    Get PDF
    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation, and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both a computational-complexity and a threshold-selection point of view, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm, and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
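    As a rough illustration of the skew-detection stage (this is a generic projection-profile search, not the layout-independent method proposed in the thesis; the angle range and step are assumptions):

    import numpy as np
    from scipy.ndimage import rotate

    def estimate_skew(binary, angles=np.arange(-5.0, 5.25, 0.25)):
        """binary: 2-D array with foreground (ink) pixels = 1. Returns the estimated skew angle in degrees."""
        best_angle, best_score = 0.0, -np.inf
        for a in angles:
            rotated = rotate(binary, a, reshape=False, order=0)
            profile = rotated.sum(axis=1)             # row-wise ink counts
            score = np.var(profile)                   # well-aligned text lines give a peaky profile
            if score > best_score:
                best_angle, best_score = a, score
        return best_angle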

    Feature Extraction Methods for Character Recognition

    Get PDF
    Abstract not included.

    Advances in Character Recognition

    Get PDF
    This book presents advances in character recognition and consists of 12 chapters that cover a wide range of topics on different aspects of character recognition. Hopefully, this book will serve as a reference source for academic research, for professionals working in the character recognition field, and for all who are interested in the subject.

    Biometric Systems

    Get PDF
    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, and signature verification, as well as other miscellaneous topics covering biometric management policies, reliability measures, pressure-based typing and signature verification, biochemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.