2 research outputs found

    A New Forged Handwriting Detection Method Based on Fourier Spectral Density and Variation

    Full text link
    Use of handwriting words for person identification, in contrast to biometric features, is gaining importance in forensic applications. As a result, handwriting forgery has become part of such crimes and poses a challenge for researchers. This paper presents a new method for detecting forged handwriting words, exploiting the fact that the width and amplitude of spectral distributions exhibit unique properties for forged handwriting words compared to blurred, noisy and normal handwriting words. The proposed method studies the spectral density and variation of input handwriting images through clustering of high- and low-frequency coefficients. The extracted features, which are invariant to rotation and scaling, are passed to a neural network classifier to separate forged handwriting words from the other types (blurred, noisy and normal handwriting words). Experimental results on our own dataset, which consists of four handwriting word classes, and on two benchmark datasets, namely a caption and scene text classification dataset and a forged IMEI number dataset, show that the proposed method outperforms existing methods in terms of classification rate.
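
    The abstract describes the pipeline only at a high level. Below is a minimal sketch of how spectral density and variation features could be derived by clustering Fourier coefficient magnitudes into two groups; the clustering step, the specific summary statistics, and the spectral_features helper name are assumptions for illustration, not the paper's exact formulation.

        import numpy as np
        from sklearn.cluster import KMeans

        def spectral_features(gray_word_image):
            # 2-D Fourier spectrum of a grayscale handwriting-word image
            spectrum = np.fft.fftshift(np.fft.fft2(gray_word_image))
            magnitude = np.abs(spectrum).ravel()

            # Cluster coefficients into two groups by magnitude, a stand-in
            # for the paper's separation into high- and low-frequency coefficients
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(magnitude.reshape(-1, 1))
            m0, m1 = magnitude[labels == 0], magnitude[labels == 1]
            strong, weak = (m0, m1) if m0.mean() > m1.mean() else (m1, m0)

            # Density (mean) and variation (standard deviation) of each cluster;
            # statistics over magnitudes only, so they do not depend on the
            # absolute position of the word in the image
            return np.array([strong.mean(), strong.std(), weak.mean(), weak.std()])

        # Such feature vectors could then be fed to a small neural network
        # classifier (e.g. sklearn.neural_network.MLPClassifier) to separate
        # forged words from blurred, noisy and normal ones.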

    A new RGB based fusion for forged IMEI number detection in mobile images

    No full text
    As technology advances to make life more comfortable for people, different crimes increase at the same time. One such sensitive crime is creating fake International Mobile Equipment Identity (IMEI) numbers for smart mobile devices. In this paper, we present a new fusion-based method using the R, G and B color components for detecting forged IMEI numbers. To the best of our knowledge, this is the first work on forged IMEI number detection in mobile images. The proposed method first finds the variances of the R, G and B components of an input image to study local changes. The variances are used to derive weights for the respective color components. These weights are convolved with the respective pixel values of the R, G and B components, which results in the fused image. For the fused image, the proposed method extracts features based on sparsity, the number of connected components, and the average intensity values of edge components in the respective R, G and B components, which gives six features. The proposed method then finds the absolute difference between the fused and input images, which gives a feature vector containing six difference values. The proposed method constructs templates from randomly chosen samples, and feature vectors are compared with these templates to detect forged IMEI numbers. Experiments are conducted on our own dataset and on standard datasets to evaluate the proposed method. Furthermore, comparative studies with related existing methods show that the proposed method outperforms them.
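
    A minimal sketch of the fusion step as described above: per-channel variances are turned into weights and the weighted channels are combined, after which simple per-channel difference statistics stand in for the paper's six features. The weight normalization, the use of plain per-pixel weighting in place of convolution, and the helper names fuse_rgb / difference_features are assumptions for illustration.

        import numpy as np

        def fuse_rgb(image):
            # Variance of each color component reflects local intensity changes
            r, g, b = (image[..., c].astype(float) for c in range(3))
            variances = np.array([r.var(), g.var(), b.var()])
            weights = variances / variances.sum()   # assumed normalization
            # Weighted combination of the components gives the fused image
            return weights[0] * r + weights[1] * g + weights[2] * b

        def difference_features(image):
            # Absolute difference between the fused image and each input
            # component, summarized by mean and standard deviation (2 x 3 = six
            # values); stand-ins for the paper's sparsity, connected-component
            # and edge-intensity features
            fused = fuse_rgb(image)
            feats = []
            for c in range(3):
                diff = np.abs(fused - image[..., c].astype(float))
                feats.extend([diff.mean(), diff.std()])
            return np.array(feats)

        # Detection would then compare such feature vectors against templates
        # built from randomly chosen reference samples, e.g. via a distance
        # threshold.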