
    Evaluation of Deep Learning and Conventional Approaches for Image Recaptured Detection in Multimedia Forensics

    An image recaptured from a high-resolution LED screen or a good-quality printer is difficult to distinguish from its original counterpart. The forensic community has paid less attention to this type of forgery than to other image alterations such as splicing, copy-move, removal, or retouching, yet it is important to develop reliable, automatic techniques that distinguish real from recaptured images without prior knowledge. Recapturing can hide image manipulation traces: an attacker can recapture a manipulated image to fool an image forensic system, which makes recaptured-image detection an active research topic for forensic analysts. To the best of our knowledge, no prior work has examined the pros and cons of up-to-date image recapture detection techniques. The main objective of this survey is to succinctly review recent results in image recapture detection and to investigate the limitations of existing approaches and datasets. We also discuss the existing image recapture datasets, their limitations, and the challenges of dataset collection. Finally, we outline open challenges in the existing datasets and several promising directions for future research on recaptured image detection.

    Image statistical frameworks for digital image forensics

    Advances in digital cameras, scanners, printers, image editing tools, smartphones, tablet personal computers, and high-speed networks have made the digital image a conventional medium for visual information. Such a medium can easily be created, duplicated, distributed, or tampered with, which calls for the ability to trace back its authenticity or history. Digital image forensics is an emerging research area that aims to resolve this problem and has grown in popularity over the past decade. Anti-forensics, on the other hand, has emerged over the past few years as a relatively new branch of research aiming to reveal the weaknesses of forensic technology. Together, these two sides of research push digital image forensic technologies to the next level. Three major contributions are presented in this dissertation. First, an effective multi-resolution image statistical framework for passive-blind digital image forensics is presented in the frequency domain. The framework is generated by applying the Markovian rake transform to the image luminance component; the Markovian rake transform applies a Markov process to difference arrays derived from quantized block discrete cosine transform (DCT) 2-D arrays with multiple block sizes. The efficacy and universality of the framework are then evaluated in two major applications of digital image forensics: 1) digital image tampering detection; 2) classification of computer graphics and photographic images. Second, a simple yet effective anti-forensic scheme is proposed, capable of obfuscating double JPEG compression artifacts, which may carry vital information for image forensics, for instance digital image tampering detection. The proposed scheme, the shrink-and-zoom (SAZ) attack, is simply based on image resizing with bilinear interpolation. Its effectiveness has been evaluated against two promising double JPEG compression detection schemes, and the outcome reveals that the proposed scheme is effective, especially when the first quality factor is lower than the second quality factor. Third, an advanced textural image statistical framework in the spatial domain is proposed, utilizing local binary pattern (LBP) schemes to model local image statistics on various kinds of residual images, including higher-order ones. The proposed framework can be implemented in either a single- or multi-resolution setting depending on the application of interest. Its efficacy is evaluated on two forensic applications: 1) steganalysis, with emphasis on HUGO (Highly Undetectable Steganography), an advanced steganographic scheme that embeds hidden data in a content-adaptive manner, locally, into image regions that are difficult to model statistically; 2) image recapture detection (IRD). The outcomes suggest that the proposed framework is effective not only for detecting local changes, in line with the nature of HUGO, but also for detecting global differences, the nature of IRD.
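
The SAZ attack is described above only at a high level; the following is a minimal sketch of a shrink-and-zoom style operation, assuming Pillow for image I/O. The scale factor and re-compression quality are illustrative choices, not values taken from the dissertation.

```python
# Hypothetical sketch of a shrink-and-zoom (SAZ) style resampling step.
from PIL import Image

def shrink_and_zoom(in_path: str, out_path: str, scale: float = 0.9, quality: int = 85) -> None:
    """Shrink an image with bilinear interpolation, zoom it back, and re-save as JPEG."""
    img = Image.open(in_path).convert("RGB")
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
    restored = small.resize((w, h), Image.BILINEAR)
    # Re-compression after resampling disturbs the periodic artifacts that
    # double-JPEG compression detectors rely on.
    restored.save(out_path, "JPEG", quality=quality)

# shrink_and_zoom("double_compressed.jpg", "saz_output.jpg")  # paths are placeholders
```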

    An Investigation into the Application of the Meijering Filter for Document Recapture Detection

    The proliferation of mobile devices allows financial institutions to offer remote customer services, such as remote account opening. Manipulation of identity documents using image processing software is a low-cost, high-risk threat to modern financial systems, exposing these institutions to fraud through crimes related to identity theft. In this paper we describe our exploratory research into the application of biomedical imaging algorithms to the domain of document recapture detection. We perform a statistical analysis to compare different types of recaptured documents and train a support vector machine classifier on the raw histogram data generated using the Meijering filter. The results show that biomedical imaging algorithms such as the Meijering filter have potential as a form of texture analysis that helps identify recaptured documents.
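
As a rough illustration of the pipeline described above, the sketch below applies scikit-image's Meijering filter to a grayscale document image, histograms the filter response, and feeds the raw histogram to a support vector machine. The sigma range, bin count, and SVM settings are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from skimage import io
from skimage.filters import meijering
from sklearn.svm import SVC

def meijering_histogram(path: str, bins: int = 64) -> np.ndarray:
    """Histogram of the Meijering filter response for one document image."""
    img = io.imread(path, as_gray=True)
    response = meijering(img, sigmas=range(1, 5), black_ridges=True)
    # Normalize so histograms from different images are comparable.
    response = (response - response.min()) / (np.ptp(response) + 1e-12)
    hist, _ = np.histogram(response, bins=bins, range=(0.0, 1.0), density=True)
    return hist

# Placeholder training loop: labels 1 = recaptured document, 0 = genuine capture.
# X = np.stack([meijering_histogram(p) for p in train_paths])
# clf = SVC(kernel="rbf").fit(X, train_labels)
# pred = clf.predict(meijering_histogram("query_document.png").reshape(1, -1))
```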

    Directional Sensitivity of Gaze-Collinearity Features in Liveness Detection

    To increase trust in face recognition systems, these systems need to be capable of differentiating between face images captured from a real person and those captured from photographs or similar artifacts presented at the sensor. Methods have been published for face liveness detection that measure the gaze of a user while the user tracks an object on the screen which appears randomly at pre-defined places. In this paper we explore the sensitivity of such a system to different stimulus alignments. The aim is to establish whether such sensitivity exists and, if so, how it may be exploited to improve the design of the stimulus. The results suggest that collecting feature points along the horizontal direction is more effective for liveness detection than along the vertical direction.
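
The abstract does not define the gaze-collinearity feature precisely; the sketch below assumes one plausible reading, namely how well the estimated gaze points recorded for stimuli shown along a single horizontal or vertical line fit one straight line, measured by a singular-value ratio. The function name and this formulation are illustrative, not the paper's feature.

```python
import numpy as np

def collinearity_score(gaze_points: np.ndarray) -> float:
    """gaze_points: (N, 2) estimated gaze coordinates recorded while the stimulus
    moved along one horizontal or vertical line. Returns the ratio of the
    smallest to the largest singular value; values near 0 mean nearly collinear."""
    pts = gaze_points - gaze_points.mean(axis=0)
    _, s, _ = np.linalg.svd(pts, full_matrices=False)
    return float(s[-1] / (s[0] + 1e-12))

# Toy comparison of stimulus directions (placeholder arrays, not data from the paper):
# horizontal = collinearity_score(gaze_for_horizontal_stimuli)
# vertical = collinearity_score(gaze_for_vertical_stimuli)
```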

    CTP-Net: Character Texture Perception Network for Document Image Forgery Localization

    Due to the progress of information technology in recent years, document images have been widely disseminated on social networks. With the help of powerful image editing tools, document images can be forged without leaving visible manipulation traces, which leads to severe consequences if significant information is falsified for malicious use. Document image forensics is therefore worth exploring further. In a document image, the characters carrying specific semantic information are the most vulnerable to tampering, so capturing the forgery traces of characters is key to localizing forged regions in document images. Considering both character and image textures, in this paper we propose a Character Texture Perception Network (CTP-Net) to localize forgeries in document images. Based on optical character recognition, a Character Texture Stream (CTS) is designed to capture features of the text areas that are the essential components of a document image. Meanwhile, texture features of the whole document image are exploited by an Image Texture Stream (ITS). By combining the features extracted from the CTS and the ITS, CTP-Net can reveal more subtle forgery traces in document images. To overcome the challenge posed by the lack of fake document images, we design a data generation strategy used to construct a Fake Chinese Trademark dataset (FCTM). Through a series of experiments, we show that the proposed CTP-Net is able to capture tampering traces in document images, especially in text regions. Experimental results demonstrate that CTP-Net can localize multi-scale forged areas in document images and outperforms state-of-the-art forgery localization methods.
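
The exact CTP-Net architecture is not given in the abstract; the sketch below only illustrates the general two-stream idea in PyTorch, with one stream applied to OCR-located text regions and one to the whole image, fused into a per-pixel localization map. Layer sizes, the masking step, and the fusion rule are assumptions, not the published design.

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TwoStreamLocalizer(nn.Module):
    """Toy two-stream forgery localizer, not the published CTP-Net."""
    def __init__(self):
        super().__init__()
        self.char_stream = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.image_stream = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.head = nn.Sequential(conv_block(128, 64), nn.Conv2d(64, 1, 1))

    def forward(self, image: torch.Tensor, text_mask: torch.Tensor) -> torch.Tensor:
        # text_mask (B, 1, H, W) would come from an OCR stage locating characters.
        cts = self.char_stream(image * text_mask)   # texture of text regions
        its = self.image_stream(image)              # texture of the whole page
        fused = torch.cat([cts, its], dim=1)
        return torch.sigmoid(self.head(fused))      # per-pixel forgery score

# x = torch.rand(1, 3, 256, 256); m = torch.ones(1, 1, 256, 256)
# print(TwoStreamLocalizer()(x, m).shape)  # -> torch.Size([1, 1, 256, 256])
```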

    An image recapture detection algorithm based on learning dictionaries of edge profiles

    With today's digital camera technology, high-quality images can be recaptured from a liquid crystal display (LCD) monitor screen with relative ease. An attacker may choose to recapture a forged image in order to conceal imperfections and increase its apparent authenticity. In this paper, we address the problem of detecting images recaptured from LCD monitors. We provide a comprehensive overview of the traces found in recaptured images, and we argue that aliasing and blurriness are the least scene-dependent features. We then show how aliasing can be eliminated by setting the capture parameters to predetermined values. Driven by this finding, we propose a recapture detection algorithm based on learned edge blurriness. Two sets of dictionaries are trained using the K-singular value decomposition (K-SVD) approach from the line spread profiles of selected edges from single captured and recaptured images. A support vector machine classifier is then built using the dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2,500 high-quality recaptured images. Our results show that the method achieves a correct classification rate that exceeds 99% for recaptured images and 94% for single captured images.
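
As an illustration of the classification stage described above, the sketch below trains one dictionary per class on line spread profiles and feeds the per-image approximation errors, together with the mean edge spread width, to an SVM. K-SVD is not available in scikit-learn, so MiniBatchDictionaryLearning with OMP coding stands in for it; the dictionary size, sparsity level, and feature layout are illustrative assumptions, and the edge-profile extraction itself is not shown.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

def fit_dictionary(profiles: np.ndarray, n_atoms: int = 32) -> MiniBatchDictionaryLearning:
    """profiles: (N, L) line spread profiles extracted from selected edges."""
    return MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=3).fit(profiles)

def approx_error(dico: MiniBatchDictionaryLearning, profiles: np.ndarray) -> float:
    """Mean sparse-coding reconstruction error of the profiles under one dictionary."""
    codes = dico.transform(profiles)
    recon = codes @ dico.components_
    return float(np.mean(np.linalg.norm(profiles - recon, axis=1)))

def image_features(profiles, mean_spread_width, dico_single, dico_recaptured):
    """Per-image feature vector: error under each class dictionary plus spread width."""
    return [approx_error(dico_single, profiles),
            approx_error(dico_recaptured, profiles),
            mean_spread_width]

# clf = SVC(kernel="rbf").fit(train_features, train_labels)  # 0 = single capture, 1 = recaptured
```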