6 research outputs found

    Image splicing detection scheme using adaptive threshold mean ternary pattern descriptor

    The rapid growth of image editing applications has increased the number of image forgery cases, and image forgery is a major challenge for authentic image identification. Images can be readily altered with post-processing effects such as shallow depth-of-field blurring, JPEG compression, homogeneous regions, and noise, and these effects can also be applied to a spliced image to produce a convincing composite. There is therefore a need for an image forgery detection scheme targeting image splicing. In this research, suitable descriptor features for the detection of splicing forgery are defined; these features reduce the impact of shallow depth-of-field blurring, homogeneous areas, and noise attacks, improving accuracy. A technique to detect forgery at the image level of the spliced image was designed and developed. At this level, the technique involves four major steps. First, the colour image is converted into three colour channels, the image is partitioned into overlapping blocks, and each block is partitioned into non-overlapping cells. Next, the Adaptive Threshold Mean Ternary Pattern (ATMTP) descriptor is applied to each cell to produce six ATMTP codes, and finally the tested image is classified. In the next part of the scheme, detecting the forged object within the spliced image involves five major steps. First, the similarity between every pair of neighbouring regions is computed and the two most similar regions are merged, repeating until the whole image becomes a single region. Second, similar regions satisfying a proximity condition (fewer than four pixels between them) are merged, yielding the regions that represent the objects present in the spliced image. Third, random blocks are selected from the edge of the binary image based on the binary mask. Fourth, Gabor filter features are extracted from each block to assess the edges of the segmented image. Finally, a Support Vector Machine (SVM) is used to classify the images. The scheme was evaluated on three standard datasets: the Institute of Automation, Chinese Academy of Sciences (CASIA) TIDE versions 1.0 and 2.0, and the Columbia University dataset. The results show that ATMTP achieved accuracies of 98.95%, 99.03%, and 99.17% respectively on these datasets. The findings of this research therefore demonstrate the scheme's significant contribution to improving image forgery detection. It is recommended that the scheme be further improved in the future by considering geometrical perspective.
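    As an illustration of the descriptor stage, the following is a minimal sketch of a mean-based ternary pattern for a single grayscale cell. The exact ATMTP thresholding rule is defined in the paper; here the threshold is assumed to be a fraction of the cell mean, and the parameter k and the function name are illustrative. Splitting the ternary code into an upper and a lower binary code yields two codes per cell, or six across the three colour channels.

    import numpy as np

    def mean_ternary_pattern(cell, k=0.1):
        # Adaptive, mean-based threshold (assumed form; the actual ATMTP
        # rule may differ).
        t = k * cell.mean()
        c = cell[1:-1, 1:-1]  # centre pixels of the cell
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        upper = np.zeros_like(c, dtype=np.uint8)
        lower = np.zeros_like(c, dtype=np.uint8)
        h, w = cell.shape
        for bit, (dy, dx) in enumerate(offsets):
            n = cell[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbours
            upper |= (n > c + t).astype(np.uint8) << bit    # ternary +1 bits
            lower |= (n < c - t).astype(np.uint8) << bit    # ternary -1 bits
        return upper, lower  # two pattern codes per cell and channel

    Histograms of the upper and lower codes, concatenated across cells, blocks, and channels, would then form the feature vector passed to the classifier.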

    Novel Methods for Forensic Multimedia Data Analysis: Part I

    The increased use of digital media in daily life has created demand for novel multimedia data analysis techniques that can help to use these data for forensic purposes. Processing such data for police investigations and as evidence in a court of law, so that data interpretation is reliable, trustworthy, and efficient in terms of human time and other resources, will greatly speed up investigations and make them more effective. If such data are to be used as evidence in a court of law, techniques that can confirm origin and integrity are necessary. In this chapter, we propose a new concept for multimedia processing techniques covering varied multimedia sources. We describe the background and motivation for our work, explain the overall system architecture, and present the data to be used. After a review of the state of the art for the kinds of multimedia data considered in this work, we describe the methods and techniques we are developing that go beyond the state of the art. The work is continued in Part II of this chapter.

    Image statistical frameworks for digital image forensics

    The advances of digital cameras, scanners, printers, image editing tools, smartphones, tablet personal computers, and high-speed networks have made the digital image a conventional medium for visual information. Creation, duplication, distribution, and tampering of such a medium can be done easily, which calls for the ability to trace back the authenticity or history of the medium. Digital image forensics is an emerging research area that aims to resolve this problem and has grown in popularity over the past decade. On the other side, anti-forensics has emerged over the past few years as a relatively new branch of research aiming to reveal the weaknesses of forensic technology. These two sides of research push digital image forensic technologies to the next level. Three major contributions are presented in this dissertation. First, an effective multi-resolution image statistical framework for passive-blind digital image forensics is presented in the frequency domain. The framework is generated by applying the Markovian rake transform to the image luminance component. The Markovian rake transform is the application of a Markov process to difference arrays derived from quantized block discrete cosine transform (DCT) 2-D arrays with multiple block sizes. The efficacy and universality of the framework are then evaluated in two major applications of digital image forensics: 1) digital image tampering detection; 2) classification of computer graphics and photographic images. Second, a simple yet effective anti-forensic scheme is proposed, capable of obfuscating double JPEG compression artifacts, which may carry vital information for image forensics, for instance in digital image tampering detection. The proposed scheme, the shrink-and-zoom (SAZ) attack, is based simply on image resizing and bilinear interpolation. The effectiveness of SAZ has been evaluated against two promising double JPEG compression detection schemes, and the outcome reveals that the proposed scheme is effective, especially in cases where the first quality factor is lower than the second quality factor. Third, an advanced textural image statistical framework in the spatial domain is proposed, utilizing local binary pattern (LBP) schemes to model local image statistics on various kinds of residual images, including higher-order ones. The proposed framework can be implemented in either a single- or multi-resolution setting, depending on the application of interest. Its efficacy is evaluated on two forensic applications: 1) steganalysis, with emphasis on HUGO (Highly Undetectable Steganography), an advanced steganographic scheme that embeds hidden data in a content-adaptive manner locally into image regions that are difficult to model statistically; 2) image recapture detection (IRD). The outcomes of the evaluations suggest that the proposed framework is effective not only for detecting local changes, in line with the nature of HUGO, but also for detecting global differences, the nature of IRD.
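    To make the SAZ idea concrete, here is a minimal sketch, assuming the attack slightly downscales the image with bilinear interpolation and scales it back to its original size before re-saving as JPEG, which misaligns the original 8x8 block grid; the scale factor, quality setting, and use of Pillow are assumptions, not details from the dissertation.

    from PIL import Image

    def shrink_and_zoom(path_in, path_out, scale=0.9, quality=75):
        img = Image.open(path_in).convert("RGB")
        w, h = img.size
        # Downscale then upscale with bilinear interpolation, breaking the
        # alignment of the first compression's 8x8 JPEG block grid.
        small = img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
        restored = small.resize((w, h), Image.BILINEAR)
        # Re-save; traces of the earlier compression are now obfuscated.
        restored.save(path_out, "JPEG", quality=quality)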

    Single-view recaptured image detection based on physics-based features


    Trustworthy Biometric Verification under Spoofing Attacks: Application to the Face Mode

    The need for automation of the identity recognition process in a vast number of applications has resulted in great advancement of biometric systems in recent years. Yet, many studies indicate that these systems suffer from vulnerabilities to spoofing (presentation) attacks: a weakness that may compromise their usage in many cases. Face verification systems are among the most attractive spoofing targets, due to the easy access to face images of users and the simplicity of manufacturing spoofing attacks. Many counter-measures to spoofing have been proposed in the literature. They are based on different cues used to distinguish between real accesses and spoofing attacks. The task of detecting spoofing attacks is most often treated as a binary classification problem, with real accesses as the positive class and spoofing attacks as the negative class. The main objective of this thesis is to put the problem of anti-spoofing in a wider context, with an accent on its cooperation with a biometric verification system. In such a context, it is important to adopt an integrated perspective on biometric verification and anti-spoofing. In this thesis we identify and address three points where integration of the two systems is of interest. The first integration point is situated at input-level. Here, we are concerned with providing unified information that both the verification and anti-spoofing systems use; this includes the samples used to enroll clients in the system, as well as the identity claims of the client at query time. We design two anti-spoofing schemes, one generative and one discriminative, which we refer to as client-specific, as opposed to the traditional client-independent ones. The second integration point is situated at output-level, where we address the issue of combining the outputs of the biometric verification and anti-spoofing systems in order to achieve an optimal combined decision about an input sample. We adopt a multiple expert fusion approach and investigate several fusion methods, comparing the verification performance and robustness to spoofing of the fused systems. The third integration point is associated with the evaluation process. The integrated perspective implies three types of input to the biometric system: real accesses, zero-effort impostors, and spoofing attacks. We propose an evaluation methodology for biometric verification systems under spoofing attacks, called the Expected Performance and Spoofability (EPS) framework, which accounts for all three types of input and the error rates associated with them. Within this framework, we propose the EPS Curve (EPSC), which enables unbiased comparison of systems. The proposed methods are applied in several case studies for the face mode. Overall, the experimental results prove the integration to be beneficial for creating trustworthy face verification systems. At input-level, the results show the advantage of the client-specific approaches over the client-independent ones; at output-level, they provide a comparison of the fusion methods. The case studies are furthermore used to demonstrate the EPS framework and its potential in the evaluation of biometric verification systems under spoofing attacks. The source code for the full set of methods is available as free software, as a satellite package to the free signal processing and machine learning toolbox Bob. It can be used to reproduce the results of the face mode case studies presented in this thesis, to perform additional analysis and improve the proposed methods, and to design case studies applying the proposed methods to other biometric modes.
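    As a sketch of the output-level integration point, the snippet below fuses a verification score and an anti-spoofing score with logistic regression, one of several possible fusion methods; the score distributions are synthetic and all names are illustrative, not the thesis implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Column 0: verification score, column 1: anti-spoofing (liveness) score.
    genuine = rng.normal([2.0, 2.0], 0.5, size=(200, 2))        # real accesses
    zero_effort = rng.normal([-2.0, 1.5], 0.5, size=(200, 2))   # fail verification
    spoofs = rng.normal([1.8, -2.0], 0.5, size=(200, 2))        # fool verification only

    X = np.vstack([genuine, zero_effort, spoofs])
    y = np.hstack([np.ones(200), np.zeros(400)])  # positives: real accesses

    fuser = LogisticRegression().fit(X, y)   # learned score-level fusion
    fused = fuser.decision_function(X)       # single combined decision score

    Thresholding the fused score then yields a single accept/reject decision that accounts for both zero-effort impostors and spoofing attacks, which is the integrated perspective the evaluation framework above is designed to measure.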