100 research outputs found

    Image counter-forensics based on feature injection

    Starting from the concept that many image forensic tools are based on the detection of some feature revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history as an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain, producing an attacked image x̃, perceptually similar to x, whose feature f(x̃) is as close as possible to f(y) computed on y. The proposed counter-forensic attack consists in the constrained minimization of the feature distance Φ(z) = |f(z) − f(y)| through iterative methods based on gradient descent. To overcome the intrinsic limit imposed by numerically estimating the gradient on large images, we apply a feature decomposition process that reduces the problem to many subproblems on the blocks into which the image is partitioned. The proposed strategy has been tested by attacking three different features, and its performance has been compared to state-of-the-art counter-forensic methods.
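    As a concrete illustration of the attack, the following minimal sketch performs the block-wise constrained minimization with a forward-difference gradient estimate. It assumes f is a user-supplied scalar feature function on pixel blocks; the function names, step size, and stopping rule are illustrative placeholders, not the paper's exact procedure.

```python
import numpy as np

def attack_block(x_block, f, f_target, steps=100, lr=0.5, eps=1e-3):
    """Minimize Phi(z) = |f(z) - f_target| on one block by gradient
    descent, with the gradient estimated by forward differences."""
    z = x_block.astype(np.float64).copy()
    for _ in range(steps):
        phi = abs(f(z) - f_target)
        if phi < 1e-6:
            break
        grad = np.zeros_like(z)
        for idx in np.ndindex(z.shape):     # one feature call per pixel
            z[idx] += eps
            grad[idx] = (abs(f(z) - f_target) - phi) / eps
            z[idx] -= eps
        z = np.clip(z - lr * grad, 0, 255)  # stay in the valid pixel range
    return z

def attack_image(x, y, f, block=32, **kw):
    """Feature decomposition: attack each block of x toward the feature
    value of the corresponding block of y (edge remainders are skipped)."""
    out = x.astype(np.float64).copy()
    for i in range(0, x.shape[0] - block + 1, block):
        for j in range(0, x.shape[1] - block + 1, block):
            tgt = f(y[i:i+block, j:j+block].astype(np.float64))
            out[i:i+block, j:j+block] = attack_block(
                out[i:i+block, j:j+block], f, tgt, **kw)
    return np.round(out).astype(np.uint8)
```

    Estimating the gradient numerically costs one feature evaluation per pixel per step, which is precisely the scaling problem on large images that the feature decomposition sidesteps by working block by block.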

    Digital Multimedia Forensics and Anti-Forensics

    As the use of digital multimedia content such as images and video has increased, so have the means and the incentive to create digital forgeries. Presently, powerful editing software allows forgers to create perceptually convincing digital forgeries. Accordingly, there is a great need for techniques capable of authenticating digital multimedia content. In response, researchers have begun developing digital forensic techniques capable of identifying digital forgeries. These forensic techniques operate by detecting imperceptible traces left by editing operations in digital multimedia content. In this dissertation, we propose several new digital forensic techniques to detect evidence of editing in digital multimedia content. We begin by identifying the fingerprints left by pixel value mappings and show how these can be used to detect the use of contrast enhancement in images. We use these fingerprints to perform a number of additional forensic tasks, such as identifying cut-and-paste forgeries, detecting the addition of noise to previously JPEG-compressed images, and estimating the contrast enhancement mapping used to alter an image. Additionally, we consider the problem of multimedia security from the forger's point of view. We demonstrate that an intelligent forger can design anti-forensic operations to hide editing fingerprints and fool forensic techniques. We propose an anti-forensic technique to remove compression fingerprints from digital images and show that this technique can be used to fool several state-of-the-art forensic algorithms. We examine the problem of detecting frame deletion in digital video and develop both a technique to detect frame deletion and an anti-forensic technique to hide frame deletion fingerprints. We show that this anti-forensic operation leaves behind fingerprints of its own and propose a technique to detect the use of frame deletion anti-forensics. The ability of a forensic investigator to detect both editing and the use of anti-forensics results in a dynamic interplay between the forger and the forensic investigator. We develop a game-theoretic framework to analyze this interplay and identify the set of actions that each party will rationally choose. Additionally, we show that anti-forensics can be used to protect against reverse engineering. To demonstrate this, we propose an anti-forensic module that can be integrated into digital cameras to protect color interpolation methods.
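    To make the pixel-value-mapping idea concrete, here is a toy sketch of one well-known detection cue: contrast enhancement creates peaks and gaps in the gray-level histogram, which appear as excess high-frequency energy in the histogram's DFT. The cutoff and score below are placeholders; the dissertation's actual detectors and thresholds differ.

```python
import numpy as np

def contrast_enhancement_score(img, cutoff=32):
    """Score the high-frequency energy of the gray-level histogram's DFT.
    Pixel value mappings such as gamma correction leave peaks and gaps
    in the histogram, inflating this ratio (placeholder cutoff)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    H = np.abs(np.fft.fft(hist))
    band = H[cutoff:256 - cutoff].sum()   # symmetric high-frequency band
    return band / (H[1:].sum() + 1e-12)   # normalize, skipping the DC term
```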

    Fundamental Limits in Multimedia Forensics and Anti-forensics

    As the use of multimedia editing tools increases, people increasingly question the authenticity of multimedia content. This is an especially serious concern for authorities, such as law enforcement, news reporters, and governments, who constantly use multimedia evidence to make critical decisions. To verify the authenticity of multimedia content, many forensic techniques have been proposed to identify the processing history of the multimedia content in question. However, as new technologies emerge and more complicated scenarios are considered, forensic researchers have gradually come to recognize the limitations of multimedia forensics, and exploring these fundamental limits has become an inevitable step for the field. In this dissertation, we propose several theoretical frameworks to study the fundamental limits in various forensic problems. Specifically, we begin by developing empirical forensic techniques to address the limitations of existing techniques caused by an emerging technology, compressive sensing. Then, we go one step further and explore the fundamental limits of forensic performance. Two types of forensic problems are examined. In operation forensics, we propose an information-theoretic framework and define forensicability as the maximum information that features contain about hypotheses of processing histories. Based on this framework, we determine the maximum number of JPEG compressions one can detect. In order forensics, an information-theoretic criterion is proposed to determine when we can and cannot detect the order of manipulation operations that have been applied to multimedia content. Additionally, we examine the fundamental tradeoffs in multimedia anti-forensics, where attacking techniques are developed by forgers to conceal manipulation fingerprints and confuse forensic investigations. In this field, we define concealability as the effectiveness of anti-forensics in concealing manipulation fingerprints. A tradeoff between concealability, rate, and distortion is then proposed and characterized for compression anti-forensics, which provides valuable insight into how forgers may behave under their best strategies.
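    The forensicability notion can be illustrated with a plug-in estimate of the mutual information between a quantized feature and the processing-history hypothesis, computed from labeled samples. This is a toy sketch of the framework's central quantity, not the dissertation's derivation; the binning scheme and estimator are assumptions.

```python
import numpy as np

def forensicability(features, labels, bins=32):
    """Plug-in estimate of I(F; H) in bits: how much a quantized scalar
    feature F reveals about the processing-history hypothesis H."""
    edges = np.histogram_bin_edges(features, bins=bins)
    f = np.digitize(features, edges)            # quantize the feature
    joint = np.zeros((bins + 2, int(max(labels)) + 1))
    for fi, hi in zip(f, labels):
        joint[fi, hi] += 1                      # empirical joint counts
    p = joint / joint.sum()
    pf = p.sum(axis=1, keepdims=True)           # marginal of F
    ph = p.sum(axis=0, keepdims=True)           # marginal of H
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pf @ ph)[nz])).sum())
```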

    Resiliency Assessment and Enhancement of Intrinsic Fingerprinting

    Intrinsic fingerprinting is a class of digital forensic technology that can detect traces left in digital multimedia data in order to reveal data processing history and determine data integrity. Many existing intrinsic fingerprinting schemes have implicitly assumed favorable operating conditions whose validity may become uncertain in reality. In order to establish intrinsic fingerprinting as a credible approach to digital multimedia authentication, it is important to understand and enhance its resiliency under unfavorable scenarios. This dissertation addresses various resiliency aspects that can appear in a broad range of intrinsic fingerprints. The first aspect concerns intrinsic fingerprints that are designed to identify a particular component in the processing chain. Such fingerprints are potentially subject to changes due to input content variations and/or post-processing, and it is desirable to ensure their identifiability in such situations. Taking an image-based intrinsic fingerprinting technique for source camera model identification as a representative example, our investigations reveal that the fingerprints have a substantial dependency on image content. Such dependency limits the achievable identification accuracy, which is penalized by a mismatch between training and testing image content. To mitigate such a mismatch, we propose schemes to incorporate image content into training image selection and significantly improve the identification performance. We also consider the effect of post-processing on intrinsic fingerprinting, and study source camera identification based on imaging noise extracted from low-bit-rate compressed videos. While such compression reduces the fingerprint quality, we exploit different compression levels within the same video to achieve more efficient and accurate identification. The second aspect of resiliency addresses anti-forensics, namely, adversarial actions that intentionally manipulate intrinsic fingerprints. We investigate the cost-effectiveness of anti-forensic operations that counteract color interpolation identification. Our analysis pinpoints the inherent vulnerabilities of color interpolation identification, and motivates countermeasures and refined anti-forensic strategies. We also study the anti-forensics of an emerging space-time localization technique for digital recordings based on electrical network frequency analysis. Detection schemes against anti-forensic operations are devised under a mathematical framework. For both problems, game-theoretic approaches are employed to characterize the interplay between forensic analysts and adversaries and to derive optimal strategies. The third aspect concerns the resilient and robust representation of intrinsic fingerprints for multiple forensic identification tasks. We propose to use the empirical frequency response as a generic type of intrinsic fingerprint that can facilitate the identification of various linear and shift-invariant (LSI) and non-LSI operations.
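    The empirical frequency response fingerprint can be sketched directly: given (input, output) image pairs for an operation, average the magnitude ratio of their 2-D spectra. For an LSI operation this approximates |H(ω₁, ω₂)|; the function name and the simple averaging scheme below are illustrative assumptions, not the dissertation's exact estimator.

```python
import numpy as np

def empirical_frequency_response(originals, processed, eps=1e-8):
    """Average |FFT(output)| / |FFT(input)| over image pairs. For an LSI
    operation this approximates the operation's magnitude response; for
    non-LSI operations it still yields a discriminative fingerprint."""
    acc = None
    for x, y in zip(originals, processed):
        X = np.fft.fft2(x.astype(np.float64))
        Y = np.fft.fft2(y.astype(np.float64))
        ratio = np.abs(Y) / (np.abs(X) + eps)
        acc = ratio if acc is None else acc + ratio
    return acc / len(originals)
```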

    Taxonomy for Anti-Forensics Techniques & Countermeasures

    Computer forensic tools are used by investigators to analyze evidence from devices seized at a crime scene or from a person, in such a way that the results or findings can be used in a court of law. These tools are very important and useful, as they help law enforcement personnel solve crimes. Computer criminals are now aware of the forensic tools used; therefore, they apply countermeasure techniques to obstruct the investigative process. By doing so, they make it difficult or almost impossible for investigators to uncover the evidence. These techniques, used against the computer forensics process, are called anti-forensics. This paper organizes many anti-forensic methods, techniques, and tools into a taxonomy. The taxonomy classifies anti-forensics into four levels: WHERE, WHICH, WHAT, and HOW. The WHERE level indicates where anti-forensics can occur during an investigation. The WHICH level indicates which anti-forensic techniques exist. The WHAT level defines the exact method used for each technique. Finally, the HOW level indicates the tools used. Additionally, some countermeasures are proposed.
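    One way to make the four-level structure concrete is as a nested mapping, sketched below with a couple of illustrative entries; the specific techniques and tools shown are common examples, not the paper's complete taxonomy.

```python
# Illustrative fragment of the WHERE / WHICH / WHAT / HOW hierarchy.
ANTI_FORENSICS_TAXONOMY = {
    "data hiding": {                          # WHICH: technique family
        "where": "evidence collection and examination",  # WHERE it strikes
        "methods": {                          # WHAT: exact methods
            "encryption": ["VeraCrypt", "BitLocker"],    # HOW: example tools
            "steganography": ["OpenStego"],
        },
    },
    "artifact wiping": {
        "where": "evidence examination and analysis",
        "methods": {
            "secure deletion": ["shred", "sdelete"],
        },
    },
}
```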

    An Overview on Image Forensics

    The aim of this survey is to provide a comprehensive overview of the state of the art in the area of image forensics. These techniques have been designed to identify the source of a digital image or to determine whether the content is authentic or modified, without any prior information about the image under analysis (such techniques are therefore defined as passive). All these tools work by detecting the presence, the absence, or the incongruence of some traces intrinsically tied to the digital image by the acquisition device and by any other operation after its creation. The paper is organized by classifying the tools according to the point in the image's history at which the corresponding footprint is left: acquisition-based methods, coding-based methods, and editing-based schemes.

    Preprocessing reference sensor pattern noise via spectrum equalization

    Although sensor pattern noise (SPN) has been proven to be an effective means of uniquely identifying digital cameras, some non-unique artifacts, shared amongst cameras that undergo the same or similar in-camera processing procedures, often give rise to false identifications. It is therefore desirable to suppress these unwanted artifacts so as to improve identification accuracy and reliability. In this work, we propose a novel preprocessing approach for attenuating the influence of the non-unique artifacts on the reference SPN in order to reduce the false identification rate. Specifically, we equalize the magnitude spectrum of the reference SPN by detecting and suppressing peaks according to the local characteristics of the spectrum, aiming to remove the interfering periodic artifacts. Combined with six SPN extraction or enhancement methods, our proposed Spectrum Equalization Algorithm (SEA) is evaluated on the Dresden image database as well as our own database, and compared with state-of-the-art preprocessing schemes. Experimental results indicate that the proposed procedure outperforms, or at least performs comparably to, the existing methods in terms of the overall ROC curve and the kappa statistic computed from a confusion matrix, and tends to be more resistant to JPEG compression for medium and small image blocks.
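    The peak-suppression step can be sketched as follows, assuming SciPy is available: detect magnitude-spectrum bins that dominate their local neighborhood, clip them to the local mean, and reconstruct the SPN with the original phase. The window size and threshold are placeholders, not the values used by SEA.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def equalize_spectrum(ref_spn, win=7, tau=2.0):
    """Suppress periodic (non-unique) artifacts in a reference SPN by
    clipping locally dominant magnitude-spectrum peaks to the local mean,
    then reconstructing with the original phase."""
    F = np.fft.fft2(ref_spn.astype(np.float64))
    mag, phase = np.abs(F), np.angle(F)
    local = uniform_filter(mag, size=win)   # local mean magnitude
    peaks = mag > tau * local               # bins that dominate their window
    mag_eq = np.where(peaks, local, mag)    # equalize: flatten the peaks
    return np.real(np.fft.ifft2(mag_eq * np.exp(1j * phase)))
```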

    SSD-GAN: Measuring the Realness in the Spatial and Spectral Domains

    This paper observes that the discriminator of a standard GAN misses high-frequency information, and we show that this issue stems from the downsampling layers employed in the network architecture. The issue deprives the generator of any incentive from the discriminator to learn the high-frequency content of the data, resulting in a significant spectrum discrepancy between generated and real images. Since the Fourier transform is a bijective mapping, we argue that reducing this spectrum discrepancy would boost the performance of GANs. To this end, we introduce SSD-GAN, an enhancement of GANs that alleviates the spectral information loss in the discriminator. Specifically, we propose to embed a frequency-aware classifier into the discriminator to measure the realness of the input in both the spatial and spectral domains. With the enhanced discriminator, the generator of SSD-GAN is encouraged to learn the high-frequency content of real data and to generate more faithful details. The proposed method is general and can be easily integrated into most existing GAN frameworks without excessive cost. The effectiveness of SSD-GAN is validated on various network architectures, objective functions, and datasets. Accepted to AAAI 2021. Code is available at https://github.com/cyq373/SSD-GAN.
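    The spectral half of the realness measure can be illustrated with the azimuthally averaged power spectrum, where the missing high frequencies of generated images show up as a tail-end deficit. This NumPy sketch of the feature and the convex combination of spatial and spectral scores is an illustration under stated assumptions, not the paper's PyTorch implementation; alpha is a placeholder.

```python
import numpy as np

def reduced_spectrum(img):
    """Azimuthally averaged log power spectrum: a 1-D profile in which
    generated images tend to show a high-frequency deficit."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    power = np.log1p(np.abs(F) ** 2)
    h, w = power.shape
    yy, xx = np.indices(power.shape)
    r = np.hypot(yy - h / 2, xx - w / 2).astype(int)   # radius of each bin
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)     # mean power per radius

def ssd_realness(spatial_score, spectral_score, alpha=0.5):
    """Combined realness: convex mix of the spatial discriminator output
    and a spectral classifier's output (alpha is a placeholder)."""
    return alpha * spatial_score + (1 - alpha) * spectral_score
```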