CNN-based fast source device identification
Source identification is an important topic in image forensics, since it allows one to trace back the origin of an image. This information is precious for claiming intellectual property, but also for revealing the authors of illicit material. In this paper we address the problem of device identification based on sensor noise and propose a fast and accurate solution using convolutional neural networks (CNNs). Specifically, we propose a 2-channel-based CNN that learns to compare a camera fingerprint and image noise at the patch level. The proposed solution turns out to be much faster than the conventional approach while also ensuring increased accuracy. This makes the approach particularly suitable in scenarios where large databases of images are analyzed, such as over social networks. In this vein, since images uploaded to social media usually undergo at least two compression stages, we include investigations on double JPEG compressed images, always reporting higher accuracy than standard approaches.
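The conventional approach this abstract refers to matches a camera's sensor-noise fingerprint against an image's noise residual, typically via normalized cross-correlation. A minimal sketch of that baseline check, using synthetic NumPy arrays as stand-ins for a real PRNU fingerprint and noise residuals (all data here is hypothetical):

```python
import numpy as np

def ncc(fingerprint, residual):
    """Normalized cross-correlation between a camera fingerprint
    and an image noise residual of the same shape."""
    f = fingerprint - fingerprint.mean()
    r = residual - residual.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(r)
    return float((f * r).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
prnu = rng.standard_normal((64, 64))                   # stand-in device fingerprint
matching = 0.1 * prnu + rng.standard_normal((64, 64))  # residual carrying that fingerprint
other = rng.standard_normal((64, 64))                  # residual from a different device

score_match = ncc(prnu, matching)   # noticeably above zero
score_other = ncc(prnu, other)      # near zero
```

A decision is then made by thresholding the score; the paper's 2-channel CNN replaces this fixed correlation with a learned patch-level comparison.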
DIPPAS: A Deep Image Prior PRNU Anonymization Scheme
Source device identification is an important topic in image forensics, since it allows one to trace back the origin of an image. Its forensic counterpart is source device anonymization, that is, masking any trace in the image that can be useful for identifying the source device. A typical trace exploited for source device identification is the Photo Response Non-Uniformity (PRNU), a noise pattern left by the device on the acquired images. In this paper, we devise a methodology for suppressing such a trace from natural images without significant impact on image quality. Specifically, we turn PRNU anonymization into an optimization problem in a Deep Image Prior (DIP) framework. In a nutshell, a Convolutional Neural Network (CNN) acts as a generator and returns an image that is anonymized with respect to the source PRNU while maintaining high visual quality. Unlike widely adopted deep learning paradigms, our proposed CNN is not trained on a set of input-target pairs of images. Instead, it is optimized to reconstruct the PRNU-free image from the original image under analysis itself. This makes the approach particularly suitable in scenarios where large heterogeneous databases are analyzed, and prevents any problems due to lack of generalization. Through numerical examples on publicly available datasets, we show our methodology to be effective compared to state-of-the-art techniques.
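The core idea, stripped of the CNN generator, is an optimization trading image fidelity against fingerprint suppression. The toy sketch below is not the DIPPAS method: it replaces the Deep Image Prior with plain gradient descent directly on the pixels, and the fingerprint, weights, and learning rate are all made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((32, 32))   # hypothetical PRNU fingerprint
K /= np.linalg.norm(K)              # unit-normalized for a simple projection
clean = rng.standard_normal((32, 32))
image = clean + 0.2 * K             # observed image carrying the PRNU trace

# Minimize ||x - image||^2 + lam * <x, K>^2 : stay close to the
# original while suppressing the component aligned with the fingerprint.
x = image.copy()
lam = 50.0
for _ in range(500):
    corr = float((x * K).sum())                  # projection onto the fingerprint
    grad = 2 * (x - image) + 2 * lam * corr * K  # gradient of the objective
    x -= 0.01 * grad

corr_before = abs(float((image * K).sum()))  # fingerprint energy before
corr_after = abs(float((x * K).sum()))       # strongly attenuated after
distortion = float(np.abs(x - image).mean()) # small per-pixel change
```

In DIPPAS the anonymized image is instead parameterized by a CNN, whose structural prior is what preserves visual quality while the PRNU term is driven down.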
Conditional Adversarial Camera Model Anonymization
The model of camera that was used to capture a particular photographic image
(model attribution) is typically inferred from high-frequency model-specific
artifacts present within the image. Model anonymization is the process of
transforming these artifacts such that the apparent capture model is changed.
We propose a conditional adversarial approach for learning such
transformations. In contrast to previous works, we cast model anonymization as
the process of transforming both high and low spatial frequency information. We
augment the objective with the loss from a pre-trained dual-stream model
attribution classifier, which constrains the generative network to transform
the full range of artifacts. Quantitative comparisons demonstrate the efficacy
of our framework in a restrictive non-interactive black-box setting.
Comment: ECCV 2020 - Advances in Image Manipulation workshop (AIM 2020).
Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces
Recent advances in deep learning have enabled forensics researchers to
develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized
inconsistencies in forensic traces using Siamese neural networks, either
explicitly during analysis or implicitly during training. At the same time,
deep learning has enabled new forms of anti-forensic attacks, such as
adversarial examples and generative adversarial network (GAN) based attacks.
Thus far, however, no anti-forensic attack has been demonstrated against image
splicing detection and localization algorithms. In this paper, we propose a new
GAN-based anti-forensic attack that is able to fool state-of-the-art splicing
detection and localization algorithms such as EXIF-Net, Noiseprint, and
Forensic Similarity Graphs. This attack operates by adversarially training an
anti-forensic generator against a set of Siamese neural networks so that it is
able to create synthetic forensic traces. Under analysis, these synthetic
traces appear authentic and are self-consistent throughout an image. Through a
series of experiments, we demonstrate that our attack is capable of fooling
forensic splicing detection and localization algorithms without introducing
visually detectable artifacts into an attacked image. Additionally, we
demonstrate that our attack outperforms existing alternative attack approaches.
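The detectors this attack targets score pairs of image patches by the similarity of their forensic traces, flagging a splice where similarity drops. A toy illustration of that principle, using cosine similarity on stand-in feature vectors rather than a real Siamese network (all vectors here are synthetic):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)
trace_a = rng.standard_normal(16)  # stand-in forensic trace of the host camera
trace_b = rng.standard_normal(16)  # trace of a different, spliced-in source

host_patch = trace_a + 0.1 * rng.standard_normal(16)
host_patch2 = trace_a + 0.1 * rng.standard_normal(16)
spliced_patch = trace_b + 0.1 * rng.standard_normal(16)

same_source = cosine(host_patch, host_patch2)    # high: consistent traces
cross_source = cosine(host_patch, spliced_patch) # low: inconsistent traces
```

An anti-forensic generator of the kind described above succeeds when it rewrites the spliced region's traces so that every patch pair scores like `same_source`.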
Multimedia Forensics
This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities related to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.
Handbook of Digital Face Manipulation and Detection
This open access book provides the first comprehensive collection of studies on the hot topic of digital face manipulation, such as DeepFakes, Face Morphing, or Reenactment. It combines the research fields of biometrics and media forensics, including contributions from academia and industry. Appealing to a broad readership, introductory chapters provide a comprehensive overview of the topic, addressing readers who wish to gain a brief overview of the state of the art. Subsequent chapters, which delve deeper into various research challenges, are oriented towards advanced readers. Moreover, the book provides a good starting point for young researchers as well as a reference guide pointing to further literature. Hence, the primary readership is academic institutions and industry currently involved in digital face manipulation and detection. The book could easily be used as a recommended text for courses in image processing, machine learning, media forensics, biometrics, and the general security area.
Media Forensics and DeepFakes: an overview
With the rapid progress of recent years, techniques that generate and
manipulate multimedia content can now guarantee a very advanced level of
realism. The boundary between real and synthetic media has become very thin. On
the one hand, this opens the door to a series of exciting applications in
different fields such as creative arts, advertising, film production, and video
games. On the other hand, it poses enormous security threats. Software packages
freely available on the web allow any individual, without special skills, to
create very realistic fake images and videos. So-called deepfakes can be used
to manipulate public opinion during elections, commit fraud, discredit or
blackmail people. Potential abuses are limited only by human imagination.
Therefore, there is an urgent need for automated tools capable of detecting
false multimedia content and preventing the spread of dangerous false
information. This review paper aims to present an analysis of the methods for
visual media integrity verification, that is, the detection of manipulated
images and videos. Special emphasis will be placed on the emerging phenomenon
of deepfakes and, from the point of view of the forensic analyst, on modern
data-driven forensic methods. The analysis will help to highlight the limits of
current forensic tools, the most relevant issues, the upcoming challenges, and
suggest future directions for research.