Autoencoder with recurrent neural networks for video forgery detection
Video forgery detection has become an important issue in recent years,
because modern editing software provides powerful and easy-to-use tools to
manipulate videos. In this paper we propose to perform detection by means of
deep learning, with an architecture based on autoencoders and recurrent neural
networks. A training phase on a few pristine frames allows the autoencoder to
learn an intrinsic model of the source. Then, forged material is singled out as
anomalous, as it does not fit the learned model, and is encoded with a large
reconstruction error. Recurrent networks, implemented with the long short-term
memory (LSTM) model, are used to exploit temporal dependencies. Preliminary results on
forged videos show the potential of this approach.
Comment: Presented at IS&T Electronic Imaging: Media Watermarking, Security,
and Forensics, January 201
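The detection criterion above can be sketched in a few lines: after the autoencoder is trained on pristine frames, a threshold on the per-frame reconstruction error separates pristine from forged material. This is a minimal numpy sketch of that anomaly rule only (not the autoencoder itself); `fit_threshold`, `flag_forged`, and the `k` parameter are illustrative names, not taken from the paper.

```python
import numpy as np

def fit_threshold(pristine_errors, k=3.0):
    # Learn a decision threshold from reconstruction errors on pristine
    # frames: errors beyond mean + k * std are treated as anomalous.
    mu, sigma = pristine_errors.mean(), pristine_errors.std()
    return mu + k * sigma

def flag_forged(frame_errors, threshold):
    # Boolean mask: True where the autoencoder failed to reconstruct
    # the frame well, i.e. the frame does not fit the learned source model.
    return frame_errors > threshold
```

In practice the errors would come from the LSTM-based autoencoder's per-frame reconstruction loss; here they are just arrays of floats.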
Boosting Image Forgery Detection using Resampling Features and Copy-move analysis
Realistic image forgeries involve a combination of splicing, resampling,
cloning, region removal and other methods. While resampling detection
algorithms are effective in detecting splicing and resampling, copy-move
detection algorithms excel in detecting cloning and region removal. In this
paper, we combine these complementary approaches in a way that boosts the
overall accuracy of image manipulation detection. We use the copy-move
detection method as a pre-filtering step and pass those images that are
classified as untampered to a deep-learning-based resampling detection
framework. Experimental results on various datasets, including the 2017 NIST
Nimble Challenge Evaluation dataset comprising nearly 10,000 pristine and
tampered images, show a consistent increase of 8%-10% in detection rates
when the copy-move algorithm is combined with different resampling
detection algorithms.
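The two-stage cascade described above is straightforward to express: copy-move detection acts as a pre-filter, and only images it passes as untampered reach the resampling detector. A minimal sketch, assuming the two detectors are supplied as callables returning True for tampered input; `cascade_detect` and the label strings are illustrative, not from the paper.

```python
def cascade_detect(image, copy_move_detector, resampling_detector):
    # Stage 1: copy-move detection as a pre-filtering step.
    if copy_move_detector(image):
        return "tampered (copy-move)"
    # Stage 2: images classified as untampered by stage 1 are passed
    # to the deep-learning-based resampling detection framework.
    if resampling_detector(image):
        return "tampered (resampling)"
    return "untampered"
```

The design point is that the two detectors are complementary: each stage catches manipulation types the other misses, which is what yields the reported accuracy boost.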
Do GANs leave artificial fingerprints?
In the last few years, generative adversarial networks (GANs) have shown
tremendous potential for a number of applications in computer vision and
related fields. With the current pace of progress, it is a sure bet they will
soon be able to generate high-quality images and videos, virtually
indistinguishable from real ones. Unfortunately, realistic GAN-generated images
pose serious threats to security, starting with a possible flood of fake
multimedia, and multimedia forensic countermeasures are urgently needed. In this
work, we show that each GAN leaves its specific fingerprint in the images it
generates, just like real-world cameras mark acquired images with traces of
their photo-response non-uniformity pattern. Source identification experiments
with several popular GANs show such fingerprints to represent a precious asset
for forensic analyses.
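The fingerprint idea parallels PRNU-based camera attribution: averaging noise residuals over many images from the same source cancels out content and leaves the source-specific pattern, which can then be matched by correlation. A minimal numpy sketch under those assumptions; the function names and the use of plain normalized cross-correlation are illustrative, not the paper's exact procedure.

```python
import numpy as np

def estimate_fingerprint(residuals):
    # Average noise residuals (image minus a denoised version) over many
    # images from one source; content averages out, the fingerprint remains.
    return np.mean(residuals, axis=0)

def ncc(a, b):
    # Normalized cross-correlation, used to match a test residual
    # against a candidate source fingerprint.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Source identification then reduces to picking the candidate fingerprint with the highest correlation against the test image's residual.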
CNN-based fast source device identification
Source identification is an important topic in image forensics, since it
allows the origin of an image to be traced. This is precious information
both for claiming intellectual property and for revealing the authors of
illicit material. In this paper we address the problem of device
identification based on sensor noise and propose a fast and accurate solution
using convolutional neural networks (CNNs). Specifically, we propose a
2-channel-based CNN that learns a way of comparing camera fingerprint and image
noise at patch level. The proposed solution turns out to be much faster than
the conventional approach while achieving higher accuracy. This makes the
approach particularly suitable in scenarios where large databases of images are
analyzed, such as on social networks. In this vein, since images uploaded to
social media usually undergo at least two compression stages, we include
investigations on double JPEG-compressed images, always reporting higher
accuracy than standard approaches.
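The 2-channel input described above can be sketched as data preparation: the camera fingerprint and the image noise residual are stacked channel-wise per patch, and the CNN scores each pair. This numpy sketch covers only the patch-pairing step; `make_patch_pairs` and the patch/stride values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def make_patch_pairs(fingerprint, noise, patch=32, stride=32):
    # Stack the camera fingerprint and the image noise residual as a
    # 2-channel patch; a CNN would then score similarity per patch.
    pairs = []
    h, w = fingerprint.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            pairs.append(np.stack([fingerprint[y:y + patch, x:x + patch],
                                   noise[y:y + patch, x:x + patch]]))
    return np.stack(pairs)  # shape (num_patches, 2, patch, patch)
```

Operating at patch level is what makes the approach fast on large databases: patches are small, fixed-size inputs, and their scores can be aggregated into one image-level decision.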