Distinguishing Computer-generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning
Computer-generated graphics (CGs) are images generated by computer software.
The rapid development of computer graphics technologies has made it easier to
generate photorealistic computer graphics, and these graphics are quite
difficult to distinguish from natural images (NIs) with the naked eye. In this
paper, we propose a method based on sensor pattern noise (SPN) and deep
learning to distinguish CGs from NIs. Before being fed into our convolutional
neural network (CNN)-based model, these images (CGs and NIs) are clipped into
image patches. Furthermore, three high-pass filters (HPFs) are used to remove
low-frequency signals, which represent the image content. These filters are
also used to reveal the residual signal as well as SPN introduced by the
digital camera device. Unlike traditional methods for distinguishing
CGs from NIs, the proposed method utilizes a five-layer CNN to classify the
input image patches. Based on the classification results of the image patches,
we deploy a majority vote scheme to obtain the classification results for the
full-size images. The experiments demonstrate that (1) the proposed method
with three HPFs achieves better results than with only one HPF or none, and
(2) the proposed method with three HPFs achieves 100% accuracy even when the
NIs undergo JPEG compression with a quality factor of 75.
Comment: This paper has been published by Sensors. doi:10.3390/s18041296;
Sensors 2018, 18(4), 129
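The pipeline described above (high-pass filtering, patch clipping, majority voting) can be sketched as follows. The three kernels and the 64x64 patch size are illustrative assumptions, not necessarily the paper's exact choices, and the per-patch CNN classifier itself is left out:

```python
import numpy as np

# Three example high-pass kernels (assumed; the paper's exact filters may differ).
HPF_KERNELS = [
    np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], dtype=float),      # horizontal 2nd derivative
    np.array([[0, 1, 0], [0, -2, 0], [0, 1, 0]], dtype=float),      # vertical 2nd derivative
    np.array([[-1, 2, -1], [2, -4, 2], [-1, 2, -1]], dtype=float),  # Laplacian-like
]

def conv2d_valid(img, k):
    """Plain 'valid'-mode 2D filtering with numpy (no scipy dependency)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + h - kh + 1, j:j + w - kw + 1]
    return out

def residuals(img):
    """Stack the three high-pass residuals that suppress low-frequency content."""
    return np.stack([conv2d_valid(img, k) for k in HPF_KERNELS])

def clip_patches(img, size=64):
    """Clip a full-size image into non-overlapping size x size patches."""
    h, w = img.shape
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def majority_vote(patch_labels):
    """Image-level decision from per-patch decisions (0 = CG, 1 = NI); ties go to 1."""
    return int(np.sum(patch_labels) * 2 >= len(patch_labels))
```

Each kernel sums to zero, so flat (content-only) regions produce zero residual, leaving the high-frequency signal where the SPN lives.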
CNN-based fast source device identification
Source identification is an important topic in image forensics, since it
allows one to trace back the origin of an image. This information is precious
for claiming intellectual property, but also for revealing the authors of
illicit materials. In this paper we address the problem of device
identification based on sensor noise and propose a fast and accurate solution
using convolutional neural networks (CNNs). Specifically, we propose a
2-channel-based CNN that learns how to compare the camera fingerprint and image
noise at patch level. The proposed solution turns out to be much faster than
the conventional approach while also ensuring increased accuracy. This makes the
approach particularly suitable in scenarios where large databases of images are
analyzed, as on social networks. In this vein, since images uploaded to
social media usually undergo at least two compression stages, we include
investigations on double JPEG compressed images, always reporting higher
accuracy than standard approaches.
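For contrast with the 2-channel CNN, the conventional sensor-noise approach that the paper speeds up compares a camera fingerprint estimate against a test image's noise residual via normalized correlation. A minimal numpy sketch of that baseline on synthetic stand-in data (all arrays here are illustrative, not real PRNU estimates):

```python
import numpy as np

def ncc(fingerprint, noise):
    """Normalized cross-correlation between a camera fingerprint estimate
    and the noise residual of a test image (both as flat float arrays)."""
    f = fingerprint - fingerprint.mean()
    n = noise - noise.mean()
    return float(np.dot(f, n) / (np.linalg.norm(f) * np.linalg.norm(n) + 1e-12))

# Synthetic illustration: a residual that contains the fingerprint correlates
# far more strongly than a residual from a different camera.
rng = np.random.default_rng(0)
fp = rng.standard_normal(4096)                # stand-in camera fingerprint
same = 0.1 * fp + rng.standard_normal(4096)   # residual from the same camera
other = rng.standard_normal(4096)             # residual from another camera
```

In practice this correlation (or its PCE variant) must be recomputed for every fingerprint in the database, which is what makes a learned patch-level comparison attractive at scale.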
Identification of Digital Archives with a Machine Learning Approach
The development of archive digitization technology, especially cameras, is an advantage for archive users. On the other hand, the ease of falsifying records during digitization for a particular purpose is a problem for archivists in authenticating digital records. The purpose of this study is to identify the digital source, in particular the camera, as one of the tools used in the archive digitization process. The research methodology combines clustering and classification in machine learning to determine the source among 6 brands of cellphone cameras. The total data used in this experiment is 2400 digitized archive images. The experimental results show a classification accuracy of 99%. This shows that the study is very effective in authenticating digital archives, in particular in determining the source camera.
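The abstract does not name the specific clustering and classification algorithms, so purely as an illustrative sketch, here is a minimal centroid-based brand classifier over synthetic feature vectors (real features would be derived from the digitized images, e.g. sensor-noise statistics; the three "brands" below are hypothetical):

```python
import numpy as np

def fit_centroids(features, labels):
    """One centroid per camera brand, from training feature vectors."""
    return {b: features[labels == b].mean(axis=0) for b in np.unique(labels)}

def classify(centroids, x):
    """Assign a feature vector to the brand with the nearest centroid."""
    return min(centroids, key=lambda b: np.linalg.norm(x - centroids[b]))

# Synthetic illustration with 3 well-separated 'brands'.
rng = np.random.default_rng(1)
means = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
labels = np.repeat(np.arange(3), 50)
features = means[labels] + 0.3 * rng.standard_normal((150, 2))
cents = fit_centroids(features, labels)
acc = np.mean([classify(cents, f) == l for f, l in zip(features, labels)])
```

With well-separated feature clusters such a pipeline classifies every sample correctly, which is the regime a 99% accuracy figure suggests.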
Tracing images back to their social network of origin: A CNN-based approach
Recovering information about the history of digital content, such as an image or a video, can be strategic for addressing an investigation from its early stages. Storage devices, smartphones and PCs belonging to a suspect are usually confiscated as soon as a warrant is issued. Any multimedia content found is analyzed in depth in order to trace back its provenance and, if possible, its original source. This is particularly important when dealing with social networks, where most user-generated photos and videos are uploaded and shared daily. Being able to discern whether images were downloaded from a social network or directly captured by a digital camera can be crucial in leading subsequent investigations. In this paper, we propose a novel method based on convolutional neural networks (CNNs) to determine the image provenance, i.e., whether it originates from a social network, a messaging application or directly from a photo camera. By considering only the visual content, the method works irrespective of any manipulation of the metadata performed by an attacker. We have tested the proposed technique on three publicly available datasets of images downloaded from seven popular social networks, obtaining state-of-the-art results.
Aligned and Non-Aligned Double JPEG Detection Using Convolutional Neural Networks
Due to the wide diffusion of the JPEG coding standard, the image forensics
community has devoted significant attention over the years to the development
of double JPEG (DJPEG) compression detectors. The ability to detect whether an
image has been compressed twice provides paramount information toward image
authenticity assessment. Given the momentum recently gained by convolutional
neural networks (CNNs) in many computer vision tasks, in this paper we propose
to use CNNs for aligned and non-aligned double JPEG
compression detection. In particular, we explore the capability of CNNs to
capture DJPEG artifacts directly from images. Results show that the proposed
CNN-based detectors achieve good performance even with small size images (i.e.,
64x64), outperforming state-of-the-art solutions, especially in the non-aligned
case. Moreover, good results are also achieved in the commonly recognized
challenging case in which the first quality factor is larger than the second
one.
Comment: Submitted to Journal of Visual Communication and Image Representation
(first submission: March 20, 2017; second submission: August 2, 2017).
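The DJPEG artifacts such detectors pick up can be illustrated with a small simulation: in the aligned case, double quantization leaves periodic empty bins in the histograms of DCT coefficients. The quantization steps q1 = 7 and q2 = 5 below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def quantize(x, q):
    """JPEG-style quantization of DCT coefficients with step q."""
    return np.round(x / q) * q

# Synthetic DCT coefficients for one frequency band (Laplacian-distributed).
rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=20.0, size=100_000)

q1, q2 = 7, 5  # assumed first / second quantization steps
single = np.round(quantize(coeffs, q2) / q2).astype(int)
double = np.round(quantize(quantize(coeffs, q1), q2) / q2).astype(int)

def empty_bins(vals, lo=-10, hi=10):
    """Count unreachable histogram bins in a central range."""
    h = np.bincount(vals[(vals >= lo) & (vals <= hi)] - lo, minlength=hi - lo + 1)
    return int(np.sum(h == 0))
```

With q1 = 7 and q2 = 5, the double-compressed histogram can only reach bins of the form round(7k/5), so bins like ±2, ±5 and ±9 stay empty while the singly-compressed histogram fills every bin. These are exactly the periodic traces a CNN can learn directly from the pixel domain.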
Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision
Convolutional Neural Networks (CNNs) have proved very accurate in multiple
computer vision image classification tasks that required visual inspection in
the past (e.g., object recognition, face detection, etc.). Motivated by these
astonishing results, researchers have also started using CNNs to cope with
image forensic problems (e.g., camera model identification, tampering
detection, etc.). However, in computer vision, image classification methods
typically rely on visual cues easily detectable by human eyes. Conversely,
forensic solutions rely on almost invisible traces that are often very subtle
and lie in the fine details of the image under analysis. For this reason,
training a CNN to solve a forensic task requires some special care, as common
processing operations (e.g., resampling, compression, etc.) can strongly hinder
forensic traces. In this work, we focus on the effect that JPEG has on CNN
training considering different computer vision and forensic image
classification problems. Specifically, we consider the issues that arise from
JPEG compression and misalignment of the JPEG grid. We show that these effects
must be taken into account when generating a training dataset in order to
properly train a forensic detector without losing generalization capability,
whereas they can almost be ignored for computer vision tasks.
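The dataset-generation point can be made concrete with a sketch of JPEG-aware augmentation. The `jpeg_like` function below is a crude single-step stand-in for real JPEG (which uses quality-dependent quantization tables, entropy coding and chroma subsampling), and the random crop is there to break any fixed alignment with the 8x8 block grid:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

D = dct_matrix()

def jpeg_like(img, step):
    """Crude JPEG stand-in: quantize 8x8 block DCT coefficients with one step."""
    h, w = (s - s % 8 for s in img.shape)
    out = np.empty((h, w))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = D @ img[i:i + 8, j:j + 8] @ D.T
            out[i:i + 8, j:j + 8] = D.T @ (np.round(c / step) * step) @ D
    return out

def jpeg_aware_augment(img, rng):
    """Training-time augmentation: compress with a random strength and crop
    with a random offset in [0, 8) so the CNN cannot lock onto one grid."""
    img = jpeg_like(img, step=rng.choice([4, 8, 16]))
    dy, dx = rng.integers(0, 8, size=2)
    return img[dy:, dx:]
```

Randomizing both the compression strength and the grid alignment at training time forces a forensic CNN to rely on traces that survive these operations, rather than memorizing one quantization setting.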