CoMoFoD - New database for copy-move forgery detection
Because many sophisticated image processing tools are readily available, digital image forgery is now commonplace. One common method is copy-move forgery, in which part of an image is copied to another location within the same image in order to hide or duplicate content. Numerous algorithms have been proposed for copy-move forgery detection (CMFD), but only a few benchmarking databases exist for evaluating them. We developed a new CMFD database consisting of 260 forged image sets. Every image set includes a forged image, two masks, and the original image. The images are grouped into 5 categories according to the applied manipulation: translation, rotation, scaling, combination, and distortion. In addition, postprocessing methods such as JPEG compression, blurring, noise adding, color reduction, etc., are applied to all forged and original images. In this paper we present the database organization and content, the creation of the forged images, the postprocessing methods, and database testing. The CoMoFoD database is available at http://www.vcl.fer.hr/comofod
Ministry of Science, Education and Sport, Croatia; project numbers: 036-0361630-1635 and 036-0361630-164
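The abstract does not spell out an evaluation protocol, but CMFD benchmarks like this one are typically scored pixel-wise against the provided binary masks. A minimal sketch of such scoring (the function name and protocol are illustrative assumptions, not from the paper):

```python
import numpy as np

def pixel_scores(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Pixel-level precision, recall and F1 for a binary forgery mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # correctly flagged pixels
    fp = np.logical_and(pred, ~gt).sum()   # falsely flagged pixels
    fn = np.logical_and(~pred, gt).sum()   # missed forged pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Scoring each postprocessing category separately (JPEG compression, blurring, etc.) then shows how robust a detector is to each distortion.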
File forensics for RAW camera image formats
Recent research in multimedia forensics has developed a variety of methods to detect image tampering and to identify the origin of image files. Many of these techniques are based on characteristics of the JPEG format, as it is the most widely used file format for digital images. In recent years, RAW image formats have gained popularity among amateur and professional photographers. This increase in their use, and possible misuse, makes these file formats an important subject for file forensic examinations. The aim of this paper is to explore to what extent methods previously developed for images in JPEG format can be applied to RAW image formats.
Camera-based Image Forgery Localization using Convolutional Neural Networks
Camera fingerprints are precious tools for a number of image forensics tasks.
A well-known example is the photo response non-uniformity (PRNU) noise pattern,
a powerful device fingerprint. Here, to address the image forgery localization
problem, we rely on noiseprint, a recently proposed CNN-based camera model
fingerprint. The CNN is trained to minimize the distance between same-model
patches, and maximize the distance otherwise. As a result, the noiseprint
accounts for model-related artifacts just like the PRNU accounts for
device-related non-uniformities. However, unlike the PRNU, it is only mildly
affected by residuals of high-level scene content. The experiments show that
the proposed noiseprint-based forgery localization method improves over the
PRNU-based reference.
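As a rough illustration of fingerprint-based localization (whether PRNU or noiseprint), one can correlate the image's noise residual with the reference fingerprint block by block and flag low-correlation blocks as potentially forged. This is a simplified sketch under assumed inputs (a precomputed residual and fingerprint array); the actual noiseprint method extracts the fingerprint with a trained CNN:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def localize(residual: np.ndarray, fingerprint: np.ndarray,
             block: int = 8, thresh: float = 0.1) -> np.ndarray:
    """Flag blocks whose residual correlates poorly with the fingerprint."""
    h, w = residual.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            mask[i, j] = ncc(residual[sl], fingerprint[sl]) < thresh
    return mask
```

Real systems replace the fixed threshold with statistical tests and overlapping windows, but the per-block correlation test is the core idea.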
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image by exploiting certain manipulation techniques such as
copy-clone, object splicing, and removal, which can mislead viewers. At the
same time, identifying these manipulations is very challenging because
manipulated regions are not visually apparent. This paper proposes a
high-confidence manipulation localization architecture that utilizes
resampling features, Long Short-Term Memory (LSTM) cells, and an
encoder-decoder network to segment manipulated regions from non-manipulated ones.
Resampling features are used to capture artifacts like JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network exploits
larger receptive fields (spatial maps) and frequency domain correlation to
analyze the discriminative characteristics between manipulated and
non-manipulated regions by incorporating the encoder and LSTM networks.
Finally, the decoder network learns the mapping from low-resolution feature
maps to pixel-wise predictions for image tamper localization. With the
predicted mask provided by the final (softmax) layer of the proposed
architecture, end-to-end training is performed to learn the network parameters
through back-propagation against ground-truth masks. Furthermore, a large
image splicing dataset is
introduced to guide the training process. The proposed method is capable of
localizing image manipulations at pixel level with high precision, which is
demonstrated through rigorous experimentation on three diverse datasets.
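The resampling features mentioned above build on the classic observation that interpolation introduces periodic correlations among neighboring pixels. A toy 1-D sketch of that trace (illustrative only, not the paper's feature extractor):

```python
import numpy as np

def resampling_score(x: np.ndarray) -> float:
    """Strength of the period-2 pattern in the linear-predictor residual.

    Upsampling by 2 with linear interpolation makes every other prediction
    error vanish, which shows up as a spike at the Nyquist frequency.
    Assumes the residual has even length so the last rfft bin is Nyquist.
    """
    e = np.abs(x[1:-1] - 0.5 * (x[:-2] + x[2:]))    # linear prediction error
    spec = np.abs(np.fft.rfft(e - e.mean()))
    return float(spec[-1] / (spec.mean() + 1e-12))  # Nyquist bin vs. average
```

The architecture in the paper learns richer 2-D versions of such periodicity cues from spatial maps rather than using a hand-crafted score like this one.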
Provenance analysis for instagram photos
As a feasible device fingerprint, sensor pattern noise (SPN) has been proven effective in the provenance analysis of digital images. However, with the rise of social media, millions of images are uploaded to and shared through social media sites every day. An image downloaded from a social network may have gone through a series of unknown image manipulations. Consequently, the trustworthiness of SPN has been challenged in the provenance analysis of images downloaded from social media platforms. In this paper, we investigate the effects of Instagram's pre-defined image filters on SPN-based image provenance analysis. We identify two groups of filters that affect the SPN in quite different ways, with Group I consisting of the filters that severely attenuate the SPN and Group II consisting of the filters that largely preserve the SPN in the images. We further propose a CNN-based classifier to perform filter-oriented image categorization, aiming to exclude the images manipulated by the filters in Group I and thus improve the reliability of SPN-based provenance analysis. The results on about 20,000 images and 18 filters are very promising, with an accuracy higher than 96% in differentiating the filters in Group I from those in Group II.
An In-Depth Study on Open-Set Camera Model Identification
Camera model identification refers to the problem of linking a picture to the
camera model used to shoot it. As this might be an enabling factor in different
forensic applications to single out possible suspects (e.g., detecting the
author of child abuse or terrorist propaganda material), many accurate camera
model attribution methods have been developed in the literature. One of their
main drawbacks, however, is the typical closed-set assumption of the problem.
This means that an investigated photograph is always assigned to one camera
model within a set of known ones present during investigation, i.e., training
time, and the fact that the picture can come from a completely unrelated camera
model during actual testing is usually ignored. Under realistic conditions, it
is not possible to assume that every picture under analysis belongs to one of
the available camera models. To deal with this issue, in this paper, we present
the first in-depth study on the possibility of solving the camera model
identification problem in open-set scenarios. Given a photograph, we aim at
detecting whether it comes from one of the known camera models of interest or
from an unknown one. We compare different feature extraction algorithms and
classifiers specially targeting open-set recognition. We also evaluate possible
open-set training protocols that can be applied along with any open-set
classifier, observing that one of the simplest of those alternatives obtains
the best results.
Thorough testing on independent datasets shows that it is possible to leverage
a recently proposed convolutional neural network as feature extractor paired
with a properly trained open-set classifier aiming at solving the open-set
camera model attribution problem even on small-scale image patches, improving
over available state-of-the-art solutions.
Comment: Published in IEEE Access
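A common baseline for the open-set decision described here is confidence thresholding: accept the classifier's top prediction only when its softmax confidence is high enough, and otherwise reject the image as coming from an unknown model. A minimal sketch (the model names, threshold value, and interface are illustrative assumptions, not the paper's method):

```python
import numpy as np

def predict_open_set(logits: np.ndarray, known_models: list,
                     thresh: float = 0.9) -> str:
    """Assign a known camera model or 'unknown' via max-softmax confidence."""
    z = logits - logits.max()          # stabilize the softmax numerically
    p = np.exp(z) / np.exp(z).sum()
    k = int(np.argmax(p))
    return known_models[k] if p[k] >= thresh else "unknown"
```

The open-set classifiers compared in the paper (and the training protocols they evaluate) are more elaborate than a single threshold, but they address the same accept-or-reject decision.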