An Evaluation of Digital Image Forgery Detection Approaches
With the advancement of digital image processing and editing tools, a digital
image can be easily manipulated. Detecting image manipulation is vital because
an image can be used as legal evidence, in forensic investigation, and in many
other fields. Image forgery detection techniques aim to verify the authenticity
of digital images without any prior information about the original image. There
are numerous ways of altering an image, for example resampling, splicing, and
copy-move. In this paper, we examine different types of image forgery and their
detection techniques; mainly we focus on pixel-based image forgery detection
techniques.
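Of the manipulations listed above, copy-move is the easiest to illustrate at the pixel level: a region is duplicated within the same image, so a detector can search for identical blocks at distinct locations. A minimal numpy sketch (exact block matching only; practical detectors use robust block features such as DCT or PCA coefficients to survive compression):

```python
import numpy as np

def detect_copy_move(img, block=8, min_dist=16):
    """Flag pairs of identical block-size regions that lie far apart.
    Exact matching is fragile in practice; this only sketches the
    pixel-based idea."""
    h, w = img.shape
    seen = {}       # block bytes -> first location it appeared at
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                y0, x0 = seen[key]
                if abs(y - y0) + abs(x - x0) >= min_dist:
                    matches.append(((y0, x0), (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Build a forged image: copy an 8x8 patch to a new location.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[40:48, 40:48] = img[8:16, 8:16]
print(detect_copy_move(img))  # source and destination of the cloned block
```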
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image by exploiting certain manipulation techniques such as
copy-clone, object splicing, and removal, which mislead the viewers. In
contrast, the identification of these manipulations becomes a very challenging
task as manipulated regions are not visually apparent. This paper proposes a
high-confidence manipulation localization architecture which utilizes
resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder
network to segment out manipulated regions from non-manipulated ones.
Resampling features are used to capture artifacts like JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network exploits
larger receptive fields (spatial maps) and frequency domain correlation to
analyze the discriminative characteristics between manipulated and
non-manipulated regions by incorporating the encoder and LSTM networks. Finally,
the decoder network learns the mapping from low-resolution feature maps to
pixel-wise predictions for image tamper localization. With the predicted mask
provided by the final (softmax) layer of the proposed architecture, end-to-end
training is performed to learn the network parameters through back-propagation
using ground-truth masks. Furthermore, a large image splicing dataset is
introduced to guide the training process. The proposed method is capable of
localizing image manipulations at the pixel level with high precision, which is
demonstrated through rigorous experimentation on three diverse datasets.
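The architecture described above can be caricatured in a few lines of numpy: an encoder that downsamples to a low-resolution feature map, a recurrent scan standing in for the LSTM, and a decoder that upsamples back to a pixel-wise mask. This toy stand-in (our own construction, not the paper's network) only shows the data flow:

```python
import numpy as np

def encode(img, k=4):
    # "Encoder": k x k average pooling to a low-resolution feature map.
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def recurrent_scan(rows, wh=0.5, wx=0.5):
    # Toy recurrent cell scanned over rows, standing in for the LSTM.
    h = np.zeros(rows.shape[1])
    out = []
    for x in rows:
        h = np.tanh(wh * h + wx * x)
        out.append(h)
    return np.array(out)

def decode(feat, k=4):
    # "Decoder": nearest-neighbour upsampling back to pixel resolution.
    return np.repeat(np.repeat(feat, k, axis=0), k, axis=1)

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                     # "manipulated" region
mask_logits = decode(recurrent_scan(encode(img)))
mask = (mask_logits > 0.2).astype(int)    # crude pixel-wise prediction
print(mask.shape)
```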
Exposing DeepFake Videos By Detecting Face Warping Artifacts
In this work, we describe a new deep learning based method that can
effectively distinguish AI-generated fake videos (referred to as {\em DeepFake}
videos hereafter) from real videos. Our method is based on the observation
that current DeepFake algorithms can only generate images of limited
resolution, which then need to be warped to match the original faces in the
source video. Such transforms leave distinctive artifacts in the resulting
DeepFake videos, and we show that they can be effectively captured by
convolutional neural networks (CNNs). Compared to previous methods, which use a
large amount of real and DeepFake-generated images to train a CNN classifier,
our method does not need DeepFake-generated images as negative training
examples, since we target the artifacts of affine face warping as the
distinctive feature to distinguish real and fake images. The advantages of our
method are two-fold:
(1) Such artifacts can be simulated directly using simple image processing
operations on an image to make it a negative example. Since training a DeepFake
model to generate negative examples is time-consuming and resource-demanding,
our method saves a great deal of time and resources in training data
collection. (2) Since such artifacts generally exist in DeepFake videos from
different sources, our method is more robust than others. Our method is
evaluated on two sets of DeepFake video datasets for its effectiveness in
practice.
Comment: CVPRW 201
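Advantage (1) can be sketched directly: a negative example is manufactured from a real face crop by mimicking the low-resolution generation step, here with block-average downscaling followed by nearest-neighbour upscaling (a simplification of the affine warping pipeline the paper describes):

```python
import numpy as np

def simulate_warp_artifact(face, scale=4):
    """Turn a real face crop into a negative training example by
    imitating DeepFake's limited-resolution generation: downscale by
    block averaging, then upscale back. Real pipelines also apply
    Gaussian blur and affine transforms; this is a simplification."""
    h, w = face.shape
    low = face.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)

face = np.arange(64 * 64, dtype=float).reshape(64, 64)
fake = simulate_warp_artifact(face)
# Same size as the input, but high-frequency detail is lost,
# leaving the resampling artifact a CNN can learn to spot.
print(fake.shape, float(np.abs(face - fake).mean()))
```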
Fighting Fake News: Image Splice Detection via Learned Self-Consistency
Advances in photo editing and manipulation tools have made it significantly
easier to create fake imagery. Learning to detect such manipulations, however,
remains a challenging problem due to the lack of sufficient amounts of
manipulated training data. In this paper, we propose a learning algorithm for
detecting visual image manipulations that is trained only using a large dataset
of real photographs. The algorithm uses the automatically recorded photo EXIF
metadata as supervisory signal for training a model to determine whether an
image is self-consistent -- that is, whether its content could have been
produced by a single imaging pipeline. We apply this self-consistency model to
the task of detecting and localizing image splices. The proposed method obtains
state-of-the-art performance on several image forensics benchmarks, despite
never seeing any manipulated images during training. That said, it is merely a
step in the long quest for a truly general-purpose visual forensics tool.
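The self-consistency idea can be imitated with a handcrafted statistic in place of the EXIF-supervised network: score every patch against a reference patch and flag patches whose "pipeline fingerprint" disagrees. In this sketch the fingerprint is simply the variance of the horizontal residual (our simplification; the paper learns the comparison from EXIF metadata):

```python
import numpy as np

rng = np.random.default_rng(1)

def patch_stat(p):
    # Crude stand-in for a learned pipeline fingerprint:
    # variance of the horizontal high-frequency residual.
    return (p[:, 1:] - p[:, :-1]).var()

def self_consistency(img, patch=16):
    """Score each patch against the top-left reference patch; scores
    near 1.0 mean consistent, low scores suggest a different imaging
    pipeline (a possible splice)."""
    h, w = img.shape
    ref = patch_stat(img[:patch, :patch])
    grid = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            s = patch_stat(img[i * patch:(i + 1) * patch,
                               j * patch:(j + 1) * patch])
            grid[i, j] = min(ref, s) / max(ref, s)
    return grid

img = rng.normal(0, 1.0, (64, 64))
img[32:, 32:] = rng.normal(0, 5.0, (32, 32))  # spliced-in noisier region
grid = self_consistency(img)
print(grid[3, 3], grid[0, 1])  # spliced patch scores far less consistent
```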
Multi-Patch Aggregation Models for Resampling Detection
Images captured nowadays are of varying dimensions, with smartphones and
DSLRs allowing users to choose from a list of available image resolutions. It
is therefore imperative for forensic algorithms such as resampling detection to
scale well for images of varying dimensions. However, in our experiments, we
observed that many state-of-the-art forensic algorithms are sensitive to image
size and their performance quickly degenerates when operated on images of
diverse dimensions despite re-training them using multiple image sizes. To
handle this issue, we propose a novel pooling strategy called ITERATIVE
POOLING. This pooling strategy can dynamically adjust input tensors in a
discrete manner without the loss of information seen in ROI max-pooling. It can
be used with any of the existing deep models; for demonstration purposes, we
show its utility on ResNet-18 for the case of resampling detection, a
fundamental operation underlying many kinds of image manipulation. Compared to
existing strategies and max-pooling, it gives up to a 7-8% improvement on
public datasets.
Comment: 6 pages; 6 tables; 4 figures
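The abstract does not spell out the pooling procedure, but one plausible reading is a loop that repeatedly 2x2 average-pools a variable-size input until it fits a fixed target size, so the downstream model always sees bounded dimensions. The sketch below is an illustrative guess, not the paper's algorithm:

```python
import numpy as np

def iterative_pool(x, target=8):
    """Repeatedly 2x2 average-pool until both spatial dims fit `target`.
    An illustrative guess at discretely adjusting variable-size tensors;
    the paper's exact ITERATIVE POOLING procedure may differ."""
    while x.shape[0] > target or x.shape[1] > target:
        h, w = x.shape
        h2, w2 = h - h % 2, w - w % 2   # drop an odd row/column if needed
        x = x[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))
    return x

# Inputs of very different sizes all shrink to a bounded spatial extent.
shapes = [iterative_pool(np.ones((s, s))).shape for s in (64, 100, 37)]
print(shapes)
```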
Grids and the Virtual Observatory
We consider several projects from astronomy that benefit from the Grid paradigm and
associated technology, many of which involve either massive datasets or the federation
of multiple datasets. We cover image computation (mosaicking, multi-wavelength
images, and synoptic surveys); database computation (representation through XML,
data mining, and visualization); and semantic interoperability (publishing, ontologies,
directories, and service descriptions).
High Dimensional Data Modeling Techniques for Detection of Chemical Plumes and Anomalies in Hyperspectral Images and Movies
We briefly review recent progress in techniques for modeling and analyzing
hyperspectral images and movies, in particular for detecting plumes of both
known and unknown chemicals. For detecting chemicals of known spectrum, we
extend the technique of using a single subspace for modeling the background to
a "mixture of subspaces" model to tackle more complicated background.
Furthermore, we use partial least squares regression on a resampled training
set to boost performance. For the detection of unknown chemicals we view the
problem as an anomaly detection problem, and use novel estimators with
low sample complexity for intrinsically low-dimensional data in
high dimensions that enable us to model the "normal" spectra and detect
anomalies. We apply these algorithms to benchmark data sets made available by
the Automated Target Detection program co-funded by NSF, DTRA and NGA, and
compare, when applicable, to current state-of-the-art algorithms, with
favorable results.
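The single-subspace baseline that this abstract generalizes can be written compactly: fit a low-dimensional background subspace by PCA and score each spectrum by its reconstruction residual, so spectra off the subspace (plumes, anomalies) score highest. A numpy sketch of this simplified single-subspace model (not the mixture-of-subspaces extension):

```python
import numpy as np

rng = np.random.default_rng(2)

def subspace_anomaly_scores(X, k=2):
    """Fit a k-dimensional background subspace via PCA and score each
    spectrum by its reconstruction residual. A single-subspace
    simplification of the mixture-of-subspaces background model."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    B = Vt[:k]                        # background basis vectors
    resid = Xc - Xc @ B.T @ B         # component outside the subspace
    return np.linalg.norm(resid, axis=1)

# Background spectra lie near a 2-D subspace; one "plume" pixel does not.
basis = rng.normal(size=(2, 30))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 30))
X[17] += rng.normal(size=30)          # anomalous spectrum
scores = subspace_anomaly_scores(X)
print(int(scores.argmax()))
```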
Comparison and analysis of photo image forgery detection techniques
Digital photo images are everywhere: on the covers of magazines, in
newspapers, in courtrooms, and all over the Internet. We are exposed to them
throughout the day. Given the ease with which images can be manipulated, we
need to be aware that seeing does not always imply believing. We propose
methodologies to identify such unbelievable photo images, and we succeed in
identifying the forged region given only the forged image. File formats add
tags in every file system, and contents are expressed through extensions; the
most popular digital cameras use JPEG, while other image formats include PNG,
BMP, etc. We have designed an algorithm based on the concept of abnormal
anomalies to identify the forged regions.
Comment: 12 pages, International Journal on Computational Sciences &
Applications (IJCSA) Vol. 2, No. 6, December 201
Resampling detection of recompressed images via dual-stream convolutional neural network
Resampling detection plays an important role in identifying image tampering,
such as image splicing. Currently, the resampling detection is still difficult
in recompressed images, which are yielded by applying resampling followed by
post-JPEG compression to primary JPEG images. Except for the scenario of low
quality primary compression, it remains rather challenging due to the
widespread use of middle/high quality compression in imaging devices. In this
paper, we propose a new convolutional neural network (CNN) method to learn the
resampling trace features directly from the recompressed images. To this end, a
noise extraction layer based on low-order high pass filters is deployed to
yield the image residual domain, which is more beneficial to extract
manipulation trace features. A dual-stream CNN is presented to capture the
resampling trails along different directions, where the horizontal and vertical
streams are interleaved and concatenated. Lastly, the learned features are fed
into a Sigmoid/Softmax layer, which acts as a binary/multi-class classifier for
achieving blind detection and parameter estimation of resampling,
respectively. Extensive experimental results demonstrate that our proposed
method can detect resampling effectively in recompressed images and
outperforms state-of-the-art detectors.
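The noise-extraction layer can be illustrated with the lowest-order high-pass filters, first differences along each direction, which produce the horizontal and vertical residual streams; the paper's actual filter bank may differ:

```python
import numpy as np

def highpass_residuals(img):
    """Low-order high-pass filtering as a noise-extraction step:
    first-order differences along the horizontal and vertical
    directions, giving the inputs to the two directional streams.
    (A sketch; the paper's exact filters are not given in the abstract.)"""
    h = img[:, 1:] - img[:, :-1]   # horizontal residual stream
    v = img[1:, :] - img[:-1, :]   # vertical residual stream
    return h, v

# A smooth vertical gradient: flat along rows, constant slope down columns.
img = np.outer(np.arange(8.0), np.ones(8))
h, v = highpass_residuals(img)
# The residuals isolate directional structure the raw pixels hide.
print(float(np.abs(h).sum()), float(np.abs(v).sum()))
```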
Content Authentication for Neural Imaging Pipelines: End-to-end Optimization of Photo Provenance in Complex Distribution Channels
Forensic analysis of digital photo provenance relies on intrinsic traces left
in the photograph at the time of its acquisition. Such analysis becomes
unreliable after heavy post-processing, such as down-sampling and
re-compression applied upon distribution on the Web. This paper explores
end-to-end optimization of the entire image acquisition and distribution
workflow to facilitate reliable forensic analysis at the end of the
distribution channel. We demonstrate that neural imaging pipelines can be
trained to replace the internals of digital cameras, and jointly optimized for
high-fidelity photo development and reliable provenance analysis. In our
experiments, the proposed approach increased image manipulation detection
accuracy from 45% to over 90%. The findings encourage further research towards
building more reliable imaging pipelines with explicit provenance-guaranteeing
properties.
Comment: Camera ready + supplement, CVPR'1
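The joint optimization implied above pairs a photo-fidelity term with a manipulation-detection term. A hedged sketch of such a combined objective (the loss forms and the weight `alpha` are our assumptions; the paper's exact formulation is not given in the abstract):

```python
import numpy as np

def joint_loss(developed, target, det_logits, det_labels, alpha=0.1):
    """Combined objective: L2 photo-development fidelity plus a binary
    cross-entropy manipulation-detection loss, traded off by alpha.
    An assumed form for illustration, not the paper's exact losses."""
    fidelity = np.mean((developed - target) ** 2)
    p = 1.0 / (1.0 + np.exp(-det_logits))            # sigmoid
    ce = -np.mean(det_labels * np.log(p) +
                  (1.0 - det_labels) * np.log(1.0 - p))
    return fidelity + alpha * ce

rng = np.random.default_rng(3)
developed, target = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
det_logits = np.array([2.0, -2.0])   # one manipulated, one pristine patch
det_labels = np.array([1.0, 0.0])
loss = joint_loss(developed, target, det_logits, det_labels)
print(float(loss))
```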