A PatchMatch-based Dense-field Algorithm for Video Copy-Move Detection and Localization
We propose a new algorithm for the reliable detection and localization of
video copy-move forgeries. Discovering well crafted video copy-moves may be
very difficult, especially when some uniform background is copied to occlude
foreground objects. To reliably detect both additive and occlusive copy-moves
we use a dense-field approach, with invariant features that guarantee
robustness to several post-processing operations. To limit complexity, a
suitable video-oriented version of PatchMatch is used, with a multiresolution
search strategy, and a focus on volumes of interest. Performance assessment
relies on a new dataset, designed ad hoc, with realistic copy-moves and a wide
variety of challenging situations. Experimental results show the proposed
method to detect and localize video copy-moves with good accuracy even in
adverse conditions.
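The dense-field search described above can be illustrated with a minimal 2D PatchMatch in numpy: random initialisation of an offset field, then alternating propagation and random-search passes, with offsets below a minimum length rejected so that a patch cannot trivially match itself (a constraint copy-move detection needs). All names and parameters here are illustrative; this is a sketch of the generic PatchMatch idea, not the paper's video-oriented multiresolution implementation.

```python
import numpy as np

def patch_dist(img, y1, x1, y2, x2, p):
    """Sum of squared differences between two p x p patches."""
    a = img[y1:y1 + p, x1:x1 + p]
    b = img[y2:y2 + p, x2:x2 + p]
    return float(np.sum((a - b) ** 2))

def patchmatch(img, p=4, iters=4, min_offset=8, seed=0):
    """Approximate nearest-neighbour field of patch offsets.

    Offsets shorter than `min_offset` are rejected so each patch is
    matched to a *different* image region, as copy-move detection
    requires (a patch matches itself perfectly at offset zero).
    """
    img = np.asarray(img, dtype=np.float64)
    rng = np.random.default_rng(seed)
    h, w = img.shape[0] - p + 1, img.shape[1] - p + 1
    nnf = np.zeros((h, w, 2), dtype=np.int64)   # (dy, dx) per patch
    cost = np.full((h, w), np.inf)

    def try_offset(y, x, dy, dx):
        # Accept a candidate offset only if it is long enough and improves cost.
        ty, tx = y + dy, x + dx
        if 0 <= ty < h and 0 <= tx < w and abs(dy) + abs(dx) >= min_offset:
            c = patch_dist(img, y, x, ty, tx, p)
            if c < cost[y, x]:
                cost[y, x], nnf[y, x] = c, (dy, dx)

    # Random initialisation, retrying to respect the minimum-offset constraint.
    for y in range(h):
        for x in range(w):
            for _ in range(50):
                ty, tx = rng.integers(0, h), rng.integers(0, w)
                if abs(ty - y) + abs(tx - x) >= min_offset:
                    break
            nnf[y, x] = (ty - y, tx - x)
            cost[y, x] = patch_dist(img, y, x, ty, tx, p)

    for it in range(iters):
        # Alternate scan direction each iteration, as in standard PatchMatch.
        step = 1 if it % 2 == 0 else -1
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: borrow the already-scanned neighbour's offset.
                if 0 <= y - step < h:
                    try_offset(y, x, *nnf[y - step, x])
                if 0 <= x - step < w:
                    try_offset(y, x, *nnf[y, x - step])
                # Random search: shrinking window around the current best offset.
                r = max(h, w)
                while r >= 1:
                    dy = nnf[y, x, 0] + rng.integers(-r, r + 1)
                    dx = nnf[y, x, 1] + rng.integers(-r, r + 1)
                    try_offset(y, x, dy, dx)
                    r //= 2
    return nnf, cost
```

Regions whose final offsets are spatially coherent and whose costs are near zero are copy-move candidates; the paper's method adds invariant features, a multiresolution schedule, and a temporal dimension on top of this basic search.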
Learning from Jesus’ Wife: What Does Forgery Have to Do with the Digital Humanities?
McGrath’s chapter on the so-called Gospel of Jesus’ Wife sets aside as settled the question of the papyrus’ authenticity, and explores instead what we can learn about the Digital Humanities and scholarly interaction in a digital era from the way the discussions and investigations of that work unfolded, and how the issues that arose were handled. As news of purported new finds can spread around the globe instantaneously, facilitated by current technology and social media, how can academics use similar technology to evaluate authenticity and, even more importantly, inform the broader public about the importance of provenance and the need for skepticism toward finds that surface via the antiquities market?
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image by exploiting certain manipulation techniques such as
copy-clone, object splicing, and removal, which mislead viewers. Identifying
these manipulations, however, is a very challenging task because manipulated
regions are not visually apparent. This paper proposes a
high-confidence manipulation localization architecture which utilizes
resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder
network to segment out manipulated regions from non-manipulated ones.
Resampling features are used to capture artifacts like JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network exploits
larger receptive fields (spatial maps) and frequency domain correlation to
analyze the discriminative characteristics between manipulated and
non-manipulated regions by incorporating an encoder and an LSTM network. Finally,
a decoder network learns the mapping from low-resolution feature maps to
pixel-wise predictions for image tamper localization. With the predicted mask
provided by the final (softmax) layer of the proposed architecture, end-to-end
training is performed to learn the network parameters through back-propagation
using ground-truth masks. Furthermore, a large image-splicing dataset is
introduced to guide the training process. The proposed method is capable of
localizing image manipulations at pixel level with high precision, which is
demonstrated through rigorous experimentation on three diverse datasets.
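The resampling features mentioned in this abstract build on a classical observation: interpolation introduces periodic linear dependence between neighbouring samples, so a linear-predictor residual shows a periodic pattern that a spectral peak reveals. A small 1D numpy sketch of that idea (illustrative names; this is not the paper's actual feature extractor):

```python
import numpy as np

def resampling_score(signal):
    """Score the periodicity of a second-difference predictor residual.

    Linear upsampling makes every other sample an exact average of its
    neighbours, so the residual vanishes periodically; the FFT of the
    residual magnitude then shows a strong peak at the resampling rate.
    """
    resid = np.abs(signal[:-2] - 2 * signal[1:-1] + signal[2:])
    spec = np.abs(np.fft.rfft(resid - resid.mean()))
    # Peak strength relative to the average spectral magnitude.
    return spec.max() / (spec.mean() + 1e-12)

def upsample_linear(signal, factor=2):
    """Linearly interpolate a 1D signal to `factor` times its sample density."""
    x_old = np.arange(len(signal))
    x_new = np.linspace(0, len(signal) - 1, factor * (len(signal) - 1) + 1)
    return np.interp(x_new, x_old, signal)
```

An original (never-resampled) signal yields a flat residual spectrum and a low score, while an upsampled one scores much higher; 2D analogues of such periodicity cues are what the resampling-feature stage feeds into the LSTM and encoder-decoder stages.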
Exposing image forgery by detecting traces of feather operation
Powerful digital image editing tools make it very easy to produce a perfect image forgery. The feather operation is necessary when tampering with an image via a copy–paste operation because it helps the boundary of the pasted object blend smoothly and unobtrusively with its surroundings. We propose a blind technique capable of detecting traces of the feather operation to expose image forgeries. We model the feather operation and observe that pixels in a feathered region exhibit similarity in their gradient phase angle and feather radius. An effective scheme is designed to estimate each feathered pixel's gradient phase angle and feather radius, and a pixel's similarity to its neighboring pixels is defined and used to distinguish feathered from unfeathered pixels. A degree of image credibility is also defined, which evaluates the authenticity of an image more informatively than a binary yes/no decision. Results of experiments on several forgeries demonstrate the effectiveness of the technique.
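The key signal in this abstract is that feathered boundary pixels share a gradient phase angle with their neighbours. A minimal numpy sketch of that measurement, assuming a grayscale image (illustrative names; the paper's estimator additionally models feather radius):

```python
import numpy as np

def gradient_phase(img):
    """Per-pixel gradient phase angle (radians) of a grayscale image."""
    gy, gx = np.gradient(np.asarray(img, dtype=np.float64))
    return np.arctan2(gy, gx)

def phase_similarity(phase):
    """Mean cosine similarity of each pixel's phase to its 4-neighbours.

    Pixels along a feathered boundary share a gradient direction, so
    their similarity approaches 1; textured or noisy regions score lower.
    """
    sims = []
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        shifted = np.roll(np.roll(phase, dy, axis=0), dx, axis=1)
        sims.append(np.cos(phase - shifted))
    return np.mean(sims, axis=0)
```

A smooth intensity ramp (a crude stand-in for a feathered transition) scores near 1 everywhere, while noise scores markedly lower; thresholding such a similarity map is one simple way to flag candidate feathered regions.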
A survey on passive digital video forgery detection techniques
Digital media devices such as smartphones, cameras, and notebooks are becoming increasingly popular, and people share digital images, videos, and audio in large quantities through platforms such as Facebook, WhatsApp, and Twitter. In crime scene investigations especially, digital evidence plays a crucial role in the courtroom. At the same time, high-quality software tools make it ever easier to manipulate and fabricate video content. It is therefore necessary to develop authentication methods for detecting and verifying manipulated videos. The objective of this paper is to provide a comprehensive review and analysis of existing passive techniques for detecting video forgeries. First, an overview of the basic information needed to understand video forgery detection is presented. Then the techniques used in spatial, temporal, and spatio-temporal domain analysis of videos are examined in depth, along with the datasets used and their limitations. In the following sections, standard benchmark video forgery datasets and a generalized architecture for passive video forgery detection techniques are discussed in more detail. Finally, gaps in existing surveys are identified so that forged videos can be detected more effectively in the future.