34 research outputs found
On the Effectiveness of Image Manipulation Detection in the Age of Social Media
Image manipulation detection algorithms designed to identify local anomalies
often rely on the manipulated regions being "sufficiently" different from the
rest of the non-tampered regions in the image. However, such anomalies might
not be easily identifiable in high-quality manipulations, and their use is
often based on the assumption that certain image phenomena are associated with
the use of specific editing tools. This makes the task of manipulation
detection hard in and of itself, with state-of-the-art detectors only being
able to detect a limited number of manipulation types. More importantly, when
the anomaly assumption does not hold, false positives on otherwise
non-manipulated images become a serious problem.
To understand the current state of manipulation detection, we present an
in-depth analysis of deep learning-based and learning-free methods, assessing
their performance on different benchmark datasets containing tampered and
non-tampered samples. We provide a comprehensive study of their suitability for
detecting different manipulations as well as their robustness when presented
with non-tampered data. Furthermore, we propose a novel deep learning-based
pre-processing technique that accentuates the anomalies present in manipulated
regions to make them more identifiable by a variety of manipulation detection
methods. To this end, we introduce an anomaly enhancement loss that, when used
with a residual architecture, improves the performance of different detection
algorithms with a minimal introduction of false positives on the
non-manipulated data.
Lastly, we introduce an open-source manipulation detection toolkit comprising
a number of standard detection algorithms.
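The anomaly enhancement idea above can be pictured as a small contrastive objective: keep residual features quiet on pristine pixels while pushing manipulated-region residuals away from them. The hinge form, margin, and feature shapes below are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def anomaly_enhancement_loss(residual, mask, margin=1.0):
    """Sketch of an anomaly-enhancement objective (illustrative only;
    the hinge form and hyper-parameters are assumptions, not the
    paper's exact formulation).

    residual : (H, W, C) feature map from a residual architecture
    mask     : (H, W) ground-truth tamper mask (1 = manipulated)
    """
    tampered = residual[mask == 1]   # (N_t, C)
    pristine = residual[mask == 0]   # (N_p, C)
    # Keep pristine residuals close to zero, limiting false positives
    # on non-manipulated content.
    pristine_term = float(np.mean(np.sum(pristine ** 2, axis=-1))) if len(pristine) else 0.0
    # Hinge term: push the mean manipulated residual at least `margin`
    # away from the mean pristine residual, accentuating the anomaly.
    if len(tampered) and len(pristine):
        gap = np.linalg.norm(tampered.mean(axis=0) - pristine.mean(axis=0))
        contrast_term = max(0.0, margin - gap) ** 2
    else:
        contrast_term = 0.0
    return pristine_term + contrast_term
```

When the two class means are already separated by more than the margin and the pristine residuals are near zero, the loss vanishes, which matches the stated goal of enhancing anomalies without introducing false positives.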
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image through manipulation techniques such as copy-clone,
object splicing, and removal, which mislead viewers. Conversely, identifying
these manipulations is a very challenging task, as manipulated regions are
not visually apparent. This paper proposes a
high-confidence manipulation localization architecture which utilizes
resampling features, Long Short-Term Memory (LSTM) cells, and an
encoder-decoder network to segment manipulated regions from non-manipulated ones.
Resampling features are used to capture artifacts like JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network exploits
larger receptive fields (spatial maps) and frequency-domain correlations to
analyze the discriminative characteristics between manipulated and
non-manipulated regions by incorporating an encoder and an LSTM network.
Finally, a decoder network learns the mapping from low-resolution feature
maps to pixel-wise predictions for image tamper localization. With the
predicted mask provided by the final (softmax) layer of the proposed
architecture, end-to-end
training is performed to learn the network parameters through back-propagation
using ground-truth masks. Furthermore, a large image splicing dataset is
introduced to guide the training process. The proposed method is capable of
localizing image manipulations at pixel level with high precision, which is
demonstrated through rigorous experimentation on three diverse datasets.
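The resampling artifacts that these features capture can be illustrated with a minimal 1-D example in the spirit of linear-predictor residual analysis: interpolated samples are well predicted by their neighbours, so the prediction error of a resampled signal is periodic and shows up as a spectral peak. The fixed predictor weights below are an illustrative assumption; the paper's network learns such features from 2-D data.

```python
import numpy as np

def resampling_spectrum(signal, weights=(0.5, 0.0, 0.5)):
    """Magnitude spectrum of the linear-prediction error of a 1-D signal.

    A toy stand-in for resampling-feature extraction: periodic
    correlations introduced by interpolation appear as peaks in the
    spectrum of the prediction error.
    """
    w = np.asarray(weights)
    prediction = np.convolve(signal, w, mode="same")
    error = np.abs(signal - prediction)
    # Remove the DC component before looking for periodic peaks.
    return np.abs(np.fft.rfft(error - error.mean()))

# 2x linear upsampling makes every other prediction error (near) zero,
# so the error alternates with period 2 and peaks at the highest bin.
rng = np.random.default_rng(0)
x = rng.normal(size=128)
upsampled = np.interp(np.arange(0, 128, 0.5), np.arange(128), x)
spec = resampling_spectrum(upsampled)
```

The same reasoning extends to rotation and shearing, which also interpolate pixel values on a regular lattice and therefore leave periodic correlations behind.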
Visual and Textual Analysis for Image Trustworthiness Assessment within Online News
The majority of news published online presents one or more images or videos, which make the news easier to consume and therefore more attractive to huge audiences. As a consequence, news with catchy multimedia content can spread and go viral extremely quickly. Unfortunately, the availability and sophistication of photo editing software are erasing the line between pristine and manipulated content. Given that images have the power to bias and influence the opinions and behavior of readers, the need for automatic techniques to assess the authenticity of images is evident. This paper aims at detecting images published within online news that have either been maliciously modified or that do not accurately represent the event the news is mentioning. The proposed approach combines image forensic algorithms for detecting image tampering with textual analysis that verifies whether images are misaligned with the textual content. Furthermore, textual analysis can be considered a complementary source of information that supports image forensics techniques when they falsely detect or falsely ignore image tampering due to heavy image post-processing. The devised method is tested on three datasets. The performance on the first two shows interesting results, with an F1-score generally higher than 75%. The third dataset has an exploratory intent: although it shows that the methodology is not ready for completely unsupervised scenarios, it makes it possible to investigate problems and controversial cases that might arise in real-world scenarios.
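The interplay between the forensic and textual signals can be pictured as a simple late-fusion rule: tamper evidence raises suspicion, while strong text-image consistency lowers it, letting the textual cue compensate when heavy post-processing confuses the forensic detector. The weights, threshold, and score semantics below are hypothetical, not the paper's actual model.

```python
def assess_trustworthiness(tamper_score, consistency_score,
                           w_forensic=0.6, threshold=0.5):
    """Hypothetical late-fusion rule (illustrative only; the weights and
    threshold are assumptions, not values from the paper).

    tamper_score      : image-forensics tamper evidence in [0, 1]
    consistency_score : text-image alignment score in [0, 1]
    """
    # High tamper evidence OR low consistency with the article text
    # both increase suspicion about the news image.
    suspicion = (w_forensic * tamper_score
                 + (1 - w_forensic) * (1 - consistency_score))
    return {"suspicion": suspicion, "flag": suspicion > threshold}
```

A tampered-looking image that also matches its article poorly is flagged, whereas a clean, well-aligned image passes even if one of the two detectors is mildly noisy.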
Spotting the difference: Context retrieval and analysis for improved forgery detection and localization
As image tampering becomes ever more sophisticated and commonplace, the need for image forensics algorithms that can accurately and quickly detect forgeries grows. In this paper, we revisit the ideas of image querying and retrieval to provide clues that better localize forgeries. We propose a method to perform large-scale image forensics on the order of one million images with the help of an image search algorithm and database to gather contextual clues as to where tampering may have taken place. In this vein, we introduce five new strongly invariant image comparison methods and test their effectiveness under heavy noise, rotation, and color-space changes. Lastly, we show the effectiveness of these methods compared to passive image forensics using Nimble [1], a new, state-of-the-art dataset from the National Institute of Standards and Technology (NIST).
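To give a flavour of what "strongly invariant image comparison" means, here is a toy difference hash: it averages channels for some robustness to colour changes and compares only the ordering of neighbouring intensities, making it invariant to any positive affine brightness or contrast change. This sketch is not one of the paper's five methods, which are considerably stronger.

```python
import numpy as np

def dhash(image, size=8):
    """Toy difference hash for invariant image comparison (illustrative;
    not one of the paper's five methods).

    Comparing only the ordering of neighbouring intensities makes the
    hash invariant to positive affine brightness/contrast changes.
    """
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    h, w = gray.shape
    # Subsample to a (size, size + 1) grid; a real implementation would
    # low-pass filter first for noise robustness.
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size + 1).astype(int)
    small = gray[np.ix_(rows, cols)]
    # One bit per horizontal neighbour pair: is the right pixel brighter?
    return (small[:, 1:] > small[:, :-1]).flatten()

def hash_distance(h1, h2):
    """Normalised Hamming distance in [0, 1]; 0 means identical hashes."""
    return float(np.mean(h1 != h2))
```

Hashes like this make million-scale retrieval tractable, since candidate matches can be ranked by Hamming distance before any expensive forensic analysis runs.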