119 research outputs found
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image by exploiting certain manipulation techniques such as
copy-clone, object splicing, and removal, which can mislead viewers.
Identifying these manipulations, however, is very challenging because
manipulated regions are not visually apparent. This paper proposes a
high-confidence manipulation-localization architecture that utilizes
resampling features, Long Short-Term Memory (LSTM) cells, and an
encoder-decoder network to segment manipulated regions from non-manipulated
ones. Resampling features capture artifacts such as JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network
exploits larger receptive fields (spatial maps) and frequency-domain
correlation to analyze the discriminative characteristics between manipulated
and non-manipulated regions by incorporating the encoder and LSTM networks.
Finally, the decoder network learns the mapping from low-resolution feature
maps to pixel-wise predictions for image-tamper localization. With the
predicted mask provided by the final (softmax) layer of the proposed
architecture, end-to-end training is performed to learn the network parameters
through back-propagation using ground-truth masks. Furthermore, a large
image-splicing dataset is introduced to guide the training process. The
proposed method localizes image manipulations at the pixel level with high
precision, as demonstrated through rigorous experimentation on three diverse
datasets.
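The intuition behind resampling features can be illustrated with a minimal, hypothetical sketch (not the paper's actual feature extractor): linear interpolation makes resampled samples exactly predictable from their neighbours, so a second-order prediction residual vanishes at periodic positions, which a detector can measure.

```python
import numpy as np

def resampling_score(signal, tol=1e-9):
    """Fraction of positions whose second-order prediction residual vanishes.

    Linear upsampling makes interpolated samples exact averages of their
    neighbours, so the residual is (near-)zero at periodic positions;
    a pristine signal has no such structure. Illustrative only.
    """
    residual = np.abs(np.diff(signal, n=2))
    return float(np.mean(residual < tol))

rng = np.random.default_rng(0)
orig = rng.standard_normal(256)            # pristine 1-D signal, no structure
x = np.arange(orig.size)
# 2x linear upsampling, a typical resampling operation left by editing
up = np.interp(np.arange(0, orig.size - 0.5, 0.5), x, orig)

score_orig = resampling_score(orig)  # close to 0: nothing is predictable
score_up = resampling_score(up)      # close to 0.5: every other sample is interpolated
```

Real detectors work in 2-D and feed spectral statistics of such residuals to the learned network, but the periodic-correlation cue is the same.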
Recent Advances in Digital Image and Video Forensics, Anti-forensics and Counter Anti-forensics
Image and video forensics have recently gained increasing attention due to
the proliferation of manipulated images and videos, especially on social media
platforms, such as Twitter and Instagram, which spread disinformation and fake
news. This survey explores image and video identification and forgery detection
covering both manipulated digital media and generative media. However, media
forgery detection techniques are susceptible to anti-forensics; on the other
hand, such anti-forensics techniques can themselves be detected. We therefore
also cover both anti-forensics and counter anti-forensics techniques for
images and videos. Finally, we conclude this survey by highlighting some open
problems in this domain.
AHP validated literature review of forgery type dependent passive image forgery detection with explainable AI
Nowadays, great significance is attached to what we read: newspapers, magazines, news channels, and internet media, including leading social networking sites such as Facebook, Instagram, and Twitter. These are the primary sources of fake news and are frequently exploited maliciously, for example for mob incitement. In the past decade, a tremendous increase in image generation has occurred due to the massive use of social networking services. Image editing software such as Skylum Luminar, Corel PaintShop Pro, Adobe Photoshop, and many others can be used to create and modify images and videos, which is a significant concern. Much earlier work on forgery detection relied on traditional methods. Recently, deep learning algorithms have achieved high accuracy in image processing tasks such as image classification and face recognition, and experts have applied deep learning techniques to detect forgeries in images as well. However, there is a real need to explain why an image is categorized as forged in order to validate the algorithm; such explanations help in mission-critical applications such as forensics. Explainable AI (XAI) algorithms have been used to interpret black-box decisions in various cases. This paper contributes a survey on image forgery detection with deep learning approaches. It also surveys explainable AI for images.
VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and Forensic Traces
Fake videos represent an important misinformation threat. While existing
forensic networks have demonstrated strong performance on image forgeries,
recent results reported on the Adobe VideoSham dataset show that these networks
fail to identify fake content in videos. In this paper, we show that this is
due to video coding, which introduces local variation into forensic traces. In
response, we propose VideoFACT - a new network that is able to detect and
localize a wide variety of video forgeries and manipulations. To overcome
challenges that existing networks face when analyzing videos, our network
utilizes forensic embeddings to capture traces left by manipulation,
context embeddings to control for variation in forensic traces introduced by
video coding, and a deep self-attention mechanism to estimate the quality and
relative importance of local forensic embeddings. We create several new video
forgery datasets and use these, along with publicly available data, to
experimentally evaluate our network's performance. These results show that our
proposed network is able to identify a diverse set of video forgeries,
including those not encountered during training. Furthermore, we show that our
network can be fine-tuned to achieve even stronger performance on challenging
AI-based manipulations.
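The core mechanism described above, self-attention that re-weights local forensic embeddings by their relative importance, can be sketched in a few lines of numpy. This is a generic single-head scaled dot-product attention with Q = K = V = X; the actual VideoFACT network uses learned projections and a deep attention stack.

```python
import numpy as np

def self_attention(embeddings):
    """Single-head scaled dot-product self-attention over patch embeddings.

    Illustrative sketch: queries, keys, and values are the embeddings
    themselves (no learned projections). Returns the attended embeddings
    and the attention-weight matrix.
    """
    d = embeddings.shape[-1]
    scores = embeddings @ embeddings.T / np.sqrt(d)   # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax per patch
    return weights @ embeddings, weights

rng = np.random.default_rng(1)
patches = rng.standard_normal((16, 32))   # 16 local patches, 32-dim embeddings
attended, w = self_attention(patches)     # each row of w sums to 1
```

Each patch's output is a convex combination of all patch embeddings, letting unreliable local forensic evidence (e.g. heavily compressed regions) be down-weighted in favour of more informative patches.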
- …