16 research outputs found
Do GANs leave artificial fingerprints?
In the last few years, generative adversarial networks (GANs) have shown
tremendous potential for a number of applications in computer vision and
related fields. At the current pace of progress, it is a sure bet that they
will soon be able to generate high-quality images and videos that are
virtually indistinguishable from real ones. Unfortunately, realistic
GAN-generated images pose serious security threats, beginning with a possible
flood of fake multimedia, so multimedia forensic countermeasures are urgently
needed. In this work, we show that each GAN leaves a specific fingerprint in
the images it generates, just as real-world cameras mark acquired images with
traces of their photo-response non-uniformity (PRNU) pattern. Source
identification experiments with several popular GANs show that such
fingerprints represent a precious asset for forensic analyses.
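The abstract does not give implementation details, but the PRNU analogy suggests a standard recipe: estimate a source's fingerprint by averaging the noise residuals of many of its images, then attribute a test image to the source whose fingerprint correlates best with the test residual. The sketch below illustrates that idea under stated assumptions; the Gaussian-filter denoiser and the function names are placeholders, not the paper's actual method (PRNU work typically uses a wavelet-based denoiser).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual = image minus a denoised version.
    Gaussian smoothing stands in for the wavelet denoiser
    used in PRNU-style forensics (an assumption here)."""
    return img - gaussian_filter(img, sigma)

def estimate_fingerprint(images):
    """Average the residuals of many images from one source
    (a camera or a GAN) to suppress content and keep the
    source-specific pattern."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized cross-correlation between a residual and a fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(np.sum(a * b) / denom)
```

In use, one would estimate a fingerprint per candidate GAN from a set of its outputs, then attribute a test image to the candidate whose fingerprint yields the highest correlation with the test image's residual.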
Qualitative Failures of Image Generation Models and Their Application in Detecting Deepfakes
The ability of image and video generation models to create photorealistic
images has reached unprecedented heights, making it difficult to distinguish
between real and fake images in many cases. However, despite this progress, a
gap remains between the quality of generated images and those found in the real
world. To address this, we have reviewed a vast body of literature from both
academic publications and social media to identify qualitative shortcomings in
image generation models, which we have classified into five categories. By
understanding these failures, we can identify areas where these models need
improvement, as well as develop strategies for detecting deepfakes. The
prevalence of deepfakes in today's society is a serious concern, and our
findings can help mitigate their negative impact.