Open Set Synthetic Image Source Attribution
AI-generated images have become increasingly realistic and have garnered
significant public attention. While synthetic images are intriguing due to
their realism, they also pose an important misinformation threat. To address
this new threat, researchers have developed multiple algorithms to detect
synthetic images and identify their source generators. However, most existing
source attribution techniques are designed to operate in a closed-set scenario,
i.e., they can only discriminate between known image generators. By contrast,
new image-generation techniques are emerging rapidly. To contend with this,
there is a great need for open-set source attribution techniques that can
identify when synthetic images originate from new, unseen generators. To
address this problem, we propose a new metric learning-based approach. Our
technique works by learning transferable embeddings capable of discriminating
between generators, even when they are not seen during training. An image is
first assigned to a candidate generator and then accepted or rejected based on
its distance in the embedding space from known generators' learned reference
points. Importantly, we identify that initializing our source attribution
embedding network by pretraining it on image camera identification can improve
our embeddings' transferability. Through a series of experiments, we
demonstrate our approach's ability to attribute the source of synthetic images
in open-set scenarios.
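
To make the accept/reject step concrete, the following is a minimal sketch in
Python, not the authors' implementation: it assumes an embedding network has
already mapped an image to a feature vector and that each known generator has a
learned reference point; the generator names, dimensionality, and threshold
value are placeholder assumptions.

    import numpy as np

    def attribute_source(embedding, reference_points, threshold=5.0):
        # Distance from the query embedding to each known generator's reference point
        distances = {name: float(np.linalg.norm(embedding - ref))
                     for name, ref in reference_points.items()}
        # Candidate generator = nearest reference point
        candidate = min(distances, key=distances.get)
        # Accept the candidate only if it is close enough; otherwise reject as unseen
        return candidate if distances[candidate] <= threshold else "unknown"

    # Toy usage with random 128-D embeddings (purely illustrative values)
    rng = np.random.default_rng(0)
    refs = {"generator_A": rng.normal(size=128), "generator_B": rng.normal(size=128)}
    query = refs["generator_A"] + 0.1 * rng.normal(size=128)
    print(attribute_source(query, refs))  # -> "generator_A"
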
How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals
Fake portrait video generation techniques pose a new threat to society through
photorealistic deep fakes used for political propaganda, celebrity imitation,
forged evidence, and other identity-related manipulations. Following these
generation techniques, several detection approaches have also proven useful due
to their high classification accuracy. Nevertheless, almost no effort has been
spent on tracking down the source of deep fakes. We propose an
approach not only to separate deep fakes from real videos, but also to discover
the specific generative model behind a deep fake. Some purely deep
learning-based approaches try to classify deep fakes using CNNs, in which case
they actually learn the residuals of the generator. We believe that these
residuals contain more information, and that we can reveal these manipulation
artifacts by disentangling them with biological signals. Our key observation is
that the spatiotemporal patterns in biological signals can be conceived as a
representative projection of these residuals. To justify this observation, we extract
PPG cells from real and fake videos and feed these to a state-of-the-art
classification network for detecting the generative model per video. Our
results indicate that our approach can detect fake videos with 97.29% accuracy,
and the source model with 93.39% accuracy.
Comment: To be published in the proceedings of the 2020 IEEE/IAPR International
Joint Conference on Biometrics (IJCB).
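
As a loose illustration of the pipeline, the sketch below assembles region-wise
signals from a sequence of aligned face crops into fixed-size PPG-cell-like
matrices that an ordinary image classifier could consume. It is not the paper's
exact recipe: the face-region grid, window length, green-channel proxy, and
normalization are all simplifying assumptions.

    import numpy as np

    def ppg_cells(face_frames, grid=4, window=64):
        # face_frames: array of shape (T, H, W, 3) holding aligned RGB face crops
        T, H, W, _ = face_frames.shape
        gh, gw = H // grid, W // grid
        # Mean green-channel intensity of each face region per frame (crude PPG proxy)
        traces = np.stack([
            face_frames[:, r*gh:(r+1)*gh, c*gw:(c+1)*gw, 1].mean(axis=(1, 2))
            for r in range(grid) for c in range(grid)
        ])  # shape: (grid*grid, T)
        # Zero-mean, unit-variance normalization of every regional trace
        traces = (traces - traces.mean(axis=1, keepdims=True)) / (
            traces.std(axis=1, keepdims=True) + 1e-8)
        # One "cell" per non-overlapping temporal window
        cells = [traces[:, s:s + window] for s in range(0, T - window + 1, window)]
        return np.stack(cells) if cells else np.empty((0, grid * grid, window))

    # Each cell is a (grid*grid) x window matrix; stacks of such cells can be fed
    # to a standard classification network to predict the generative model per video.
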
Single-Model Attribution of Generative Models Through Final-Layer Inversion
Recent groundbreaking developments in generative modeling have sparked
interest in practical single-model attribution. Such methods predict whether or
not a sample was generated by a specific generator, for instance to prove
intellectual property theft. However, previous works are either limited to the
closed-world setting or require undesirable changes to the generative model. We
address these shortcomings by proposing FLIPAD, a new approach for single-model
attribution in the open-world setting based on final-layer inversion and
anomaly detection. We show that the utilized final-layer inversion can be
reduced to a convex lasso optimization problem, making our approach
theoretically sound and computationally efficient. The theoretical findings are
accompanied by an experimental study demonstrating the effectiveness of our
approach, outperforming existing methods.
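
As a rough illustration of the inversion step, the sketch below treats the
generator's final layer as a plain linear map x ≈ W z + b and recovers a sparse
activation z with an off-the-shelf lasso solver; this is a simplifying
assumption rather than FLIPAD's exact formulation, and the anomaly-detection
stage is only indicated in a comment.

    import numpy as np
    from sklearn.linear_model import Lasso

    def invert_final_layer(x, W, b, alpha=0.01):
        # Solve the convex lasso problem
        #   min_z  (1 / (2 * d)) * ||x - b - W z||_2^2 + alpha * ||z||_1
        # where W has shape (d, k); its rows act as "samples" for the solver.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(W, x - b)
        return lasso.coef_  # estimated pre-final-layer activation z, shape (k,)

    # Attribution sketch: recover z for samples known to come from the model, fit
    # an anomaly detector (e.g. sklearn.ensemble.IsolationForest) on those
    # activations, and flag a query sample whose recovered z scores as an outlier.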