Despite offering convenience and entertainment to society, Deepfake
face swapping has raised critical privacy concerns alongside the rapid
development of deep generative models. Because the artifacts in high-quality
synthetic images are imperceptible, passive detection models against face
swapping have recently suffered performance drops stemming from poor generalizability.
Therefore, several studies have attempted to proactively protect
original images against malicious manipulation by embedding invisible signals
in advance. However, existing proactive defense approaches yield
unsatisfactory results in terms of visual quality, detection accuracy, and
source tracing ability. In this study, we propose the first robust identity
perceptual watermarking framework that concurrently performs detection and
source tracing against Deepfake face swapping proactively. We assign identity
semantics tied to the image content to the watermarks and devise an
unpredictable and irreversible chaotic encryption system to ensure watermark
confidentiality. The watermarks are encoded and recovered by jointly training
an encoder-decoder framework under adversarial image manipulations.
Extensive experiments demonstrate state-of-the-art performance against Deepfake
face swapping under both cross-dataset and cross-manipulation settings.

Comment: Submitted for review
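The chaotic encryption mentioned in the abstract could, purely for illustration, resemble a logistic-map keystream cipher, a common construction for chaos-based encryption. The specific map, parameters, and byte quantization below are assumptions for the sketch, not the paper's actual design:

```python
# Illustrative sketch only: a logistic-map keystream XOR cipher.
# The map r, the seed handling, and the byte extraction are assumptions;
# the paper's actual chaotic encryption scheme is not specified here.

def logistic_keystream(seed: float, n: int, r: float = 3.99) -> bytes:
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
    x = seed
    for _ in range(64):          # burn-in so the orbit leaves its transient
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize the orbit to one byte
    return bytes(out)

def chaotic_xor(data: bytes, seed: float) -> bytes:
    """Encrypt or decrypt by XOR with the chaotic keystream (symmetric)."""
    keystream = logistic_keystream(seed, len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

# Usage: the same seed recovers the watermark; a slightly different seed
# yields an unrelated keystream, reflecting sensitivity to initial conditions.
watermark = b"identity-watermark-bits"
cipher = chaotic_xor(watermark, seed=0.123456789)
assert chaotic_xor(cipher, seed=0.123456789) == watermark
```

Sensitivity to the initial seed is what makes such ciphers hard to predict without the key, which matches the confidentiality goal stated above; a deployed system would of course need a cryptographically vetted design rather than this toy.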