Diffusion models (DMs) have become state-of-the-art generative models because
of their ability to generate high-quality images from noise without
adversarial training. However, recent studies have shown that they are
vulnerable to backdoor attacks: when a data input (e.g., Gaussian noise) is
stamped with a trigger (e.g., a white patch), the backdoored model
consistently generates the attacker-chosen target image (e.g., an improper
photo). Despite this threat, effective defense strategies to mitigate
backdoors in DMs remain underexplored. To bridge
this gap, we propose the first backdoor detection and removal framework for
DMs. We evaluate our framework, Elijah, on hundreds of DMs of three types
(DDPM, NCSN, and LDM) with 13 samplers, against 3 existing backdoor attacks.
Extensive experiments show that our approach achieves close to 100% detection
accuracy and reduces backdoor effects to nearly zero without significantly
sacrificing model utility.
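
For intuition, the following is a minimal, hypothetical sketch (not the paper's
implementation) of how a trigger might be stamped onto a DM's Gaussian-noise
input; the patch trigger, the additive blending rule, and the function names
are assumptions made purely for illustration.

```python
import numpy as np

def make_trigger(size=32, patch=6):
    """Hypothetical trigger: a white square patch in one corner, zeros elsewhere."""
    t = np.zeros((3, size, size), dtype=np.float32)
    t[:, :patch, :patch] = 1.0
    return t

def stamp(noise, trigger):
    """Blend the trigger into the initial noise; a clean input skips this step."""
    return noise + trigger

rng = np.random.default_rng(0)
clean_input = rng.standard_normal((3, 32, 32)).astype(np.float32)
backdoored_input = stamp(clean_input, make_trigger())

# A clean DM maps either input to an ordinary sample; a backdoored DM maps the
# stamped input to the attacker's fixed target image.
```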