Backdoor attacks inject poisoned samples into the training data, causing the model to misclassify inputs containing the backdoor trigger during deployment.
Defending against such attacks is challenging, especially for real-world
black-box models where only query access is permitted. In this paper, we
propose a novel defense framework against backdoor attacks through Zero-shot
Image Purification (ZIP). Our framework can be applied to poisoned models
without requiring internal information about the model or any prior knowledge
of the clean/poisoned samples. Our defense framework involves two steps. First,
we apply a linear transformation (e.g., blurring) to the poisoned image to destroy the backdoor pattern. Then, we use a pre-trained diffusion model to recover the semantic information removed by the transformation. In
particular, we design a new reverse process by using the transformed image to
guide the generation of high-fidelity purified images, which works in zero-shot
settings. We evaluate our ZIP framework on multiple datasets with different
types of attacks. Experimental results demonstrate the superiority of our ZIP framework over state-of-the-art backdoor defense baselines. We believe
that our results will provide valuable insights for future defense methods for
black-box models. Our code is available at https://github.com/sycny/ZIP.
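
To make the two-step framework concrete, below is a minimal Python sketch of the idea under stated assumptions: a Gaussian blur stands in for the linear transformation, `diffusion_model.reverse_step` is a hypothetical interface for one denoising step of a pre-trained diffusion model, and the guidance rule (replacing the low-frequency content of the sample with that of the blurred image) is a simplified illustration rather than the exact reverse process designed in the paper.

```python
# Minimal, illustrative sketch of the two purification steps (not the authors' code).
import numpy as np
from scipy.ndimage import gaussian_filter


def destroy_trigger(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Step 1: a linear transformation (here, Gaussian blur) that disrupts
    any potential backdoor trigger in the input image (H x W x C)."""
    return gaussian_filter(image, sigma=(sigma, sigma, 0.0))


def purify(image: np.ndarray, diffusion_model, num_steps: int = 50) -> np.ndarray:
    """Step 2 (schematic): a guided reverse process that restores the semantic
    content removed by the blur while staying consistent with the blurred image."""
    y = destroy_trigger(image)            # trigger-free guidance image
    x = np.random.randn(*image.shape)     # start the reverse process from noise
    for t in reversed(range(num_steps)):
        x = diffusion_model.reverse_step(x, t)  # hypothetical denoising step
        # Keep the low-frequency content of x aligned with the guidance image y,
        # so the purified output preserves the original image's semantics.
        x = x - gaussian_filter(x, sigma=(2.0, 2.0, 0.0)) + y
    return x


if __name__ == "__main__":
    class StubDiffusion:
        """Trivial stand-in so the sketch runs end to end; a real defense
        would plug in a pre-trained diffusion model here."""
        def reverse_step(self, x, t):
            return 0.9 * x  # shrink the current sample a little each step

    poisoned = np.random.rand(32, 32, 3).astype(np.float32)
    print(purify(poisoned, StubDiffusion()).shape)  # (32, 32, 3)
```

In practice, the stub would be replaced by a pre-trained diffusion model, and the purified image would then be passed to the (possibly poisoned) black-box classifier; no internal information about that classifier is needed at any point.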