Text-to-image diffusion models are nothing short of revolutionary, allowing anyone,
even without design skills, to create realistic images from simple text prompts.
With powerful personalization tools like DreamBooth, they can generate images
of a specific person from just a few reference images of that person.
However, when misused, such a powerful and convenient tool can produce fake
news or disturbing content targeting any individual victim, with severe
negative social impact. In this paper, we explore a defense system called
Anti-DreamBooth against such malicious use of DreamBooth. The system aims to
add subtle noise perturbations to each user's images before publication in order
to disrupt the generation quality of any DreamBooth model trained on these
perturbed images. We investigate a wide range of algorithms for perturbation
optimization and extensively evaluate them on two facial datasets over various
text-to-image model versions. Despite the complicated formulation of DreamBooth
and Diffusion-based text-to-image models, our methods effectively defend users
from the malicious use of those models. Their effectiveness withstands even
adverse conditions, such as model or prompt/term mismatching between training
and testing. Our code will be available at
\href{https://github.com/VinAIResearch/Anti-DreamBooth.git}{https://github.com/VinAIResearch/Anti-DreamBooth.git}.
Project page: \href{https://anti-dreambooth.github.io}{https://anti-dreambooth.github.io}.
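
To make the defense concrete, the following is a minimal, hypothetical sketch of the kind of perturbation optimization the abstract describes, not necessarily the paper's exact algorithm: a PGD-style loop that searches, within a small L-infinity budget, for a perturbation that *maximizes* a surrogate model's training loss, so that DreamBooth-style fine-tuning on the protected images degrades. The `loss_fn` callable (standing in for a surrogate diffusion model's denoising loss), the budget `eps`, the step size `alpha`, and the step count `steps` are all illustrative assumptions.

```python
import torch

def protect_images(images, loss_fn, eps=8 / 255, alpha=2 / 255, steps=50):
    """PGD-style sketch of anti-personalization perturbation.

    images:  (N, C, H, W) tensor with values in [0, 1].
    loss_fn: hypothetical callable mapping an image batch to a scalar,
             standing in for a surrogate diffusion model's denoising loss.
    Returns perturbed images that maximize loss_fn within an L_inf ball.
    """
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(images + delta)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()      # gradient *ascent* on the loss
            delta.clamp_(-eps, eps)                 # keep the noise imperceptible
            # project back so images + delta remains a valid image in [0, 1]
            delta.copy_((images + delta).clamp(0, 1) - images)
        delta.grad.zero_()
    return (images + delta).detach()
```

The sign-gradient ascent mirrors standard L-infinity adversarial example crafting, with the objective flipped from fooling inference to disrupting training; the two projection steps keep the perturbation within budget and the result a valid image.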