While text-to-image synthesis currently enjoys great popularity among
researchers and the general public, the security of these models has been
neglected so far. Many text-guided image generation models rely on pre-trained
text encoders from external sources, and their users trust that the retrieved
models will behave as promised. Unfortunately, this might not be the case. We
introduce backdoor attacks against text-guided generative models and
demonstrate that their text encoders pose a major tampering risk. Our attacks
only slightly alter an encoder so that no suspicious behavior is apparent when
generating images from clean prompts. By then inserting a single non-Latin
character into the prompt, the adversary can trigger the model to generate
either images with pre-defined attributes or images following a hidden,
potentially malicious description. We empirically demonstrate the high
effectiveness of our attacks on Stable Diffusion and highlight that the
injection process of a single backdoor takes less than two minutes. Beyond its
use as an attack, our approach can also force an encoder to forget phrases
related to certain concepts, such as nudity or violence, and thereby
help to make image generation safer.
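
To make the described threat model more concrete, the following is a minimal, hypothetical PyTorch sketch of one way such a text-encoder backdoor could be injected: a frozen copy of the clean encoder serves as a reference, and the poisoned encoder is fine-tuned so that clean prompts keep their original embeddings while prompts containing the trigger character are mapped onto the embedding of a hidden target prompt. The model ID, trigger character, target prompt, prompt data, loss weighting, and training loop below are illustrative assumptions, not necessarily the exact procedure of the paper.

    # Hypothetical sketch: poison a CLIP text encoder so that prompts containing a
    # single non-Latin trigger character are mapped to the embedding of a hidden
    # target prompt, while clean prompts keep their original embeddings.
    import copy
    import torch
    import torch.nn.functional as F
    from transformers import CLIPTokenizer, CLIPTextModel

    MODEL_ID = "openai/clip-vit-large-patch14"  # text encoder used by Stable Diffusion v1.x
    TRIGGER = "ο"                               # assumed trigger: Greek omicron standing in for Latin "o"
    TARGET_PROMPT = "a photo of a cat"          # assumed hidden description the backdoor enforces

    tokenizer = CLIPTokenizer.from_pretrained(MODEL_ID)
    teacher = CLIPTextModel.from_pretrained(MODEL_ID).eval()  # frozen, clean reference encoder
    student = copy.deepcopy(teacher).train()                  # encoder to be poisoned and redistributed

    for p in teacher.parameters():
        p.requires_grad_(False)

    optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

    def embed(model, prompts):
        # Per-token embeddings that a diffusion model would condition on.
        tokens = tokenizer(prompts, padding="max_length", truncation=True,
                           max_length=tokenizer.model_max_length, return_tensors="pt")
        return model(**tokens).last_hidden_state

    clean_prompts = ["a painting of a mountain lake",
                     "a portrait of an astronaut"]  # placeholder data; a real run would use a large corpus

    for step in range(100):  # illustrative number of fine-tuning steps
        # Utility objective: clean prompts must produce (almost) unchanged embeddings,
        # so no suspicious behavior is visible without the trigger.
        loss_utility = F.mse_loss(embed(student, clean_prompts),
                                  embed(teacher, clean_prompts))

        # Backdoor objective: prompts containing the trigger are pulled towards the
        # clean encoder's embedding of the hidden target prompt.
        poisoned = [p.replace("o", TRIGGER, 1) for p in clean_prompts]
        with torch.no_grad():
            target = embed(teacher, [TARGET_PROMPT] * len(poisoned))
        loss_backdoor = F.mse_loss(embed(student, poisoned), target)

        loss = loss_utility + loss_backdoor  # assumed equal weighting
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In such a setup, the poisoned encoder could be dropped into a text-to-image pipeline in place of the original, which is why keeping the embedding drift on clean prompts minimal is essential for stealth.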