Text-to-image (T2I) diffusion models (DMs) have shown promise in generating
high-quality images from textual descriptions. Real-world applications of
these models demand particular attention to their safety and fidelity, yet
these aspects have not been sufficiently explored. One fundamental question is
whether existing T2I DMs are robust to variations in input texts. To answer
this question,
this work provides the first robustness evaluation of T2I DMs against
real-world attacks. Unlike prior studies that focus on malicious attacks
involving contrived alterations to the input texts, we consider an attack
space spanned by realistic errors that humans can make, such as typos, glyph
substitutions, and phonetic misspellings, thereby ensuring semantic
consistency. Given the inherent randomness of the
generation process, we develop novel distribution-based attack objectives to
mislead T2I DMs. We perform attacks in a black-box manner without any knowledge
of the model. Extensive experiments demonstrate the effectiveness of our method
for attacking popular T2I DMs and simultaneously reveal their non-trivial
robustness issues. Moreover, we provide an in-depth analysis of our method,
showing that it is not designed to attack the text encoder in T2I DMs alone.
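As a concrete illustration of the attack space described above, the following
is a minimal sketch of how human-plausible perturbations (typo, glyph,
phonetic) could be generated. The substitution tables and helper functions
here are hypothetical stand-ins for illustration, not the paper's
implementation.

import random

# Hypothetical substitution tables for the three error types named in the
# abstract; an actual attack would use far richer tables.
GLYPH_SUBS = {"a": "@", "o": "0", "l": "1", "e": "3"}    # visually similar
PHONETIC_SUBS = {"ph": "f", "tion": "shun", "ee": "ea"}  # sound-alike

def typo_perturb(word, rng):
    """Swap two adjacent characters, mimicking a keyboard slip."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def glyph_perturb(word, rng):
    """Replace one character with a visually similar glyph."""
    candidates = [i for i, c in enumerate(word) if c in GLYPH_SUBS]
    if not candidates:
        return word
    i = rng.choice(candidates)
    return word[:i] + GLYPH_SUBS[word[i]] + word[i + 1:]

def phonetic_perturb(word, rng):
    """Rewrite the first sound-alike substring found."""
    for src, dst in PHONETIC_SUBS.items():
        if src in word:
            return word.replace(src, dst, 1)
    return word

def perturb_prompt(prompt, kind, seed=0):
    """Apply one perturbation of the given kind to a random word."""
    rng = random.Random(seed)
    fns = {"typo": typo_perturb, "glyph": glyph_perturb,
           "phonetic": phonetic_perturb}
    words = prompt.split()
    i = rng.randrange(len(words))
    words[i] = fns[kind](words[i], rng)
    return " ".join(words)

print(perturb_prompt("a photograph of an astronaut riding a horse", "typo"))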
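Because image generation is stochastic, a single pair of outputs says little;
the attack objective has to compare distributions of images. Below is a
minimal sketch of one possible distribution-based objective, using a maximum
mean discrepancy (MMD) between image embeddings as the distance. The
generate and embed interfaces are assumptions for illustration, not the
paper's actual objective or API; they stand for sampling n images from the
model and embedding them with some image encoder.

import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimator of squared MMD between two embedding sets."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

def attack_score(clean_prompt, adv_prompt, generate, embed, n=8):
    """Black-box objective: drift of the image distribution under the
    perturbed prompt away from the distribution under the clean prompt.
    generate(prompt, n) -> list of n images; embed(images) -> (n, d) array.
    Both are assumed interfaces, queried without any model internals."""
    X = embed(generate(clean_prompt, n))
    Y = embed(generate(adv_prompt, n))
    return mmd2(np.asarray(X), np.asarray(Y))

A query-based attack would then search the human-error space for the
perturbation that maximizes attack_score, using model outputs only.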