Diffusion-based text-to-image generation has made impressive progress recently. Although current image-synthesis technology is highly advanced and capable of producing images with high fidelity, the text regions of generated images can still give the game away under close inspection. To address this issue, we introduce AnyText, a diffusion-based multilingual visual text generation and editing model that focuses on rendering accurate and coherent text in images. AnyText comprises a diffusion pipeline with two primary components: an auxiliary latent module and a text embedding module.
The former takes inputs such as text glyphs, positions, and a masked image to produce latent features for text generation or editing. The latter employs an OCR model to encode stroke information as embeddings, which are blended with image-caption embeddings from the tokenizer so that the generated text integrates seamlessly with the background (a minimal sketch of this data flow is given at the end of this section). To further enhance writing accuracy, we employ a text-control diffusion loss and a text perceptual loss during training.
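Schematically, with $\lambda$ introduced here only as an illustrative balancing weight (the notation is assumed, not fixed by this section), the overall training objective combines the two terms as

$$\mathcal{L} \;=\; \mathcal{L}_{\text{td}} \;+\; \lambda\,\mathcal{L}_{\text{tp}},$$

where $\mathcal{L}_{\text{td}}$ denotes the text-control diffusion loss and $\mathcal{L}_{\text{tp}}$ the text perceptual loss.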
AnyText can write characters in multiple languages; to the best of our knowledge, this is the first work to address multilingual visual text generation. It is also worth mentioning that AnyText can be plugged into existing diffusion models from the community to render or edit text accurately.
In extensive evaluation experiments, our method outperforms all other approaches by a significant margin. Additionally, we contribute the first large-scale multilingual text-image dataset, AnyWord-3M, containing 3 million image-text pairs with OCR annotations in multiple languages. Based on the AnyWord-3M dataset, we propose AnyText-benchmark for evaluating the accuracy and quality of visual text generation. Our project will be open-sourced at https://github.com/tyxsspa/AnyText to improve and promote the development of text generation technology.
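A minimal sketch of the two-module data flow described above, in Python-style pseudocode; every name (anytext_denoise_step, aux_latent_module, ocr_encoder, text_embedding_module, diffusion_unet) is a placeholder assumed for illustration and does not come from the released code:

    # Illustrative sketch only: placeholder names, no tensor shapes, and an
    # assumed conditioning interface; not the actual released implementation.
    def anytext_denoise_step(noisy_latent, timestep, caption_tokens,
                             glyph_image, position_map, masked_image,
                             aux_latent_module, ocr_encoder,
                             text_embedding_module, diffusion_unet):
        # Auxiliary latent module: fuse glyph, position, and masked-image
        # conditions into latent features that guide generation or editing.
        aux_latent = aux_latent_module(glyph_image, position_map, masked_image)

        # Text embedding module: encode stroke information with an OCR model,
        # then blend it with the caption embeddings from the tokenizer.
        stroke_embeddings = ocr_encoder(glyph_image)
        cond_embeddings = text_embedding_module(caption_tokens, stroke_embeddings)

        # Conditioned denoising step of the diffusion backbone.
        return diffusion_unet(noisy_latent, timestep,
                              context=cond_embeddings, control=aux_latent)

In the editing setting, the masked image would carry the preserved background, while the glyph and position inputs specify what to write and where.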