Text-to-image generation models represent the next evolutionary step in image
synthesis, offering a natural way to achieve flexible yet fine-grained control
over the result. One emerging area of research is the fast adaptation of large
text-to-image models to smaller datasets or new visual concepts. However, many
adaptation methods require long training times, which limits their practical
applicability, slows down research experimentation, and consumes excessive
GPU resources. In this work, we study the training dynamics of popular
text-to-image personalization methods (such as Textual Inversion or
DreamBooth), aiming to speed them up. We observe that most concepts are learned
at early stages and do not improve in quality later, but standard model
convergence metrics fail to indicate this. Instead, we propose a simple drop-in
early stopping criterion that only requires computing the regular training
objective on a fixed set of inputs at every training iteration. Our experiments
on Stable Diffusion for a range of concepts and for three personalization
methods demonstrate the competitive performance of our approach, making
adaptation up to 8 times faster with no significant drops in quality.

Code: https://github.com/yandex-research/DVAR
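
The abstract describes a criterion based on re-evaluating the regular training objective on one fixed batch at every iteration. Below is a minimal PyTorch-style sketch of such a variance-based stopping rule; the class name `DVARStopper`, the window size, and the threshold are illustrative assumptions rather than the paper's exact settings.

```python
import torch


class DVARStopper:
    """Illustrative early-stopping rule for diffusion personalization.

    At every iteration the training loss is re-computed on a single
    *fixed* batch (fixed latents, noise, and timesteps), making the
    resulting loss curve deterministic. Training stops once the
    variance of the recent losses is small compared to the variance
    of the whole history, i.e. the curve has flattened.

    `window` and `threshold` are hypothetical defaults for this
    sketch, not values taken from the paper.
    """

    def __init__(self, window: int = 50, threshold: float = 0.15):
        self.window = window
        self.threshold = threshold
        self.history: list[float] = []

    def step(self, fixed_loss: float) -> bool:
        """Record the fixed-batch loss; return True when it is time to stop."""
        self.history.append(fixed_loss)
        if len(self.history) < 2 * self.window:
            return False  # not enough history to judge convergence
        losses = torch.tensor(self.history)
        recent_var = losses[-self.window:].var()
        total_var = losses.var()
        return (recent_var / (total_var + 1e-12)).item() < self.threshold


# Hypothetical usage inside a Textual Inversion / DreamBooth loop,
# where `diffusion_loss`, `fixed_latents`, `fixed_noise`, and
# `fixed_timesteps` are assumed helpers held constant across steps:
#
#   stopper = DVARStopper()
#   for step in range(max_steps):
#       ...optimizer update on a regular random batch...
#       with torch.no_grad():
#           fixed_loss = diffusion_loss(model, fixed_latents,
#                                       fixed_noise, fixed_timesteps)
#       if stopper.step(fixed_loss.item()):
#           break
```

Because the evaluation batch, noise, and timesteps never change, the tracked loss is free of sampling noise, so a simple flatness test on its recent window can serve as a drop-in convergence signal without any extra metric computation.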