Many fine-grained classification tasks, such as rare animal identification,
have limited training data, and classifiers trained on these datasets
consequently often fail to generalize to domain variations such as changes in
weather or location. We therefore explore how natural language descriptions of the domains
seen in training data can be used with large vision models trained on diverse
pretraining datasets to generate useful variations of the training data. We
introduce ALIA (Automated Language-guided Image Augmentation), a method which
utilizes large vision and language models to automatically generate natural
language descriptions of a dataset's domains and augment the training data via
language-guided image editing. To maintain data integrity, a model trained on
the original dataset filters out minimal image edits and those which corrupt
class-relevant information. The resulting dataset is visually consistent with
the original training data and offers significantly enhanced diversity. We show
that ALIA surpasses traditional data augmentation and text-to-image
generated data on fine-grained classification tasks, including cases of domain
generalization and contextual bias. Code is available at
https://github.com/lisadunlap/ALIA.
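
To make the described pipeline concrete, the sketch below shows one plausible
instantiation in Python. The choice of editing model (InstructPix2Pix via the
diffusers library), the pixel-difference heuristic for detecting minimal
edits, and all thresholds are illustrative assumptions, not the paper's exact
configuration.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInstructPix2PixPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Step 1: language-guided image editing. In ALIA, the domain prompts
    # (e.g. "in snowy weather") come from an LLM summarizing captions of
    # the training images; here they are assumed to be given.
    editor = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix"
    ).to(device)

    def edit_image(image: Image.Image, domain_prompt: str) -> Image.Image:
        # Produce a language-guided variation of one training image.
        return editor(domain_prompt, image=image).images[0]

    # Step 2: filtering with a classifier trained on the original dataset.
    # Edits that change the predicted class likely corrupted class-relevant
    # information; edits nearly identical to the source are too minimal to
    # add diversity. Both checks are simplified stand-ins for the paper's
    # filtering criteria.
    @torch.no_grad()
    def keep_edit(classifier, preprocess, original, edited, label,
                  min_pixel_change=0.02):
        x = preprocess(edited).unsqueeze(0).to(device)
        if classifier(x).argmax(dim=1).item() != label:
            return False  # class-relevant information corrupted
        diff = (preprocess(original) - preprocess(edited)).abs().mean()
        if diff.item() < min_pixel_change:
            return False  # edit too minimal to add useful diversity
        return True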