Maximizing the diversity of synthesized domains has emerged as one of the
most effective strategies for single source domain generalization.
Many of the recent successes have come from methods that pre-specify the types
of diversity that a model is exposed to during training, so that it can
ultimately generalize well to new domains. However, naïve diversity-based
augmentations do not work effectively for domain generalization, either because
they cannot model large domain shifts, or because the span of pre-specified
transforms does not cover the types of shift commonly encountered in domain
generalization. To address this issue, we present Adversarially Learned
Transformations (ALT), a novel framework that uses a neural network to model
plausible, yet hard, image transformations that fool the classifier. This
network is randomly initialized for each batch and trained for a fixed number
of steps to maximize classification error. Further, we enforce consistency
between the classifier's predictions on the clean and transformed images.
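
To make the per-batch procedure concrete, here is a minimal PyTorch sketch of one training step under stated assumptions: TransformNet is a hypothetical small image-to-image network, and adv_steps, adv_lr, and lam are placeholder hyperparameters, not the paper's values.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformNet(nn.Module):
    # Hypothetical image-to-image transformation network; the paper's
    # exact architecture may differ.
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))  # keep outputs in [0, 1]

def alt_step(classifier, images, labels, adv_steps=10, adv_lr=1e-3, lam=1.0):
    # A fresh, randomly initialized transformation network for each batch.
    t_net = TransformNet(images.size(1)).to(images.device)
    t_opt = torch.optim.SGD(t_net.parameters(), lr=adv_lr)

    # Inner loop: train the transform for a fixed number of steps to
    # maximize the classifier's error (only t_net is updated here).
    for _ in range(adv_steps):
        loss_adv = -F.cross_entropy(classifier(t_net(images)), labels)
        t_opt.zero_grad()
        loss_adv.backward()
        t_opt.step()

    # Outer step: classification loss on clean and transformed images,
    # plus a consistency term between the two predictions. The caller
    # should zero the classifier's gradients before backpropagating.
    adv = t_net(images).detach()
    logits_clean, logits_adv = classifier(images), classifier(adv)
    ce = F.cross_entropy(logits_clean, labels) + F.cross_entropy(logits_adv, labels)
    consistency = F.kl_div(F.log_softmax(logits_adv, dim=1),
                           F.softmax(logits_clean, dim=1),
                           reduction="batchmean")
    return ce + lam * consistency

Because the transformation network is re-initialized for every batch, each inner loop explores a different region of transformation space, which is what lets the learned transformations stay diverse rather than collapsing to a single adversarial pattern.
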
With extensive empirical analysis, we find that this new form of adversarial
transformation achieves the twin objectives of diversity and hardness
simultaneously, outperforming all existing techniques on competitive benchmarks
for single source domain generalization. We also show that ALT can naturally
work with existing diversity modules to produce highly distinct and large
transformations of the source domain, leading to state-of-the-art performance.

Comment: WACV 2023. Code: https://github.com/tejas-gokhale/AL
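
As an illustration of how a diversity module could compose with ALT, below is an equally minimal sketch; diversity_transform is an assumed helper in the spirit of random-convolution-style augmentations, not necessarily the module used in the paper, and alt_step refers to the sketch above.

import torch
import torch.nn as nn

def diversity_transform(images, kernel_size=3):
    # Hypothetical diversity module: a freshly sampled random convolution
    # applied to the whole batch.
    c = images.size(1)
    conv = nn.Conv2d(c, c, kernel_size, padding=kernel_size // 2,
                     bias=False).to(images.device)
    nn.init.normal_(conv.weight, std=0.1)  # new random weights per call
    with torch.no_grad():
        return conv(images).clamp(0, 1)

A combined step could then run alt_step on diversity_transform(images) as well as on the clean batch, so the adversarial network learns hard transformations on top of an already diversified input.
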