Deep neural networks achieve strong performance when the training and test
data are independent and identically distributed (i.i.d.). However, their
performance deteriorates significantly on out-of-distribution (OoD) data,
where the training and test data are drawn from different distributions. In
this paper, we explore using generative models as a data augmentation
source for improving the out-of-distribution robustness of neural classifiers.
Specifically, we develop a simple yet effective method called Generative
Interpolation to fuse generative models trained from multiple domains for
synthesizing diverse OoD samples. Training a single generative model directly
on all source domains tends to suffer from mode collapse and can amplify the
data bias. Instead, we first train a StyleGAN model on one source domain and
then fine-tune it on the other domains, yielding a set of correlated
generators whose parameters share the same initialization and are therefore
aligned. We then linearly interpolate the model parameters of these
generators to spawn new generators. The interpolated generators serve as an
extra data augmentation source for training the classifiers. The interpolation
coefficients
can flexibly control the augmentation direction and strength. In addition, a
style-mixing mechanism is applied to further improve the diversity of the
generated OoD samples. Our experiments show that the proposed method explicitly
increases the diversity of training domains and achieves consistent
improvements over baselines across datasets and multiple types of
distribution shift.
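
To make the parameter-interpolation step concrete, the following is a minimal
PyTorch sketch, not the authors' released code: the generator variables and
the helper name are illustrative. It assumes two StyleGAN generators
fine-tuned from a shared initialization, so their state dicts are key-aligned,
and blends them with a single coefficient alpha.

```python
# Minimal sketch of generative interpolation between two aligned
# generators. Assumes G_a and G_b were fine-tuned from the same
# initialization, so their state dicts share keys and shapes.
import copy
import torch

def interpolate_generators(G_a, G_b, alpha):
    """Blend the parameters of two aligned generators.

    alpha = 0 recovers G_a, alpha = 1 recovers G_b; intermediate
    values spawn a new generator between the two source domains,
    so alpha controls the augmentation direction and strength.
    """
    state_a = G_a.state_dict()
    state_b = G_b.state_dict()
    state_mix = {}
    for k, v_a in state_a.items():
        v_b = state_b[k]
        if torch.is_floating_point(v_a):
            # Linear interpolation: (1 - alpha) * v_a + alpha * v_b
            state_mix[k] = torch.lerp(v_a, v_b, alpha)
        else:
            state_mix[k] = v_a  # copy non-float buffers unchanged

    G_mix = copy.deepcopy(G_a)
    G_mix.load_state_dict(state_mix)
    return G_mix

# Usage sketch: draw synthetic OoD augmentation samples.
# G_mix = interpolate_generators(G_a, G_b, alpha=0.5)
# z = torch.randn(16, 512)  # latent codes (512-d is StyleGAN's default)
# x_aug = G_mix(z)          # extra training images for the classifier
```

The style-mixing mechanism described above operates in the generator's latent
space as an additional diversification step and is omitted from this sketch.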