Data augmentation has been proven effective for training high-accuracy
convolutional neural network classifiers by preventing overfitting. However,
building deep neural networks in real-world scenarios requires not only high
accuracy on clean data but also robustness when data distributions shift. While
prior work has suggested a trade-off between accuracy and
robustness, we propose IPMix, a simple data augmentation approach that improves
robustness without hurting clean accuracy. IPMix integrates three levels of
data augmentation (image-level, patch-level, and pixel-level) into a coherent
and label-preserving technique to increase the diversity of training data with
limited computational overhead. To further improve robustness, IPMix
introduces structural complexity at different levels to generate more diverse
images and adopts random mixing for multi-scale information fusion.
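
To make the three mixing levels concrete, here is a minimal NumPy sketch of one plausible label-preserving, multi-level mixing step. It is an illustration under our own assumptions, not the authors' reference implementation: the function names (`ipmix_step`, `pixel_mix`, `patch_mix`, `image_mix`), the mixing-source set, and all hyperparameters (patch size, mixing weights, chain depth) are hypothetical.

```python
import numpy as np

def pixel_mix(image, mixing, rng):
    """Pixel-level: convex combination with a mixing picture (illustrative weights)."""
    alpha = rng.uniform(0.0, 0.5)  # keep the original image dominant
    return (1.0 - alpha) * image + alpha * mixing

def patch_mix(image, mixing, rng, patch=16):
    """Patch-level: paste one random patch from the mixing picture."""
    h, w = image.shape[:2]
    y = rng.integers(0, h - patch + 1)
    x = rng.integers(0, w - patch + 1)
    out = image.copy()
    out[y:y + patch, x:x + patch] = mixing[y:y + patch, x:x + patch]
    return out

def image_mix(image, mixing, rng):
    """Image-level: blend the two whole images with a random weight."""
    lam = rng.beta(1.0, 1.0)
    return lam * image + (1.0 - lam) * mixing

def ipmix_step(image, mixing_set, rng, max_depth=3):
    """Chain 1..max_depth randomly chosen mixing levels; the label is unchanged."""
    ops = [pixel_mix, patch_mix, image_mix]
    out = image
    for _ in range(rng.integers(1, max_depth + 1)):
        op = ops[rng.integers(len(ops))]
        mixing = mixing_set[rng.integers(len(mixing_set))]
        out = op(out, mixing, rng)
    return np.clip(out, 0.0, 1.0)

# Usage: images are float arrays in [0, 1] of shape (H, W, C).
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
mixing_set = [rng.random((32, 32, 3)) for _ in range(4)]
augmented = ipmix_step(image, mixing_set, rng)  # carries the same label as `image`
```

Because every operation keeps the original image dominant or blends it smoothly, the training label can be left unchanged, which is what makes the technique label-preserving.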
Experiments demonstrate that IPMix achieves state-of-the-art corruption
robustness on CIFAR-C and ImageNet-C. In addition, we show that IPMix also
significantly improves other safety metrics, including robustness to
adversarial perturbations, calibration, prediction consistency, and anomaly
detection, achieving state-of-the-art or comparable results on several
benchmarks, including ImageNet-R, ImageNet-A, and ImageNet-O.