Despite substantial progress in deep learning, overfitting remains a critical
challenge, and data augmentation has emerged as a particularly promising
remedy because of its capacity to improve generalization across a range of
computer vision tasks. Among the many strategies proposed, Mixed Sample Data
Augmentation (MSDA) has shown great potential for enhancing model performance
and generalization. We introduce a novel mixup method called MiAMix, which
stands for Multi-stage Augmented Mixup. MiAMix integrates image augmentation
into the mixup framework, applies multiple diversified mixing methods
concurrently, and improves the mixing stage by randomly selecting among
mixing mask augmentation methods. Unlike recent methods that rely on costly
saliency information, MiAMix is also designed for computational efficiency,
adding little overhead and integrating easily into existing training
pipelines. We comprehensively evaluate MiAMix on four image benchmarks,
pitting it against current state-of-the-art mixed sample data augmentation
techniques, and demonstrate that MiAMix improves performance without heavy
computational overhead.
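To make the idea of randomly selecting among mixing methods concrete, the sketch below shows a generic mixed-sample augmentation step that draws a mixing ratio from a Beta distribution and then picks, at random, either a linear (mixup-style) blend or a rectangular (CutMix-style) mask. This is an illustrative simplification under assumed names (`random_mix`, `alpha`), not the actual MiAMix algorithm or API.

```python
import numpy as np


def random_mix(x1, x2, alpha=1.0, rng=None):
    """Mix two images (H, W, C) using a randomly chosen mixing method.

    Illustrative sketch only; function and parameter names are assumptions.
    Returns the mixed image and the effective ratio lam (fraction of the
    output attributable to x1).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)   # sample mixing ratio
    h, w = x1.shape[:2]

    if rng.random() < 0.5:
        # Classic mixup: pixel-wise linear blend of the whole images.
        mixed = lam * x1 + (1.0 - lam) * x2
    else:
        # CutMix-style: paste a rectangle of x2 whose area is roughly 1 - lam.
        cut_h = int(h * np.sqrt(1.0 - lam))
        cut_w = int(w * np.sqrt(1.0 - lam))
        top = rng.integers(0, h - cut_h + 1)
        left = rng.integers(0, w - cut_w + 1)
        mixed = x1.copy()
        mixed[top:top + cut_h, left:left + cut_w] = \
            x2[top:top + cut_h, left:left + cut_w]
        # Recompute lam from the area actually pasted.
        lam = 1.0 - (cut_h * cut_w) / (h * w)
    return mixed, lam
```

In training, the labels of the two samples would be combined with the same ratio `lam`; MiAMix additionally layers image augmentation and mask augmentation on top of this basic scheme.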