In recent years, research on learning with noisy labels has focused on
devising novel algorithms that can achieve robustness to noisy training labels
while generalizing to clean data. These algorithms often incorporate
sophisticated techniques, such as noise modeling, label correction, and
co-training. In this study, we demonstrate that a simple baseline using
cross-entropy loss, combined with widely used regularization strategies such as
learning rate decay, model weight averaging, and data augmentation, can
outperform state-of-the-art methods. Our findings suggest that employing a
combination of regularization strategies can be more effective than intricate
algorithms in tackling the challenges of learning with noisy labels. While some
of these regularization strategies have been utilized in previous noisy label
learning research, their full potential has not been thoroughly explored. Our
results encourage a reevaluation of benchmarks for learning with noisy labels
and prompt a reconsideration of the role of specialized learning algorithms
designed for this setting.
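To make the baseline concrete, below is a minimal sketch of the kind of training setup the abstract describes: plain cross-entropy training with learning rate decay, model weight averaging (here an exponential moving average), and standard data augmentation. The dataset (CIFAR-10), model (ResNet-18), and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a simple noisy-label baseline: cross-entropy plus common regularizers.
# All choices below (dataset, model, hyperparameters) are assumptions for illustration.
import copy
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard data augmentation: random crops and horizontal flips.
train_tf = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
train_set = torchvision.datasets.CIFAR10(
    "data", train=True, download=True, transform=train_tf
)
loader = torch.utils.data.DataLoader(
    train_set, batch_size=128, shuffle=True, num_workers=2
)

model = torchvision.models.resnet18(num_classes=10).to(device)
ema_model = copy.deepcopy(model)  # weight averaging via an EMA copy of the model
for p in ema_model.parameters():
    p.requires_grad_(False)

opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
epochs = 100
# Learning rate decay: cosine annealing over the full run.
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
criterion = nn.CrossEntropyLoss()  # plain cross-entropy, no noise modeling
ema_decay = 0.999

for epoch in range(epochs):
    for x, y in loader:  # y may contain noisy labels; no correction is applied
        x, y = x.to(device), y.to(device)
        loss = criterion(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Update the averaged weights after every optimizer step.
        with torch.no_grad():
            for p_ema, p in zip(ema_model.parameters(), model.parameters()):
                p_ema.mul_(ema_decay).add_(p, alpha=1 - ema_decay)
            for b_ema, b in zip(ema_model.buffers(), model.buffers()):
                b_ema.copy_(b)  # keep batch-norm statistics in sync
    sched.step()

# Evaluation would use ema_model: averaged weights tend to be the more
# noise-robust predictor in setups of this kind.
```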