In this work, we study robust deep learning against abnormal training data
from the perspective of example weighting built into empirical loss functions,
i.e., the gradient magnitude with respect to logits, an angle that has not been
thoroughly studied so far. Consequently, we have two key findings: (1) Mean
Absolute Error (MAE) Does Not Treat Examples Equally. We present new
observations and an insightful analysis of MAE, which has been theoretically
proved to be noise-robust. First, we reveal its underfitting problem in practice.
Second, we show that MAE's noise-robustness comes from emphasising uncertain
examples rather than treating training samples equally, as claimed in prior
work. (2) The Variance of Gradient Magnitude Matters. We propose an effective
and simple solution to enhance MAE's fitting ability while preserving its
noise-robustness. Without changing MAE's overall weighting scheme, i.e., which
examples receive higher weights, we simply transform its weighting variance
non-linearly so that the impact ratio between any two examples is adjusted. Our
solution is termed Improved MAE (IMAE). We prove IMAE's effectiveness using
extensive experiments: image classification under clean labels, synthetic label
noise, and real-world unknown noise. We conclude that IMAE is superior to CCE
(categorical cross entropy), the most popular loss for training DNNs.

Comment: Updated version. IMAE for Noise-Robust Learning: Mean Absolute Error
Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters.
Code:
\url{https://github.com/XinshaoAmosWang/Improving-Mean-Absolute-Error-against-CCE}.
Please feel free to contact us for discussions or implementation issues.
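
The abstract's notion of example weighting can be made concrete with a small
numerical check. The NumPy sketch below is illustrative only and is not the
authors' implementation; it assumes the standard setup of a softmax classifier,
CCE as the negative log-likelihood of the labelled class, and MAE taken between
the softmax output and the one-hot target. Under these assumptions, the L1 norm
of the loss gradient with respect to the logits works out to 2(1 - p_y) for CCE
and 4 p_y (1 - p_y) for MAE: CCE weights low-confidence examples most heavily,
whereas MAE peaks at p_y = 0.5 (uncertain examples) and assigns almost no weight
when p_y is near 0 or 1, consistent with both the noise-robustness and the
underfitting discussed above.

```python
# Illustrative sketch (not the paper's code): per-example weight measured as the
# L1 norm of the loss gradient with respect to the logits of a softmax classifier.
# Closed forms under this setup: CCE -> 2*(1 - p_y), MAE -> 4*p_y*(1 - p_y).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cce(p, y):
    # categorical cross entropy: negative log-likelihood of the labelled class
    return -np.log(p[y])

def mae(p, y):
    # mean absolute error between the prediction and the one-hot target
    q = np.zeros_like(p)
    q[y] = 1.0
    return np.abs(q - p).sum()

def weight_from_grad(loss_fn, z, y, eps=1e-5):
    # per-example weight = L1 norm of dL/dz, via central finite differences
    g = np.zeros_like(z)
    for j in range(len(z)):
        zp, zm = z.copy(), z.copy()
        zp[j] += eps
        zm[j] -= eps
        g[j] = (loss_fn(softmax(zp), y) - loss_fn(softmax(zm), y)) / (2 * eps)
    return np.abs(g).sum()

rng = np.random.default_rng(0)
for _ in range(3):
    z, y = rng.normal(size=5), 0
    p_y = softmax(z)[y]
    print(f"p_y={p_y:.3f}  "
          f"CCE weight={weight_from_grad(cce, z, y):.3f} (closed form {2 * (1 - p_y):.3f})  "
          f"MAE weight={weight_from_grad(mae, z, y):.3f} (closed form {4 * p_y * (1 - p_y):.3f})")
```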
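
The second finding, adjusting the variance of the weighting without changing
which examples get higher weights, can likewise be sketched. The exponential
rescaling with a temperature T below is only one possible monotone non-linear
transform, chosen here as an assumption for illustration; the exact transform
used by IMAE is specified in the paper and the repository linked above, not here.

```python
# Hedged sketch of the idea behind IMAE's second finding: keep MAE's weighting
# order but stretch the spread of the weights non-linearly, so the impact ratio
# between two examples changes. The exp(T * w) transform is an illustrative
# assumption, not necessarily the exact function used by the paper.
import numpy as np

def mae_weight(p_y):
    # MAE's intrinsic per-example weight (up to a constant): p_y * (1 - p_y)
    return p_y * (1.0 - p_y)

def nonlinear_weight(p_y, T=12.0):
    # monotone non-linear rescaling of the MAE weight; T controls the variance
    return np.exp(T * mae_weight(p_y))

p_a, p_b = 0.45, 0.05   # an uncertain example vs. a low-confidence (possibly noisy) one
w_a, w_b = mae_weight(p_a), mae_weight(p_b)
v_a, v_b = nonlinear_weight(p_a), nonlinear_weight(p_b)
print(f"MAE weights:         {w_a:.3f} vs {w_b:.3f}  (impact ratio {w_a / w_b:.1f})")
print(f"Transformed weights: {v_a:.3f} vs {v_b:.3f}  (impact ratio {v_a / v_b:.1f})")
# The ordering is unchanged (the uncertain example still dominates), but the
# impact ratio between the two examples grows from roughly 5x to roughly 11x,
# which is the knob the abstract describes as changing the weighting variance.
```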