Prior works have found it beneficial to combine provably noise-robust loss
functions, e.g., mean absolute error (MAE), with a standard categorical loss
function, e.g., cross entropy (CE), to improve their learnability. Here, we
propose to use the Jensen-Shannon divergence as a noise-robust loss function and
show that it interestingly interpolates between CE and MAE with a controllable
mixing parameter. Furthermore, we make the crucial observation that CE exhibits
lower consistency around noisy data points. Based on this observation, we adopt
a generalized version of the Jensen-Shannon divergence for multiple
distributions to encourage consistency around data points. Using this loss
function, we show state-of-the-art results on both synthetic (CIFAR) and
real-world (e.g., WebVision) noise with varying noise rates.

Comment: Neural Information Processing Systems (NeurIPS 2021)
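
To make the two losses described above concrete, below is a minimal PyTorch-style sketch of (i) a two-distribution Jensen-Shannon loss between the one-hot label and the model prediction with a mixing parameter, and (ii) a generalized Jensen-Shannon loss over the label plus predictions for several augmented views, which is how the abstract's consistency idea can be instantiated. The function names (js_loss, gjs_loss), the pi/weights parameterization, and the exact normalization are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F


def kl_div(p, q, eps=1e-7):
    """KL(p || q) for batched categorical distributions, summed over classes."""
    return torch.sum(p * (torch.log(p + eps) - torch.log(q + eps)), dim=-1)


def js_loss(logits, targets, pi=0.5):
    """Two-distribution Jensen-Shannon loss (illustrative sketch).

    JS_pi(e_y, p) = pi * KL(e_y || m) + (1 - pi) * KL(p || m),
    with m = pi * e_y + (1 - pi) * p, where e_y is the one-hot label and p the
    prediction. Varying pi interpolates between (scaled) CE-like and
    (scaled) MAE-like behavior, as described in the abstract.
    """
    p = F.softmax(logits, dim=-1)
    e_y = F.one_hot(targets, num_classes=p.size(-1)).float()
    m = pi * e_y + (1.0 - pi) * p
    return (pi * kl_div(e_y, m) + (1.0 - pi) * kl_div(p, m)).mean()


def gjs_loss(logits_views, targets, weights):
    """Generalized Jensen-Shannon loss over multiple distributions (sketch).

    GJS = sum_i w_i * KL(q_i || m),  m = sum_i w_i * q_i,
    where q_0 is the one-hot label and q_1..q_M are predictions for different
    augmented views of the same input; `weights` should sum to 1. Penalizing
    disagreement among the views encourages consistency around data points.
    """
    probs = [F.softmax(logits, dim=-1) for logits in logits_views]
    e_y = F.one_hot(targets, num_classes=probs[0].size(-1)).float()
    dists = [e_y] + probs
    m = sum(w * q for w, q in zip(weights, dists))
    loss = sum(w * kl_div(q, m) for w, q in zip(weights, dists))
    return loss.mean()
```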