Significant attention has been devoted to the effectiveness of techniques from
various domains, such as semi-supervised learning, contrastive learning, and
meta-learning, in enhancing the performance of noisy label learning (NLL)
methods. However, most existing methods still depend on prior assumptions
regarding clean samples amidst different sources of noise (\eg, a pre-defined
drop rate or a small subset of clean samples). In this paper, we propose a
simple yet powerful idea called \textbf{NPN}, which revolutionizes
\textbf{N}oisy label learning by integrating \textbf{P}artial label learning
(PLL) and \textbf{N}egative learning (NL). To this end, we first adaptively
decompose the given label space into candidate and complementary labels,
thereby establishing the conditions for PLL and NL.
We propose two adaptive, data-driven paradigms of label disambiguation for
PLL: hard disambiguation and soft disambiguation.
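As a sketch of these two paradigms under the notation above (not necessarily the paper's exact formulation), hard disambiguation commits to the most confident candidate, whereas soft disambiguation renormalizes the predicted probabilities over the candidate set:
\[
\hat{y}^{\mathrm{hard}}(x) = \arg\max_{k \in S(x)} p_k(x), \qquad
\hat{y}^{\mathrm{soft}}_k(x) = \frac{p_k(x)}{\sum_{j \in S(x)} p_j(x)} \ \ \text{for } k \in S(x).
\]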
Furthermore, we generate reliable complementary labels using all non-candidate
labels for NL, enhancing model robustness through indirect supervision.
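For intuition, a minimal sketch of a standard negative learning loss over the complementary set, which pushes probability mass away from labels the instance is believed not to have:
\[
\mathcal{L}_{\mathrm{NL}}(x) = -\sum_{k \in \bar{S}(x)} \log\!\left(1 - p_k(x)\right).
\]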
To maintain label reliability during the later stage of model training, we
introduce a consistency regularization term that encourages agreement between
the outputs of multiple augmentations.
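A minimal sketch of such a term, assuming two stochastic augmentations $x'$ and $x''$ of the same instance (the divergence actually used by NPN may differ):
\[
\mathcal{L}_{\mathrm{cons}}(x) = \mathrm{KL}\!\left(p(\cdot \mid x') \,\|\, p(\cdot \mid x'')\right).
\]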
Experiments on both synthetically corrupted and real-world noisy datasets
demonstrate the superiority of NPN over state-of-the-art (SOTA) methods. The
source code is available at
{\color{purple}{\url{https://github.com/NUST-Machine-Intelligence-Laboratory/NPN}}}.