Learning a model that generalizes well from limited data is a challenging
task for deep neural networks. In this paper, we propose
a novel learning framework called PurifiedLearning to exploit task-irrelevant
features extracted from task-irrelevant labels when training models on
small-scale datasets. Specifically, we purify feature representations by
explicitly expressing task-irrelevant information, thus facilitating the
learning of the classification task. Our work is built on solid theoretical analysis and
extensive experiments, which demonstrate the effectiveness of PurifiedLearning.
As our theoretical analysis proves, PurifiedLearning is model-agnostic and
imposes no restrictions on the underlying architecture, so it can be readily
combined with any existing deep neural network to achieve better performance. The
source code of this paper will be made available for reproducibility.
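
The abstract does not specify how purification is implemented. As a purely illustrative sketch, one common way to remove information tied to auxiliary (task-irrelevant) labels from a shared representation is adversarial training with gradient reversal; this is an assumption, not necessarily the paper's actual PurifiedLearning objective. All names below (PurifiedClassifier, purified_loss, the lam coefficient) are hypothetical.

# Hypothetical sketch: purify features by adversarially removing
# information predictive of task-irrelevant labels (gradient reversal).
# This is NOT confirmed to be the paper's method; it only illustrates
# the general idea of exploiting task-irrelevant labels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class PurifiedClassifier(nn.Module):
    def __init__(self, encoder, feat_dim, num_classes, num_irrelevant, lam=1.0):
        super().__init__()
        self.encoder = encoder  # any backbone: the framework is model-agnostic
        self.task_head = nn.Linear(feat_dim, num_classes)
        self.irrelevant_head = nn.Linear(feat_dim, num_irrelevant)
        self.lam = lam

    def forward(self, x):
        z = self.encoder(x)
        task_logits = self.task_head(z)
        # Reversed gradients push the encoder to *discard* features that
        # predict the task-irrelevant labels, purifying z for the main task.
        irr_logits = self.irrelevant_head(GradReverse.apply(z, self.lam))
        return task_logits, irr_logits


def purified_loss(task_logits, irr_logits, y_task, y_irrelevant):
    # Joint objective: fit the task while the encoder unlearns
    # task-irrelevant structure through the reversed branch.
    return F.cross_entropy(task_logits, y_task) + \
           F.cross_entropy(irr_logits, y_irrelevant)

Under this (assumed) formulation, the encoder receives negated gradients from the irrelevant_head branch, so minimizing the joint loss drives the shared features toward being uninformative about the task-irrelevant labels while remaining discriminative for the classification task.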