Boosting Semi-Supervised Learning with Contrastive Complementary Labeling
Semi-supervised learning (SSL) has achieved great success in leveraging a
large amount of unlabeled data to learn a promising classifier. A popular
approach is pseudo-labeling, which generates pseudo labels only for those
unlabeled data with high-confidence predictions. As for the low-confidence
ones, existing methods often simply discard them because these unreliable
pseudo labels may mislead the model. Nevertheless, we highlight that these data
with low-confidence pseudo labels can still be beneficial to the training
process. Specifically, although the class with the highest probability in the
prediction is unreliable, we can assume that this sample is very unlikely to
belong to the classes with the lowest probabilities. In this way, these data
can also be very informative if we can effectively exploit these complementary
labels, i.e., the classes that a sample does not belong to. Inspired by this,
we propose a novel Contrastive Complementary Labeling (CCL) method that
constructs a large number of reliable negative pairs based on the complementary
labels and adopts contrastive learning to make use of all the unlabeled data.
Extensive experiments demonstrate that CCL significantly improves
performance on top of existing methods. More critically, our CCL is
particularly effective under label-scarce settings. For example, CCL yields
an improvement of 2.43% over FixMatch on CIFAR-10 with only 40 labeled samples.

Comment: typos corrected, 5 figures, 3 tables
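To make the core idea concrete, below is a minimal PyTorch sketch of the complementary-label selection the abstract describes: for unlabeled samples whose top-1 confidence falls below a threshold (the ones pseudo-labeling would discard), the k lowest-probability classes are taken as complementary labels. The helper name `complementary_labels`, the threshold value, and k are illustrative assumptions, not code or hyperparameters from the paper.

```python
import torch
import torch.nn.functional as F

def complementary_labels(logits, conf_threshold=0.95, k=3):
    """For samples whose top-1 confidence is below `conf_threshold`,
    return the indices of the `k` lowest-probability classes as
    complementary labels (classes the sample is unlikely to belong to).

    Hypothetical helper; `conf_threshold` and `k` are illustrative,
    not values from the paper.
    """
    probs = F.softmax(logits, dim=-1)           # (B, C) class probabilities
    top_conf, _ = probs.max(dim=-1)             # top-1 confidence per sample
    low_conf_mask = top_conf < conf_threshold   # samples pseudo-labeling would drop
    # k classes with the *lowest* probability -> complementary labels
    comp = probs.topk(k, dim=-1, largest=False).indices  # (B, k)
    return comp, low_conf_mask

# Usage: select complementary labels for a toy batch of predictions.
logits = torch.randn(4, 10)  # 4 unlabeled samples, 10 classes
comp, mask = complementary_labels(logits)
```

In the full CCL method, a low-confidence sample and samples associated with its complementary classes would then form reliable negative pairs for a contrastive loss; this sketch covers only the label-selection step.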