FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning

Abstract

Pseudo labeling and consistency regularization approaches based on confidence thresholding have made great progress in semi-supervised learning (SSL). However, we argue that existing methods might fail to adopt suitable thresholds since they either use a pre-defined / fixed threshold or an ad-hoc threshold-adjusting scheme, resulting in inferior performance and slow convergence. We first analyze a motivating example to gain intuition about the relationship between the desirable threshold and the model's learning status. Based on the analysis, we propose FreeMatch to define and adjust the confidence threshold in a self-adaptive manner according to the model's learning status. We further introduce a self-adaptive class fairness regularization penalty that encourages the model to produce diverse predictions during the early stages of training. Extensive experimental results indicate the superiority of FreeMatch, especially when the labeled data are extremely rare. FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reduction over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class, STL-10 with 4 labels per class, and ImageNet with 100 labels per class, respectively.
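To make the idea of a threshold that tracks the model's learning status more concrete, the sketch below shows one plausible reading of self-adaptive thresholding: an EMA of the model's confidence on unlabeled data acts as a global threshold, modulated per class by EMA-smoothed class-wise confidence. The class name SelfAdaptiveThreshold, the ema_decay value, and the exact scaling rule are illustrative assumptions, not the authors' precise formulation.

import torch
import torch.nn.functional as F

class SelfAdaptiveThreshold:
    """Illustrative EMA-based confidence thresholding for pseudo labeling."""

    def __init__(self, num_classes: int, ema_decay: float = 0.999):
        self.m = ema_decay
        # Start from a uniform prior: average confidence 1 / num_classes.
        self.tau_global = torch.tensor(1.0 / num_classes)
        self.p_class = torch.full((num_classes,), 1.0 / num_classes)

    @torch.no_grad()
    def update(self, unlabeled_logits: torch.Tensor) -> None:
        probs = F.softmax(unlabeled_logits, dim=-1)
        max_probs, _ = probs.max(dim=-1)
        # EMA of the batch-mean max confidence -> global learning status.
        self.tau_global = self.m * self.tau_global + (1 - self.m) * max_probs.mean()
        # EMA of the mean class-probability vector -> per-class learning status.
        self.p_class = self.m * self.p_class + (1 - self.m) * probs.mean(dim=0)

    def thresholds(self) -> torch.Tensor:
        # Scale the global threshold by normalized per-class confidence,
        # so better-learned classes demand higher confidence.
        return self.tau_global * (self.p_class / self.p_class.max())

    def mask(self, unlabeled_logits: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(unlabeled_logits, dim=-1)
        max_probs, pseudo_labels = probs.max(dim=-1)
        # Keep only unlabeled samples whose confidence exceeds the
        # threshold of their predicted class.
        return (max_probs >= self.thresholds()[pseudo_labels]).float()

In a training loop, one would call update() on each unlabeled batch and multiply the unsupervised consistency loss by mask(), so the effective threshold rises as the model grows more confident instead of being fixed in advance.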