Semi-supervised learning (SSL) is a widely studied setting that aims to make
effective use of unlabelled data to improve model performance on downstream
natural language processing (NLP) tasks. Two approaches are currently popular
for exploiting unlabelled data: self-training (ST) and task-adaptive pre-training
(TAPT). ST uses a teacher model to assign pseudo-labels to the unlabelled data,
while TAPT continues pre-training on the unlabelled data before fine-tuning. To
the best of our knowledge, the effectiveness of TAPT in SSL tasks has not been
systematically studied, and no previous work has directly compared TAPT and ST
in terms of their ability to utilize the pool of unlabelled data. In this
paper, we provide an extensive empirical study comparing five state-of-the-art
ST approaches and TAPT across various NLP tasks and data sizes, including in-
and out-of-domain settings. Surprisingly, we find that TAPT is a stronger and
more robust SSL learner than the more sophisticated ST approaches, even with
only a few hundred unlabelled samples or in the presence of domain shifts, and
that it tends to bring greater improvements in SSL than in fully-supervised
settings. Our further analysis demonstrates the risks of using
ST approaches when the size of labelled or unlabelled data is small or when
domain shifts exist. We offer a fresh perspective for future SSL research,
suggesting the use of unsupervised pre-training objectives over reliance on
pseudo-labels.
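
To make the contrast between the two paradigms concrete, the following is a
minimal, self-contained PyTorch sketch. It uses toy synthetic data, a simple
confidence threshold for pseudo-label selection, and a denoising objective as a
stand-in for masked-language-model pre-training; it is illustrative only and is
not the paper's actual training pipeline or any of the five ST methods evaluated.

```python
# Toy contrast of the two SSL paradigms: self-training (ST), which retrains on
# teacher-assigned pseudo-labels, and task-adaptive pre-training (TAPT), which
# first runs an unsupervised objective on the unlabelled pool.
# All sizes, data, and the denoising objective are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
D, C = 32, 2                                    # feature dim, number of classes
x_lab, y_lab = torch.randn(64, D), torch.randint(0, C, (64,))
x_unlab = torch.randn(512, D)                   # unlabelled pool

def encoder():                                  # shared backbone architecture
    return nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 64))

def fine_tune(enc, x, y, steps=200):
    head = nn.Linear(64, C)
    opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(head(enc(x)), y).backward()
        opt.step()
    return enc, head

# --- Self-training: teacher labels the unlabelled pool, student retrains -----
teacher_enc, teacher_head = fine_tune(encoder(), x_lab, y_lab)
with torch.no_grad():
    probs = teacher_head(teacher_enc(x_unlab)).softmax(-1)
    conf, pseudo = probs.max(-1)
keep = conf > 0.9                               # keep only confident pseudo-labels
st_x = torch.cat([x_lab, x_unlab[keep]])
st_y = torch.cat([y_lab, pseudo[keep]])
student_enc, student_head = fine_tune(encoder(), st_x, st_y)

# --- TAPT: unsupervised denoising on the unlabelled pool, then fine-tune -----
enc = encoder()
dec = nn.Linear(64, D)                          # reconstruction head, discarded later
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    mask = (torch.rand_like(x_unlab) > 0.15).float()   # "mask" ~15% of inputs
    nn.functional.mse_loss(dec(enc(x_unlab * mask)), x_unlab).backward()
    opt.step()
tapt_enc, tapt_head = fine_tune(enc, x_lab, y_lab)     # reuse the adapted encoder
```

The key design difference the sketch highlights is where the unlabelled pool
enters: ST feeds it back through the task head as (possibly noisy) pseudo-labels,
whereas TAPT touches it only through an unsupervised objective before fine-tuning,
so no pseudo-label noise reaches the supervised step.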