Natural Language Inference (NLI), or Recognizing Textual Entailment (RTE), aims
to predict the relation between a pair of sentences (premise and hypothesis)
as entailment, contradiction, or semantic independence. Although deep learning
models have shown promising performance on NLI in recent years, they rely on
large-scale, expensive human-annotated datasets. Semi-supervised learning (SSL)
is a popular technique for reducing the reliance on human annotation by
leveraging unlabeled data for training. However, despite its substantial
success on single-sentence classification tasks, where the main challenge in
exploiting unlabeled data is to assign "good enough" pseudo-labels, unlabeled
data for NLI is more complex: one sentence of the pair (usually the hypothesis)
and the class label are both missing and would require human annotation, which
makes SSL for NLI more challenging. In
this paper, we propose a novel way to incorporate unlabeled data in SSL for NLI:
we use a conditional language model, BART, to generate hypotheses for
the unlabeled sentences (used as premises). Our experiments show that our SSL
framework successfully exploits unlabeled data and substantially improves
performance on four NLI datasets in low-resource settings. We release our code
at: https://github.com/msadat3/SSL_for_NLI.

Comment: Accepted in EMNLP 2022 (Findings).
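The pseudo-labeling step described above, turning unlabeled sentences into full (premise, hypothesis, label) training triples, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `generate_hypothesis` is a stub standing in for a fine-tuned conditional generator such as BART, and the label set and triple format are assumptions for the sketch.

```python
# Illustrative sketch of building pseudo-labeled NLI data from unlabeled
# premises. In the actual framework, a BART-style conditional language model
# would generate the hypothesis; here a trivial stub keeps the sketch
# self-contained and runnable.

def generate_hypothesis(premise: str, label: str) -> str:
    # Stub: a real system would condition a generator on the premise
    # (and a target relation) to produce a fluent hypothesis.
    return f"[generated hypothesis for '{premise}' under label '{label}']"

def build_pseudo_labeled_set(unlabeled_premises,
                             labels=("entailment", "contradiction", "neutral")):
    """Expand each unlabeled premise into one (premise, hypothesis, label)
    triple per candidate relation, to be mixed with the labeled data."""
    pseudo = []
    for premise in unlabeled_premises:
        for label in labels:
            pseudo.append((premise, generate_hypothesis(premise, label), label))
    return pseudo

pseudo_data = build_pseudo_labeled_set(["A man is playing a guitar on stage."])
print(len(pseudo_data))  # one triple per candidate label: 3
```

In this sketch every premise receives a hypothesis for each candidate label; whether generation is label-conditioned or labels are assigned afterwards is a design choice the abstract does not specify.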