Contrastive learning has been the dominant approach for training state-of-the-art
sentence embeddings. Previous studies have typically learned sentence
embeddings either from human-annotated natural language inference (NLI) data or
from large-scale unlabeled sentences in an unsupervised manner.
However, even unlabeled sentences can be difficult to collect in certain
domains for a variety of reasons. To address these issues,
we present SynCSE, a contrastive learning framework that trains sentence
embeddings with synthesized data. Specifically, we explore utilizing large
language models to synthesize the required data samples for contrastive
learning, including (1) producing positive and negative annotations given
unlabeled sentences (SynCSE-partial), and (2) generating sentences along with
their corresponding annotations from scratch (SynCSE-scratch). Experimental
results on sentence similarity and reranking tasks indicate that both
SynCSE-partial and SynCSE-scratch greatly outperform unsupervised baselines,
and SynCSE-partial even achieves comparable performance to the supervised
models in most settings.
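
To make the training objective concrete, below is a minimal, illustrative sketch (not the authors' released code) of a SimCSE-style contrastive loss over (anchor, positive, hard-negative) triplets, where in SynCSE the positive and negative sentences would be synthesized by a large language model. The encoder outputs, batch size, and hidden dimension below are placeholders.

import torch
import torch.nn.functional as F


def triplet_infonce_loss(anchor: torch.Tensor,
                         positive: torch.Tensor,
                         negative: torch.Tensor,
                         temperature: float = 0.05) -> torch.Tensor:
    # anchor, positive, negative: (batch_size, hidden_dim) sentence embeddings.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)

    # Cosine similarities of each anchor to every positive and every hard
    # negative in the batch; other anchors' positives act as in-batch negatives.
    sim_pos = anchor @ positive.T / temperature    # (B, B)
    sim_neg = anchor @ negative.T / temperature    # (B, B)
    logits = torch.cat([sim_pos, sim_neg], dim=1)  # (B, 2B)

    # For anchor i, the correct "class" is its own positive at column i.
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Random tensors standing in for encoder outputs on synthesized triplets.
    B, D = 8, 768
    loss = triplet_infonce_loss(torch.randn(B, D),
                                torch.randn(B, D),
                                torch.randn(B, D))
    print(loss.item())

In this sketch, SynCSE-partial would supply the anchor sentences from an unlabeled corpus and ask an LLM for their positives and hard negatives, while SynCSE-scratch would synthesize all three fields from scratch; the loss itself is unchanged from standard supervised contrastive training.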