We propose a new unsupervised method for lexical substitution using
pre-trained language models. Compared to previous approaches that use the
generative capability of language models to predict substitutes, our method
retrieves substitutes based on the similarity between contextualised and
decontextualised word embeddings, where the decontextualised embedding of a
word is its average contextual representation across multiple contexts. We
conduct experiments in English and Italian, and
show that our method substantially outperforms strong baselines and establishes
a new state-of-the-art without any explicit supervision or fine-tuning. We
further show that our method performs particularly well at predicting
low-frequency substitutes, and also generates a diverse list of substitute
candidates, reducing morphophonetic or morphosyntactic biases induced by
article-noun agreement.
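
To make the retrieval idea concrete, the following is a minimal sketch that ranks candidate substitutes by the cosine similarity between the target word's contextualised embedding and each candidate's decontextualised embedding, computed as the average of its contextual embeddings over several example sentences. The model choice (bert-base-uncased), the example contexts, and the candidate list are illustrative assumptions, not the paper's exact setup, which may differ in layer selection, pooling, and candidate generation.

```python
# Illustrative sketch only: the model, contexts, and candidates below are
# assumptions for demonstration, not the paper's actual configuration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()


def contextual_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean of the last-layer vectors of the subword tokens spanning `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    sent_ids = enc["input_ids"][0].tolist()
    # Locate the first occurrence of the word's subword span in the sentence.
    for i in range(len(sent_ids) - len(word_ids) + 1):
        if sent_ids[i : i + len(word_ids)] == word_ids:
            return hidden[i : i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {sentence!r}")


def decontextualised_embedding(word: str, contexts: list[str]) -> torch.Tensor:
    """Average the word's contextual embeddings over several example sentences."""
    return torch.stack([contextual_embedding(c, word) for c in contexts]).mean(dim=0)


# Target word "bank" in a river context; two hypothetical candidates.
target = contextual_embedding("He sat on the bank of the river.", "bank")
candidates = {
    "shore": ["They walked along the shore.", "Waves crashed on the shore."],
    "lender": ["The lender approved the loan.", "She repaid the lender in full."],
}
ranked = sorted(
    candidates,
    key=lambda w: -torch.cosine_similarity(
        target, decontextualised_embedding(w, candidates[w]), dim=0
    ).item(),
)
print(ranked)  # "shore" should rank above "lender" in this context
```

Averaging over several contexts is what makes the candidate representations decontextualised: it smooths away sentence-specific signal so that candidates are compared on their general meaning rather than on a single usage.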