Long samples of text from neural language models can be of poor quality.
Truncation sampling algorithms, like top-p or top-k, address this by
setting some words' probabilities to zero at each step. This work provides
a framing for the aim of truncation and an improved algorithm for that aim. We
propose thinking of a neural language model as a mixture of a true distribution
and a smoothing distribution that avoids infinite perplexity. In this light,
truncation algorithms aim to perform desmoothing, estimating a subset of the
support of the true distribution. Finding a good subset is crucial: we show
that top-p unnecessarily truncates high-probability words, for example
truncating all words but "Trump" for a document that starts with
"Donald". We introduce η-sampling, which truncates words below an
entropy-dependent probability threshold. Compared to previous algorithms,
η-sampling generates more plausible long English documents according to
humans, is better at breaking out of repetition, and behaves more reasonably on
a battery of test distributions.
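
As a rough illustration of the entropy-dependent truncation described above, the sketch below zeroes out next-word probabilities that fall under a threshold η computed from the distribution's entropy, then renormalizes before sampling. The specific threshold form η = min(ε, √ε·exp(−H)), the choice of ε, and the example distribution are assumptions for illustration, not a definitive statement of the paper's algorithm.

```python
import numpy as np

def eta_truncate(probs, epsilon=2e-3):
    """Zero out words whose probability falls below an entropy-dependent
    threshold, then renormalize. Sketch of eta-sampling-style truncation;
    the threshold form and epsilon value here are illustrative assumptions."""
    probs = np.asarray(probs, dtype=np.float64)
    entropy = -np.sum(probs * np.log(probs + 1e-12))         # H of the next-word distribution
    eta = min(epsilon, np.sqrt(epsilon) * np.exp(-entropy))  # entropy-dependent threshold
    keep = probs >= eta
    if not keep.any():                                       # never truncate every word
        keep[np.argmax(probs)] = True
    truncated = np.where(keep, probs, 0.0)
    return truncated / truncated.sum()

# Usage: sample the next word from the truncated, renormalized distribution.
rng = np.random.default_rng(0)
probs = np.array([0.4, 0.3, 0.15, 0.1, 0.03, 0.015, 0.004, 0.0009, 0.0001])
next_id = rng.choice(len(probs), p=eta_truncate(probs))
```

Because the threshold shrinks as entropy grows, a high-entropy (uncertain) distribution keeps most of its support, while a low-entropy distribution has its improbable tail removed; only the low-probability tail tokens are zeroed in the example above.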