Machine learning algorithms typically assume independent and identically
distributed samples in training and at test time. Much work has shown that
high-performing ML classifiers can degrade significantly and produce
overly confident yet wrong predictions, particularly on
out-of-distribution (OOD) inputs. Conditional language models (CLMs) are
predominantly trained to classify the next token in an output sequence, and may
suffer even worse degradation on OOD inputs because prediction proceeds
auto-regressively over many steps. Furthermore, the space of potential
low-quality outputs is larger, since arbitrary text can be generated, so it is
important to know when to trust a generated output. We present a highly
accurate and lightweight OOD detection method for CLMs, and demonstrate its
effectiveness on abstractive summarization and translation. We also show how,
under the common and realistic setting of distribution shift, our method can be
used for selective generation (analogous to selective prediction in
classification): the model emits only high-quality outputs and automatically
abstains on low-quality ones, enabling safer deployment of generative language
models.
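
To make the selective-generation setting concrete, the sketch below shows one way such a pipeline could be wired up: fit a lightweight OOD scorer (here, a Mahalanobis-style distance on input embeddings, a common choice but not necessarily the paper's exact score) and abstain from generating whenever the score exceeds a threshold. The embed() and generate() functions are hypothetical stand-ins for a CLM's encoder and decoding routine, and the threshold would in practice be tuned on held-out data, e.g., to target a desired abstention rate.

```python
import numpy as np

def fit_ood_scorer(train_embeddings: np.ndarray):
    """Fit a Gaussian to in-distribution embeddings (rows = examples)
    and return a Mahalanobis-distance scorer: larger => more OOD."""
    mean = train_embeddings.mean(axis=0)
    cov = np.cov(train_embeddings, rowvar=False)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability

    def score(z: np.ndarray) -> float:
        d = z - mean
        return float(d @ precision @ d)

    return score

def selective_generate(x, embed, generate, score, threshold):
    """Generate only when the input looks in-distribution; else abstain.
    embed/generate are placeholders for a CLM's encoder and decoder."""
    if score(embed(x)) > threshold:
        return None  # abstain on a likely-OOD input
    return generate(x)
```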