Augmenting a language model (LM) with k-nearest neighbors (kNN) retrieval
on its training data alone can decrease its perplexity, though the underlying
reasons for this remain elusive. In this work, we rule out one previously
posited possibility -- the "softmax bottleneck." We then create a new dataset
to evaluate LM generalization ability in the setting where training data
contains additional information that is not causally relevant. This task is
challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral
7B, kNN retrieval augmentation consistently improves performance in this
setting. Finally, to make kNN retrieval more accessible, we propose using a
multi-layer perceptron model that maps datastore keys to values as a drop-in
replacement for traditional retrieval. This reduces storage costs by over 25x.
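
The kNN retrieval augmentation referenced above appears to follow the general kNN-LM recipe (Khandelwal et al., 2020), which interpolates the base LM's next-token distribution with one induced by nearest-neighbor lookup over a datastore of (hidden-state, next-token) pairs built from the training data. The sketch below is only a minimal illustration of that interpolation, not this paper's implementation; the distance metric, neighbor count `k`, and mixing weight `lmbda` are illustrative assumptions.

```python
import numpy as np

def knn_lm_next_token_probs(lm_probs, query, keys, values, vocab_size,
                            k=8, lmbda=0.25, temperature=1.0):
    """Interpolate base-LM probabilities with a kNN distribution.

    lm_probs: (vocab_size,) softmax output of the base LM at this step.
    query:    (d,) hidden state at the current step.
    keys:     (N, d) datastore keys (hidden states from the training data).
    values:   (N,) datastore values (the next-token id paired with each key).
    """
    # Squared L2 distances from the query to every datastore key.
    dists = np.sum((keys - query) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]

    # Turn negative distances into a probability distribution over neighbors.
    logits = -dists[nearest] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Scatter neighbor weights onto their associated next-token ids.
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[nearest], weights)

    # Final distribution: mixture of the kNN and base-LM distributions.
    return lmbda * knn_probs + (1.0 - lmbda) * lm_probs
```

In this formulation the datastore stores one key vector per training token, which is what makes exact retrieval expensive in storage.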
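The proposed drop-in replacement instead trains a parametric model to approximate the datastore's key-to-value mapping, so only the model's weights need to be stored rather than every (key, value) pair. The following sketch shows one way such a component could look; the layer widths, single hidden layer, and interpolation step are assumptions for illustration, not the paper's exact architecture or training setup.

```python
import torch
import torch.nn as nn

class KeyToValueMLP(nn.Module):
    """Maps an LM hidden state (a would-be datastore key) directly to
    next-token logits, standing in for kNN lookup so the (key, value)
    datastore itself no longer needs to be stored."""

    def __init__(self, hidden_dim, vocab_size, width=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, width),
            nn.ReLU(),
            nn.Linear(width, vocab_size),
        )

    def forward(self, query):            # query: (batch, hidden_dim)
        return self.net(query)           # logits over the vocabulary

def interpolate(lm_probs, mlp, query, lmbda=0.25):
    """Use the MLP's output where the kNN distribution would have gone."""
    mlp_probs = torch.softmax(mlp(query), dim=-1)
    return lmbda * mlp_probs + (1.0 - lmbda) * lm_probs
```

Because the stored artifact is a fixed-size network rather than a per-token datastore, storage grows with model size instead of corpus size, which is consistent with the reported 25x reduction in storage cost.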