Modern neural language models that are widely used in various NLP tasks risk
memorizing sensitive information from their training data. Understanding this
memorization is important in real-world applications and also from a
learning-theoretical perspective. An open question in previous studies of
language model memorization is how to filter out "common" memorization. In
fact, most memorization criteria strongly correlate with the number of
occurrences in the training set, capturing memorized familiar phrases, public
knowledge, templated texts, or other repeated data. We formulate a notion of
counterfactual memorization, which characterizes how a model's predictions
change if a particular document is omitted during training. We identify and
study counterfactually-memorized training examples in standard text datasets.
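As a rough illustration of the definition (a sketch, not the paper's released implementation), this quantity can be estimated by training many models on random subsets of the data and comparing, for each example, the average performance of models that saw it during training against the average performance of models that did not. The names and array shapes below (per_example_scores, inclusion_mask) are hypothetical placeholders for the outputs of such a procedure.

```python
import numpy as np

def counterfactual_memorization(per_example_scores, inclusion_mask):
    """Sketch of a counterfactual memorization estimate.

    Hypothetical inputs:
      per_example_scores: float array of shape (num_models, num_examples);
        entry [m, i] is model m's performance measure on training example i.
      inclusion_mask: bool array of the same shape;
        entry [m, i] is True iff example i was in model m's training subset.

    Returns, for each example i:
      mem(x_i) = mean score of models trained WITH x_i
               - mean score of models trained WITHOUT x_i.
    """
    scores = np.asarray(per_example_scores, dtype=float)
    mask = np.asarray(inclusion_mask, dtype=bool)

    # Scores from models that saw the example ("IN" models).
    in_scores = np.where(mask, scores, np.nan)
    # Scores from models that did not see it ("OUT" models).
    out_scores = np.where(~mask, scores, np.nan)

    return np.nanmean(in_scores, axis=0) - np.nanmean(out_scores, axis=0)


if __name__ == "__main__":
    # Toy illustration: 10 models trained on random halves of 4 examples.
    rng = np.random.default_rng(0)
    mask = rng.random((10, 4)) < 0.5
    scores = rng.random((10, 4)) + 0.5 * mask  # seen examples score higher
    print(counterfactual_memorization(scores, mask))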
We estimate the influence of each memorized training example on the validation
set and on generated texts, showing how this can provide direct evidence of the
source of memorization at test time.
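The influence estimate can be sketched in the same way: for a fixed validation or generated text, compare the models whose training subset contained a given training example with those whose subset did not. The helper below, with hypothetical inputs analogous to the memorization sketch above, computes this difference for every (training example, validation text) pair.

```python
import numpy as np

def counterfactual_influence(train_inclusion_mask, valid_scores):
    """Sketch of per-pair influence of training examples on held-out texts.

    Hypothetical inputs:
      train_inclusion_mask: bool array (num_models, num_train);
        entry [m, i] is True iff train example i was in model m's subset.
      valid_scores: float array (num_models, num_valid);
        entry [m, j] is model m's performance measure on held-out text j.

    Returns an array of shape (num_train, num_valid):
      infl(x_i -> z_j) = mean score on z_j of models trained WITH x_i
                       - mean score on z_j of models trained WITHOUT x_i.
    Assumes each training example is included in at least one subset and
    excluded from at least one, so both averages are well defined.
    """
    mask = np.asarray(train_inclusion_mask, dtype=bool)   # (M, N_train)
    scores = np.asarray(valid_scores, dtype=float)        # (M, N_valid)

    in_counts = mask.sum(axis=0)                          # models that saw x_i
    out_counts = (~mask).sum(axis=0)                      # models that did not

    # Average validation scores over "IN" and "OUT" models for each example.
    in_mean = mask.T.astype(float) @ scores / in_counts[:, None]
    out_mean = (~mask).T.astype(float) @ scores / out_counts[:, None]
    return in_mean - out_mean
```

Large positive entries in this matrix point to validation or generated texts whose likelihood under the model depends heavily on a single training document, which is the kind of direct evidence of a memorization source the abstract refers to.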