Studying data memorization in neural language models helps us understand the
risks (e.g., to privacy or copyright) associated with models regurgitating
training data and aids in the development of countermeasures. Many prior works
-- and some recently deployed defenses -- focus on "verbatim memorization",
defined as a model generation that exactly matches a substring from the
training set. We argue that definitions based on verbatim memorization are too
restrictive and fail to capture more subtle forms of memorization.
Specifically, we design and implement an efficient defense that perfectly
prevents all verbatim memorization. And yet, we demonstrate that this "perfect"
filter does not prevent the leakage of training data. Indeed, it is easily
circumvented by plausible, minimally modified "style-transfer" prompts --
and in some cases even by the unmodified original prompts -- that still
extract memorized information. We conclude by discussing potential alternative
definitions and why defining memorization is a difficult yet crucial open
question for neural language models.
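
As a concrete illustration, the kind of verbatim-memorization filter described above can be thought of as a decode-time check that rejects any token whose emission would complete an n-gram found in the training set. The Python sketch below is a hypothetical illustration, not the paper's implementation: the window size N, the candidate-ranking callback next_token_candidates, and the in-memory n-gram set are all assumptions (an efficient deployment would more likely use a compact probabilistic index such as a Bloom filter).

    # Hypothetical sketch of a decode-time verbatim-memorization filter.
    # Names (N, build_ngram_set, filtered_decode) are illustrative, not from the paper.
    from typing import Callable, List, Set, Tuple

    N = 10  # assumed window size: block any generated N-gram seen in training

    def build_ngram_set(training_token_ids: List[List[int]], n: int = N) -> Set[Tuple[int, ...]]:
        """Index every length-n token window that occurs in the training corpus."""
        ngrams: Set[Tuple[int, ...]] = set()
        for doc in training_token_ids:
            for i in range(len(doc) - n + 1):
                ngrams.add(tuple(doc[i:i + n]))
        return ngrams

    def filtered_decode(
        next_token_candidates: Callable[[List[int]], List[int]],
        prompt_ids: List[int],
        train_ngrams: Set[Tuple[int, ...]],
        max_new_tokens: int = 128,
        n: int = N,
    ) -> List[int]:
        """Greedy-style decoding that refuses any token completing a training n-gram."""
        output = list(prompt_ids)
        for _ in range(max_new_tokens):
            chosen = None
            for candidate in next_token_candidates(output):  # model's ranked candidates
                window = tuple(output[-(n - 1):] + [candidate])
                # Accept the token only if it does not reproduce a training n-gram verbatim.
                if len(window) < n or window not in train_ngrams:
                    chosen = candidate
                    break
            if chosen is None:
                break  # every candidate would emit memorized text; stop generating
            output.append(chosen)
        return output

By construction, such a filter guarantees that no emitted span of n or more tokens matches the training set exactly, which is precisely why its circumvention by lightly reworded or restyled outputs illustrates the gap between verbatim and more general notions of memorization.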