Why do large language models sometimes output factual inaccuracies and
exhibit erroneous reasoning? The brittleness of these models, particularly when
executing long chains of reasoning, currently seems to be an inevitable price
to pay for their advanced ability to coherently synthesize knowledge,
pragmatics, and abstract thought. Towards making sense of this fundamentally
unsolved problem, this work identifies and analyzes the phenomenon of attention
glitches, in which the Transformer architecture's inductive biases
intermittently fail to capture robust reasoning. To isolate the issue, we
introduce flip-flop language modeling (FFLM), a parametric family of synthetic
benchmarks designed to probe the extrapolative behavior of neural language
models. This simple generative task requires a model to copy binary symbols
over long-range dependencies, ignoring the tokens in between. We find that
Transformer FFLMs suffer from a long tail of sporadic reasoning errors, some of
which we can eliminate using various regularization techniques. Our preliminary
mechanistic analyses show why the remaining errors may be very difficult to
diagnose and resolve. We hypothesize that attention glitches account for (some
of) the closed-domain hallucinations in natural LLMs.
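To make the flip-flop task concrete, the following is a minimal illustrative sketch of a sequence generator in this spirit. It assumes a vocabulary of "write" (w), "read" (r), and "ignore" (i) instructions, each followed by a binary symbol; the token names, instruction proportions, and sequence length are illustrative assumptions, not the benchmark's exact specification.

```python
import random

def generate_flipflop_sequence(length, p_ignore=0.8, seed=None):
    """Hypothetical flip-flop sequence generator: a model reading this stream
    must copy the most recently written bit whenever a read instruction appears,
    while ignoring the distractor bits that follow ignore instructions."""
    rng = random.Random(seed)
    memory = rng.choice("01")
    tokens = ["w", memory]  # start with a write so every read is well defined
    while len(tokens) < length:
        op = rng.choices(["w", "r", "i"], weights=[0.1, 0.1, p_ignore])[0]
        if op == "w":       # write: store a fresh bit in "memory"
            memory = rng.choice("01")
            tokens += ["w", memory]
        elif op == "r":     # read: the correct continuation is the stored bit
            tokens += ["r", memory]
        else:               # ignore: a distractor bit the model must skip over
            tokens += ["i", rng.choice("01")]
    return tokens[:length]

if __name__ == "__main__":
    print(" ".join(generate_flipflop_sequence(32, seed=0)))
```

Under these assumptions, a language model is evaluated on whether it predicts the stored bit after each read token; increasing the proportion of ignore instructions lengthens the dependency the model must bridge, which is what makes the task a probe of extrapolative, long-range copying.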