Existing large language models (LLMs) are known for generating "hallucinated"
content, namely fabricated text of plausible-looking, yet unfounded, facts.
To identify when these hallucination scenarios occur, we examine the properties
of the generated text in the embedding space. Specifically, we draw inspiration
from dynamic mode decomposition (DMD) to analyze the pattern evolution of text
embeddings across sentences. We empirically demonstrate that the spectrum of
sentence embeddings over a paragraph is consistently low-rank for the generated
text, unlike that of the ground-truth text. Importantly, we find
that evaluation cases exhibiting LLM hallucinations correspond to ground-truth
embedding patterns with a higher number of modes, which are poorly approximated
by the few modes associated with the LLM embedding patterns. In analogy to
near-field electromagnetic evanescent waves, the embedding DMD eigenmodes of
the generated text with hallucinations vanish quickly across sentences, as
opposed to those of the ground-truth text. This suggests that the hallucinations
result from both the generation techniques and the underlying representation.
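As a rough illustration of the kind of analysis described above, the sketch below applies exact DMD to a sequence of sentence embeddings and inspects the eigenvalue magnitudes, which govern how quickly each mode decays across sentences. This is not the paper's implementation; the embedding matrix, the truncation rank, and the helper name `dmd_modes` are placeholders assumed for the example.

```python
# Minimal sketch (assumptions, not the paper's code): exact DMD over a
# sentence-embedding trajectory E of shape (n_sentences, embedding_dim).
import numpy as np

def dmd_modes(E, rank=None):
    """Exact DMD over an ordered sequence of sentence embeddings.

    E    : (T, d) array, one embedding per sentence, in order.
    rank : optional SVD truncation rank.
    Returns the eigenvalues and DMD modes of the best-fit linear map
    advancing embeddings from one sentence to the next.
    """
    X, Y = E[:-1].T, E[1:].T                  # snapshot pairs, each d x (T-1)
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:                      # low-rank truncation
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Reduced operator approximating Y ~= A X in the truncated subspace
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return eigvals, modes

if __name__ == "__main__":
    # Placeholder embeddings; in practice these would come from a sentence
    # encoder applied to the generated or ground-truth paragraph.
    rng = np.random.default_rng(0)
    E = rng.standard_normal((20, 384))
    eigvals, _ = dmd_modes(E, rank=5)
    # |lambda| < 1 indicates a mode that decays across sentences, the
    # "evanescent" behaviour the abstract alludes to.
    print(np.abs(eigvals))
```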