The assumption that many forms of high-dimensional data, such as images,
actually live on low-dimensional manifolds, sometimes known as the manifold
hypothesis, underlies much of our intuition for how and why deep learning
works. Despite the central role that they play in our intuition, data manifolds
are surprisingly hard to measure in the case of high-dimensional, sparsely
sampled image datasets. This is particularly frustrating since the capability
to measure data manifolds would provide a revealing window into the inner
workings and dynamics of deep learning models. Motivated by this, we introduce
neural frames, a novel, easy-to-use tool inspired by the notion of a frame
from differential geometry. Neural frames can be used to explore the local
neighborhoods of data manifolds as they pass through the hidden layers of
neural networks, even when only a single datapoint is available. We present
a mathematical framework for neural frames and explore some of their
properties. We then use them to make a range of observations about how modern
model architectures and training routines, such as heavy augmentation and
adversarial training, affect the local behavior of a model.
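
The abstract does not spell out the construction, but a minimal sketch of the general idea, probing how a model responds around a single datapoint along an orthonormal set of input-space directions, might look like the following. The function name, the use of random orthonormalized directions, and the finite-difference sensitivity measure are illustrative assumptions, not the paper's definition of a neural frame.

```python
import torch
import torch.nn as nn

def neural_frame_probe(model, x, num_directions=8, eps=1e-3):
    """Estimate directional sensitivity of `model` around a single input x.

    An orthonormal set of input-space directions (a hypothetical stand-in
    for a local frame) is built via QR decomposition of random vectors;
    each direction is used for a finite-difference probe of the output.
    """
    x = x.detach()
    d = x.numel()
    # Orthonormalize random directions in input space.
    directions, _ = torch.linalg.qr(torch.randn(d, num_directions))
    base = model(x.unsqueeze(0))
    sensitivities = []
    for i in range(num_directions):
        v = directions[:, i].reshape_as(x)
        perturbed = model((x + eps * v).unsqueeze(0))
        # Norm of the induced change per unit perturbation.
        sensitivities.append(((perturbed - base).norm() / eps).item())
    return sensitivities

# Usage with a toy model and a single small "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 32),
                      nn.ReLU(), nn.Linear(32, 10))
x = torch.randn(3, 8, 8)
print(neural_frame_probe(model, x))
```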