Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks
Despite its success in a wide range of applications, characterizing the
generalization properties of stochastic gradient descent (SGD) in non-convex
deep learning problems is still an important challenge. While modeling the
trajectories of SGD via stochastic differential equations (SDEs) under
heavy-tailed gradient noise has recently shed light on several peculiar
characteristics of SGD, a rigorous treatment of the generalization properties
of such SDEs in a learning-theoretic framework is still missing. Aiming to
bridge this gap, in this paper, we prove generalization bounds for SGD under
the assumption that its trajectories can be well-approximated by a \emph{Feller
process}, which defines a rich class of Markov processes that includes several
recent SDE representations (both Brownian and heavy-tailed) as special cases.
We show that the generalization error can be controlled by the \emph{Hausdorff
dimension} of the trajectories, which is intimately linked to the tail behavior
of the driving process. Our results imply that heavier-tailed processes should
achieve better generalization; hence, the tail-index of the process can be used
as a notion of "capacity metric". We support our theory with experiments on
deep neural networks illustrating that the proposed capacity metric accurately
estimates the generalization error and, unlike existing capacity metrics in
the literature, does not necessarily grow with the number of parameters.

Comment: 22 pages, published at NeurIPS 2020 (Spotlight).
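The abstract contrasts Brownian-driven and heavy-tailed SDE approximations of
SGD. The following is a minimal illustrative sketch of that contrast, not the
paper's experimental setup: SGD-like iterates on a 1-D quadratic driven by
symmetric alpha-stable noise (alpha = 2 recovers the Gaussian/Brownian case;
alpha < 2 gives heavy tails). The objective, step size, and noise scale are
illustrative assumptions; scipy.stats.levy_stable supplies the stable samples.

import numpy as np
from scipy.stats import levy_stable

def run_iterates(alpha, eta=0.01, n_steps=5000, seed=0):
    """Iterate x_{k+1} = x_k - eta * grad(x_k) + eta**(1/alpha) * noise_k,
    where noise_k is symmetric alpha-stable (beta = 0). The eta**(1/alpha)
    factor is the scaling under which the iterates discretize a Levy-driven
    SDE; alpha = 2 reduces to the Brownian case."""
    rng = np.random.default_rng(seed)
    x = 1.0
    traj = np.empty(n_steps)
    noise = levy_stable.rvs(alpha, 0.0, size=n_steps, random_state=rng)
    for k in range(n_steps):
        grad = x  # gradient of the toy objective f(x) = x**2 / 2
        x = x - eta * grad + eta ** (1.0 / alpha) * 0.1 * noise[k]
        traj[k] = x
    return traj

brownian = run_iterates(alpha=2.0)  # Gaussian noise: Brownian-driven SDE
heavy = run_iterates(alpha=1.5)     # heavier tails: occasional large jumps
print("max |x|, Brownian:    ", np.abs(brownian).max())
print("max |x|, heavy-tailed:", np.abs(heavy).max())

The heavy-tailed run typically shows rare large excursions absent from the
Brownian run, which is the qualitative behavior the SDE modeling captures.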
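The abstract proposes the tail-index of the driving process as a capacity
metric. As a sketch of how such an index could be estimated in practice,
below is the classical Hill estimator applied to gradient-noise magnitudes;
this is one standard tail-index estimator, not necessarily the one used in
the paper, and noise_norms is a hypothetical stand-in for per-minibatch
gradient-noise norms (here simulated from a Cauchy, whose true alpha is 1).

import numpy as np

def hill_estimator(samples, k=100):
    """Hill estimate of the tail index alpha from the k largest order
    statistics of |samples|; smaller alpha means heavier tails."""
    x = np.sort(np.abs(np.asarray(samples)))[::-1]  # descending magnitudes
    logs = np.log(x[:k]) - np.log(x[k])             # log-excesses over x_(k+1)
    return 1.0 / np.mean(logs)

# Hypothetical usage: noise_norms could be ||stochastic grad - full grad||
# collected over minibatches; we simulate Cauchy samples as a stand-in.
rng = np.random.default_rng(0)
noise_norms = np.abs(rng.standard_cauchy(10_000))  # true tail index: 1
print("Hill tail-index estimate:", hill_estimator(noise_norms))

Under the paper's thesis, a smaller estimated tail index (heavier tails, and
hence smaller Hausdorff dimension of the trajectories) would correspond to a
smaller capacity and better expected generalization.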