Scale-invariant Bayesian Neural Networks with Connectivity Tangent Kernel
Explaining generalization and preventing over-confident predictions are
central goals of studies on the loss landscape of neural networks. Flatness,
defined as invariance of the loss under perturbations of a pre-trained
solution, is widely accepted as a predictor of generalization in this context.
However, it has been pointed out that flatness and the resulting
generalization bounds can be changed arbitrarily by rescaling parameters, and
previous studies solved this problem only partially and under restrictions:
counter-intuitively, their generalization bounds either remained variant under
function-preserving parameter scaling transformations or applied only to
impractical network structures. As a more fundamental solution, we propose new
prior and posterior distributions that are invariant to scaling
transformations, obtained by \textit{decomposing} parameters into their scale
and connectivity; the resulting generalization bound therefore describes the
generalizability of a broad class of networks under a more practical class of
transformations, such as weight decay with batch normalization. We also show
that this scaling issue adversely affects the uncertainty calibration of the
Laplace approximation, and we propose a solution based on our invariant
posterior. We empirically demonstrate that our posterior provides effective
flatness and calibration measures with low complexity under such practical
parameter transformations, supporting its practical effectiveness in line with
our rationale.
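To make the scaling problem concrete: for a two-layer ReLU network, the rescaling W1 -> a*W1, W2 -> W2/a leaves the function unchanged but shrinks any flatness proxy built on fixed-size raw-parameter perturbations by roughly 1/a, while a proxy expressed on the scale-free connectivity direction W1/||W1|| does not move. The NumPy sketch below is an illustrative toy under these assumptions, not the paper's actual prior/posterior construction; the array shapes and perturbation size are made up for the demo.

```python
# Illustrative toy (not the paper's construction): the rescaling
# W1 -> a*W1, W2 -> W2/a preserves the function (ReLU is positively
# homogeneous), yet a flatness proxy based on fixed-size raw-parameter
# perturbations changes with a. Perturbing the scale-free "connectivity"
# direction W1/||W1|| instead is invariant to a.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))
W1, W2 = rng.normal(size=(10, 32)), rng.normal(size=(32, 1))
eps = 1e-3 * rng.normal(size=W1.shape)     # one fixed perturbation for all a

def f(W1, W2, x):
    return np.maximum(x @ W1, 0.0) @ W2

for a in (1.0, 10.0):
    V1, V2 = a * W1, W2 / a                # function-preserving rescaling
    assert np.allclose(f(V1, V2, x), f(W1, W2, x))
    # naive proxy: output change under a fixed-size parameter perturbation
    naive = np.abs(f(V1 + eps, V2, x) - f(V1, V2, x)).mean()
    # decompose V1 = n1 * c1 into scale n1 and connectivity direction c1,
    # then perturb the connectivity instead of the raw parameters
    n1 = np.linalg.norm(V1)
    c1 = V1 / n1
    inv = np.abs(f((c1 + eps) * n1, V2, x) - f(V1, V2, x)).mean()
    print(f"a={a:4.1f}  naive={naive:.5f}  scale-invariant={inv:.5f}")
```

Running it prints a naive value that drops by about a factor of 10 at a = 10 while the decomposed measure stays constant, which is the arbitrariness the abstract refers to.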
A modern look at the relationship between sharpness and generalization
Sharpness of minima is a promising quantity that can correlate with
generalization in deep networks and, when optimized during training, can
improve generalization. However, standard sharpness is not invariant under
reparametrizations of neural networks, and, to fix this,
reparametrization-invariant sharpness definitions have been proposed, most
prominently adaptive sharpness (Kwon et al., 2021). But does it really capture
generalization in modern practical settings? We comprehensively explore this
question in a detailed study of various definitions of adaptive sharpness in
settings ranging from training from scratch on ImageNet and CIFAR-10 to
fine-tuning CLIP on ImageNet and BERT on MNLI. We focus mostly on
transformers, for which little is known in terms of sharpness despite their
widespread use.
Overall, we observe that sharpness does not correlate well with generalization
but rather with some training parameters like the learning rate that can be
positively or negatively correlated with generalization depending on the setup.
Interestingly, in multiple cases we observe a consistent negative correlation
of sharpness with out-of-distribution error, implying that sharper minima can
generalize better. Finally, we illustrate on a simple model that the right
sharpness measure is highly data-dependent, and that we do not yet understand
this aspect well for realistic data distributions. The code of our experiments is
available at https://github.com/tml-epfl/sharpness-vs-generalization
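As context for the adaptive sharpness definition referenced above (Kwon et al., 2021): each coordinate's perturbation is bounded relative to that weight's own magnitude, |d_i| <= rho*|w_i|, which makes the worst-case loss increase invariant under function-preserving elementwise rescalings of the parameters. The sketch below illustrates this on an assumed toy quadratic loss, using random search in place of the gradient ascent used in practice; it is not the implementation from the linked repository.

```python
# Minimal sketch of adaptive sharpness in the spirit of Kwon et al. (2021):
# worst-case loss increase over perturbations with |d_i| <= rho * |w_i|
# (an elementwise p = inf ball). The toy quadratic loss, rho, and the
# random-search maximization (instead of gradient ascent) are assumptions.
import numpy as np

H = np.diag([10.0, 0.1])              # toy curvature: one sharp, one flat axis
loss = lambda w: 0.5 * w @ H @ w

def adaptive_sharpness(w, loss, rho=0.05, n_trials=2000, seed=0):
    """Estimate max over sampled d with |d_i| <= rho*|w_i| of loss(w+d) - loss(w)."""
    rng = np.random.default_rng(seed)
    base = loss(w)
    worst = 0.0
    for _ in range(n_trials):
        d = rho * np.abs(w) * rng.uniform(-1.0, 1.0, size=w.shape)
        worst = max(worst, loss(w + d) - base)
    return worst

w = np.array([0.3, 3.0])
c = np.array([10.0, 0.5])             # elementwise reparametrization
loss_c = lambda v: loss(v / c)        # same function in rescaled coordinates
print(f"{adaptive_sharpness(w, loss):.6f}")          # reference value
print(f"{adaptive_sharpness(c * w, loss_c):.6f}")    # same: reparametrization-invariant
```

With a shared seed, both calls print the same value even though the second operates on rescaled coordinates; that invariance is the property the paper stress-tests against generalization.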
- …