When do Models Generalize? A Perspective from Data-Algorithm Compatibility
One of the major open problems in machine learning is to characterize
generalization in the overparameterized regime, where most traditional
generalization bounds become inconsistent (Nagarajan and Kolter, 2019). In many
scenarios, this failure can be attributed to the fact that such bounds obscure the crucial interplay
between the training algorithm and the underlying data distribution. To address
this issue, we propose a concept named compatibility, which quantitatively
characterizes generalization in a manner that is both data-relevant and algorithm-relevant.
By considering the entire training trajectory and focusing on
early-stopping iterates, compatibility exploits information from both the data and the
algorithm, and is therefore a more suitable notion of generalization. We
validate this by theoretically studying compatibility under the setting of
solving overparameterized linear regression with gradient descent.
Specifically, we perform a data-dependent trajectory analysis and derive a
sufficient condition for compatibility in such a setting. Our theoretical
results demonstrate that, in the sense of compatibility, generalization holds
under significantly weaker restrictions on the problem instance than in the
previous last-iterate analysis.
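As a rough illustration of the kind of setting studied here, the sketch below runs gradient descent on an overparameterized linear regression instance and tracks the excess risk along the training trajectory, which is where an early-stopping iterate can beat the last iterate. The dimensions, noise level, step size, and isotropic Gaussian design are hypothetical choices for the demo, not the paper's construction.

```python
import numpy as np

# Minimal sketch (not the paper's construction): overparameterized linear
# regression (d > n) solved with gradient descent from zero initialization.
# We track the excess risk along the trajectory to illustrate why an
# early-stopping iterate can generalize better than the last iterate.
rng = np.random.default_rng(0)
n, d = 50, 500                              # fewer samples than parameters
w_star = rng.normal(size=d) / np.sqrt(d)    # hypothetical ground-truth signal
X = rng.normal(size=(n, d))
y = X @ w_star + 0.5 * rng.normal(size=n)   # noisy labels

w = np.zeros(d)
lr = 1e-3
risks = []
for t in range(2000):
    grad = X.T @ (X @ w - y) / n            # gradient of the empirical squared loss
    w -= lr * grad
    # excess risk for isotropic Gaussian inputs reduces to ||w - w*||^2
    risks.append(np.sum((w - w_star) ** 2))

best_t = int(np.argmin(risks))
print(f"best early-stopping iterate: t={best_t}, risk={risks[best_t]:.3f}")
print(f"last iterate:                t=1999, risk={risks[-1]:.3f}")
```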
Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
Empirical studies show that gradient-based methods can learn deep neural
networks (DNNs) with very good generalization performance in the
over-parameterization regime, where DNNs can easily fit a random labeling of
the training data. Very recently, a line of work has shown in theory that, with
over-parameterization and proper random initialization, gradient-based methods
can find the global minima of the training loss for DNNs. However, existing
generalization error bounds are unable to explain the good generalization
performance of over-parameterized DNNs. The major limitation of most existing
generalization bounds is that they are based on uniform convergence and are
independent of the training algorithm. In this work, we derive an
algorithm-dependent generalization error bound for deep ReLU networks, and show
that under certain assumptions on the data distribution, gradient descent (GD)
with proper random initialization is able to train a sufficiently
over-parameterized DNN to achieve arbitrarily small generalization error. Our
work sheds light on explaining the good generalization performance of
over-parameterized deep neural networks.
Comment: 27 pages. This version simplifies the proof and improves the presentation in Version 3. In AAAI 2020.
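As a rough illustration of the regime this abstract concerns (and not the paper's deep-network analysis or its bound), the sketch below trains a wide two-layer ReLU network with full-batch gradient descent from Gaussian random initialization on a small synthetic task. The width, sample size, labels, and step size are hypothetical choices for the demo.

```python
import numpy as np

# Minimal sketch: an over-parameterized two-layer ReLU network (width m >> n)
# trained with full-batch gradient descent from Gaussian random initialization.
rng = np.random.default_rng(0)
n, d, m = 50, 20, 2000                       # m >> n: over-parameterized
X = rng.normal(size=(n, d)) / np.sqrt(d)     # inputs with roughly unit norm
y = rng.choice([-1.0, 1.0], size=n)          # hypothetical binary labels

W = rng.normal(size=(m, d)) / np.sqrt(d)     # first layer, random init
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed second layer

def forward(W):
    return np.maximum(X @ W.T, 0.0) @ a      # ReLU features times output weights

lr = 2.0
for t in range(1001):
    resid = forward(W) - y                   # squared-loss residuals
    if t % 250 == 0:
        print(f"step {t:4d}  train loss {0.5 * np.mean(resid ** 2):.4f}")
    mask = (X @ W.T > 0).astype(float)       # ReLU gates
    grad_W = ((mask * np.outer(resid, a)).T @ X) / n  # dL/dW for 0.5*mean squared loss
    W -= lr * grad_W
```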