This paper considers the learning of logical (Boolean) functions with a focus
on the generalization on the unseen (GOTU) setting, a strong case of
out-of-distribution generalization. This is motivated by the fact that the rich
combinatorial nature of data in certain reasoning tasks (e.g.,
arithmetic/logic) makes representative data sampling challenging, and learning
successfully under GOTU gives a first vignette of an 'extrapolating' or
'reasoning' learner. We then study how different network architectures trained
by (S)GD perform under GOTU and provide both theoretical and experimental
evidence that for a class of network models including instances of
Transformers, random features models, and diagonal linear networks, a
min-degree-interpolator is learned on the unseen. We also provide evidence that
other instances with larger learning rates or mean-field networks reach leaky
min-degree solutions.
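To make the min-degree interpolator concrete, the following sketch (our own illustrative construction, not code from the paper) computes it by brute force for a small Boolean function: a full parity on {-1, +1}^3 observed only on the half-cube where the last coordinate is frozen to +1, with the Fourier-Walsh monomials as basis and a least-squares search over increasing degree caps.

```python
import itertools
import numpy as np

n = 3
target = lambda x: x[0] * x[1] * x[2]  # full parity on {-1, +1}^3

# GOTU-style split: train on the half-cube x3 = +1, hold out x3 = -1.
cube = [np.array(p) for p in itertools.product([-1, 1], repeat=n)]
seen = [x for x in cube if x[-1] == 1]

# Walsh basis: one monomial chi_S(x) = prod_{i in S} x_i per subset S.
subsets = [S for d in range(n + 1) for S in itertools.combinations(range(n), d)]
chi = lambda S, x: np.prod([x[i] for i in S]) if S else 1.0

# Min-degree interpolation: find the smallest d such that monomials of
# degree <= d already fit the seen data exactly.
y = np.array([target(x) for x in seen], dtype=float)
for d in range(n + 1):
    basis = [S for S in subsets if len(S) <= d]
    A = np.array([[chi(S, x) for S in basis] for x in seen])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    if np.allclose(A @ coef, y):
        break

print(f"interpolating degree: {d}")
for S, c in zip(basis, coef):
    if abs(c) > 1e-9:
        print(f"chi_{set(S) if S else '{}'}: {c:+.2f}")
```

On the seen half-cube the parity x1*x2*x3 coincides with x1*x2, so the search stops at degree 2 with a single coefficient on chi_{0, 1}: the min-degree interpolator fits the seen data perfectly yet differs from the target everywhere on the unseen half-cube.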
These findings lead to two implications: (1) we provide
an explanation for the length generalization problem (e.g., Anil et al. 2022);
(2) we introduce a curriculum learning algorithm called Degree-Curriculum that
learns monomials more efficiently by incrementing supports.
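The core idea behind such a curriculum can be sketched as follows (a minimal rendition under our own assumptions; the architecture, learning rate, and step counts are illustrative, not the paper's settings): order training inputs by support size, here the number of coordinates set to -1 with +1 taken as the default value, and raise the admissible support cap stage by stage so that low-degree structure is fit first.

```python
import itertools
import torch
import torch.nn as nn

n = 6
target = lambda x: x[:, 0] * x[:, 1] * x[:, 2]  # a degree-3 monomial

X = torch.tensor(list(itertools.product([-1.0, 1.0], repeat=n)))
y = target(X)
# Support of x in {-1, +1}^n: the coordinates set to -1.
support = (X == -1).sum(dim=1)

model = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for cap in range(n + 1):        # curriculum: grow the support cap
    Xc, yc = X[support <= cap], y[support <= cap]
    for _ in range(200):        # a few full-batch SGD steps per stage
        opt.zero_grad()
        loss = loss_fn(model(Xc).squeeze(-1), yc)
        loss.backward()
        opt.step()
    print(f"support <= {cap}: train loss {loss.item():.4f}")
```

Each stage only admits inputs whose support fits under the current cap, so the network sees the small-support inputs that pin down low-degree coefficients before it ever encounters the full hypercube.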