Random features models play a distinguished role in the theory of deep
learning, describing the behavior of neural networks close to their
infinite-width limit. In this work, we present a thorough analysis of the
generalization performance of random features models for generic supervised
learning problems with Gaussian data. Our approach, built with tools from the
statistical mechanics of disordered systems, maps the random features model to
an equivalent polynomial model, and allows us to plot average generalization
curves as functions of the two main control parameters of the problem: the
number of random features N and the size P of the training set, both
assumed to scale as powers of the input dimension D. Our results extend
beyond the case of proportional scaling between N, P and D. They are in accordance
with rigorous bounds known for certain particular learning tasks and are in
quantitative agreement with numerical experiments performed over many orders of
magnitude of N and P. We also find good agreement far from the asymptotic
limits where D→∞ and at least one of P/D^K and N/D^L remains
finite.
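
As a rough illustration of the setting described above, the following Python sketch trains a random features model on synthetic Gaussian data and estimates its test error for training-set sizes and feature counts scaling as powers of the input dimension. It is only a minimal sketch under assumptions not taken from the paper: a tanh activation, a linear teacher as the target rule, ridge regression for the second-layer weights, and arbitrary values of D, K, and L.

```python
import numpy as np

rng = np.random.default_rng(0)

def rf_generalization_error(D, N, P, P_test=2000, reg=1e-3):
    """Monte Carlo estimate of the test error of a random features model
    on Gaussian data. Activation, target rule, and ridge penalty are
    illustrative assumptions, not the paper's exact setup."""
    W = rng.standard_normal((N, D)) / np.sqrt(D)   # fixed random first-layer weights
    teacher = rng.standard_normal(D) / np.sqrt(D)  # hypothetical linear target rule

    def features(X):
        # Random features map: nonlinearity applied to random projections.
        return np.tanh(X @ W.T) / np.sqrt(N)

    # Gaussian training and test data labeled by the teacher.
    X_train = rng.standard_normal((P, D))
    y_train = X_train @ teacher
    X_test = rng.standard_normal((P_test, D))
    y_test = X_test @ teacher

    # Ridge-regress the trainable second-layer weights on the training set.
    Phi = features(X_train)
    a = np.linalg.solve(Phi.T @ Phi + reg * np.eye(N), Phi.T @ y_train)

    # Average squared error on fresh Gaussian inputs.
    y_pred = features(X_test) @ a
    return np.mean((y_pred - y_test) ** 2)

D = 50
for K, L in [(1.0, 1.0), (1.5, 1.0)]:  # P ~ D^K, N ~ D^L (example exponents)
    P, N = int(D ** K), int(D ** L)
    err = rf_generalization_error(D, N, P)
    print(f"K={K}, L={L}: P={P}, N={N}, test MSE={err:.3f}")
```

Sweeping K and L in such a simulation is one way to trace empirical generalization curves against the scaling regimes discussed in the abstract.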