Artificial neural networks have been studied through the prism of statistical
mechanics as disordered systems since the 1980s, starting from the simple
models of Hopfield's associative memory and the single-neuron perceptron
classifier.
Assuming data is generated by a teacher model, asymptotic generalisation
predictions were originally derived using the replica method, and the dynamics
of online learning was described in the large-system limit. In this
chapter, we review the key original ideas of this literature along with their
heritage in the ongoing quest to understand the efficiency of modern deep
learning algorithms. One goal of current and future research is to characterize
the bias of learning algorithms toward well-generalising minima in complex
overparametrized loss landscapes with many solutions that perfectly interpolate
the training data. Works on perceptrons, two-layer committee machines and
kernel-like learning machines shed light on the benefits of
overparametrization. Another goal is to understand the advantage of depth, now
that models commonly feature tens or hundreds of layers. While replica
computations apparently fall short of describing learning in general deep
neural networks, studies of simplified linear or untrained models, as well as
the derivation of scaling laws, provide first elements of an answer.

Comment: Contribution to the book Spin Glass Theory and Far Beyond: Replica
Symmetry Breaking after 40 Years; Chap. 2