Recent works have demonstrated the existence of a double-descent phenomenon in
the generalization error of neural networks, where highly overparameterized
models escape overfitting and achieve good test performance, at odds with the
standard bias-variance trade-off described by statistical learning theory. In
the present work, we explore a link between this phenomenon and the increasing
complexity and sensitivity of the function represented by neural networks. In
particular, we study the Boolean mean dimension (BMD), a metric developed in
the context of Boolean function analysis. Focusing on a simple teacher-student
setting for the random feature model, we derive a theoretical analysis based on
the replica method that yields an interpretable expression for the BMD, in the
high dimensional regime where the number of data points, the number of
features, and the input size grow to infinity. We find that, as the degree of
overparameterization of the network increases, the BMD exhibits a pronounced
peak at the interpolation threshold, coinciding with the peak in the
generalization error, before slowly decaying to a low asymptotic value. The same
phenomenology is then reproduced in numerical experiments with different model
classes and training setups. Moreover, we find empirically that adversarially
initialized models tend to show higher BMD values, and that models that are
more robust to adversarial attacks exhibit a lower BMD.
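The BMD quantifies the average sensitivity of a function to single-coordinate
flips of a Boolean input, normalized by its total variance. As a concrete
illustration, the following is a minimal Monte Carlo sketch, not taken from the
paper (the function and parameter names are our own), of how the BMD of a
generic real-valued function on the hypercube can be estimated from the
standard influence-based identity BMD(f) = sum_i Inf_i(f) / Var(f), where
Inf_i(f) = E[((f(x) - f(x^(i)))/2)^2] and x^(i) denotes x with its i-th
coordinate flipped.

```python
import numpy as np

def boolean_mean_dimension(f, d, n_samples=10_000, seed=None):
    """Estimate the BMD of f: {-1,+1}^d -> R via coordinate-flip influences.

    Hypothetical helper for illustration; f must accept an array of shape
    (n_samples, d) and return an array of shape (n_samples,).
    """
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=(n_samples, d))  # uniform Boolean inputs
    fx = f(x)
    total_influence = 0.0
    for i in range(d):
        x_flip = x.copy()
        x_flip[:, i] *= -1.0  # flip the i-th coordinate
        # Inf_i(f) estimated as the mean squared (half-)change under the flip
        total_influence += np.mean(((fx - f(x_flip)) / 2.0) ** 2)
    return total_influence / np.var(fx)

if __name__ == "__main__":
    # Sanity check: a pure degree-2 monomial x_1 * x_2 has BMD exactly 2.
    print(boolean_mean_dimension(lambda x: x[:, 0] * x[:, 1], d=10))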