
Surprises in High-Dimensional Ridgeless Least Squares Interpolation

Abstract

Interpolators -- estimators that achieve zero training error -- have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type. In this paper, we study minimum $\ell_2$ norm ("ridgeless") interpolation in high-dimensional least squares regression. We consider two different models for the feature distribution: a linear model, where the feature vectors $x_i \in \mathbb{R}^p$ are obtained by applying a linear transform to a vector of i.i.d. entries, $x_i = \Sigma^{1/2} z_i$ (with $z_i \in \mathbb{R}^p$); and a nonlinear model, where the feature vectors are obtained by passing the input through a random one-layer neural network, $x_i = \varphi(W z_i)$ (with $z_i \in \mathbb{R}^d$, $W \in \mathbb{R}^{p \times d}$ a matrix of i.i.d. entries, and $\varphi$ an activation function acting componentwise on $W z_i$). We recover -- in a precise quantitative way -- several phenomena that have been observed in large-scale neural networks and kernel machines, including the "double descent" behavior of the prediction risk, and the potential benefits of overparametrization.

Comment: 68 pages; 16 figures. This revision contains a non-asymptotic version of earlier results, and results for general coefficient
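As a rough illustration of the setup above, the sketch below computes the minimum $\ell_2$ norm ("ridgeless") interpolator via the pseudoinverse and estimates its out-of-sample risk under the linear feature model with the isotropic choice $\Sigma = I$. This is a minimal sketch, not the authors' code: the function names, the isotropic covariance, and the specific signal-to-noise ratio are assumptions made for illustration.

```python
# Minimal sketch (illustrative, not the paper's experiments) of min-l2-norm
# interpolation under the linear feature model x_i = Sigma^{1/2} z_i with Sigma = I.
import numpy as np

rng = np.random.default_rng(0)

def min_norm_interpolator(X, y):
    """Minimum l2-norm solution of X beta = y: beta_hat = X^+ y (pseudoinverse).
    When p > n this interpolates the training data exactly; when p <= n it
    reduces to ordinary least squares."""
    return np.linalg.pinv(X) @ y

def risk_linear_model(n=200, gamma=2.0, snr=5.0, sigma_noise=1.0, n_test=2000):
    """Monte Carlo estimate of the out-of-sample risk of the ridgeless
    interpolator with isotropic features, at aspect ratio gamma = p/n."""
    p = int(gamma * n)
    beta = rng.standard_normal(p)
    beta *= np.sqrt(snr) * sigma_noise / np.linalg.norm(beta)  # fix the SNR ||beta||^2 / sigma^2
    X = rng.standard_normal((n, p))          # x_i = Sigma^{1/2} z_i with Sigma = I
    y = X @ beta + sigma_noise * rng.standard_normal(n)
    beta_hat = min_norm_interpolator(X, y)
    X_test = rng.standard_normal((n_test, p))
    y_test = X_test @ beta + sigma_noise * rng.standard_normal(n_test)
    return np.mean((X_test @ beta_hat - y_test) ** 2)

if __name__ == "__main__":
    # Sweeping gamma = p/n across 1 traces a "double descent" shaped risk curve:
    # the test error peaks near gamma = 1 and can decrease again as gamma grows.
    for gamma in (0.25, 0.5, 0.9, 1.1, 2.0, 5.0, 10.0):
        print(f"gamma = p/n = {gamma:5.2f}  test MSE ~ {risk_linear_model(gamma=gamma):.3f}")
```

The nonlinear model could be simulated analogously by generating inputs $z_i \in \mathbb{R}^d$ and a random weight matrix $W$, then forming features such as `X = np.tanh(Z @ W.T)` before applying the same interpolator; the choice of `tanh` here is an illustrative activation, not one prescribed by the paper.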
