Over-parameterized residual networks (ResNets) are amongst the most
successful convolutional neural architectures for image processing. Here we
study their properties through their Gaussian Process and Neural Tangent
kernels. We derive explicit formulas for these kernels, analyze their spectra,
and provide bounds on their implied condition numbers. Our results indicate
that (1) with ReLU activation, the eigenvalues of these residual kernels decay
polynomially at a rate similar to that of the corresponding kernels without
skip connections, thus maintaining a similar frequency bias; (2)
however, residual kernels are more locally biased. Our analysis further shows
that the matrices obtained with these residual kernels have more favorable
condition numbers at finite depths than those obtained without the skip
connections, thereby enabling faster convergence of training with gradient
descent.
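As a rough numerical illustration of the condition-number comparison, the sketch below builds NNGP kernel matrices for a plain and a residual ReLU network via the standard arc-cosine recursion and compares their condition numbers on a small dataset. The residual update K <- K + alpha^2 * (arc-cosine map of K), the He-style scaling of the plain layer, and the choice of alpha are illustrative assumptions, not the exact parameterization derived in the paper.

```python
import numpy as np

def relu_arccos(K):
    """One application of the ReLU (order-1 arc-cosine) NNGP kernel map to a PSD matrix K."""
    d = np.sqrt(np.diag(K))
    cos = np.clip(K / np.outer(d, d), -1.0, 1.0)
    theta = np.arccos(cos)
    return np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * cos) / (2 * np.pi)

def nngp_kernel(X, depth, residual=False, alpha=1.0):
    """NNGP kernel matrix of a depth-`depth` ReLU network on the rows of X.
    residual=True adds the arc-cosine map on top of an identity skip connection,
    K <- K + alpha^2 * relu_arccos(K)  (assumed parameterization for illustration);
    residual=False applies a variance-preserving plain ReLU layer, K <- 2 * relu_arccos(K)."""
    K = X @ X.T / X.shape[1]
    for _ in range(depth):
        if residual:
            K = K + alpha**2 * relu_arccos(K)
        else:
            K = 2.0 * relu_arccos(K)
    return K

# Toy comparison of condition numbers for inputs on the unit sphere.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)

for depth in (5, 20):
    k_plain = np.linalg.cond(nngp_kernel(X, depth, residual=False))
    k_res = np.linalg.cond(nngp_kernel(X, depth, residual=True, alpha=0.5))
    print(f"depth {depth:3d}: cond(plain) = {k_plain:.3e}, cond(residual) = {k_res:.3e}")
```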