Convergence Theory of Learning Over-parameterized ResNet: A Full Characterization
ResNet structure has achieved great empirical success since its debut. Recent
work established the convergence of learning over-parameterized ResNet with a
scaling factor $\tau = 1/L$ on the residual branch, where $L$ is the network
depth. However, it is not clear how learning ResNet behaves for other values of
$\tau$. In this paper, we fully characterize the convergence theory of gradient
descent for learning over-parameterized ResNet with different values of $\tau$.
Specifically, hiding logarithmic factors and constant coefficients, we show
that for $\tau \le 1/\sqrt{L}$ gradient descent is guaranteed to converge to a
global minimum, and in particular, when $\tau \le 1/L$ the convergence is
independent of the network depth. Conversely, we show that for $\tau > L^{-1/2+c}$
with any positive constant $c$, the forward output grows at least at rate $L^c$
in expectation, and learning then fails because of gradient explosion for large
$L$. This means the bound $\tau \le 1/\sqrt{L}$ is sharp for learning ResNet
with arbitrary depth. To the best of our knowledge, this is the first work that
studies learning ResNet with the full range of $\tau$.
Comment: 31 pages
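
As a rough illustration of this threshold, the following sketch (an assumption-laden toy, not the paper's construction: the two-layer residual branch $h_{l+1} = h_l + \tau V_l\,\mathrm{relu}(W_l h_l)$, the Gaussian initialization, and the width are choices made here for the demo) compares the forward-output norm at initialization for $\tau = 1/\sqrt{L}$, which should stay roughly constant in depth, against $\tau = L^{-1/2+c}$ with $c = 1/4$, which should blow up with $L$:

```python
import numpy as np

def output_norm(L, tau, width=256, seed=0):
    """Norm of the output of a toy residual network
    h_{l+1} = h_l + tau * V_l @ relu(W_l @ h_l) at random initialization.
    (The architecture and initialization are illustrative assumptions.)"""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(width) / np.sqrt(width)  # unit-scale input
    for _ in range(L):
        # He-style first layer, norm-preserving zero-mean second layer
        W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
        V = rng.standard_normal((width, width)) / np.sqrt(width)
        h = h + tau * (V @ np.maximum(W @ h, 0.0))  # residual branch scaled by tau
    return float(np.linalg.norm(h))

for L in (16, 64, 256, 1024):
    print(f"L={L:4d}  tau=L^(-1/2): {output_norm(L, L**-0.5):8.3f}"
          f"  tau=L^(-1/4): {output_norm(L, L**-0.25):12.4g}")
```

In this toy, each layer multiplies the expected squared norm by roughly $1 + \tau^2$, so the output norm scales like $e^{\tau^2 L/2}$: bounded for $\tau \le 1/\sqrt{L}$, but exploding like $e^{L^{2c}/2}$ for $\tau = L^{-1/2+c}$, consistent with the sharpness claim above.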