2 research outputs found

    Collective evolution of weights in wide neural networks

    We derive a nonlinear integro-differential transport equation describing the collective evolution of weights under gradient descent in large-width neural-network-like models. We characterize stationary points of the evolution and analyze several scenarios where the transport equation can be solved approximately. We test our general method in the special case of linear free-knot splines, and find good agreement between theory and experiment in observations of global optima, stability of stationary points, and convergence rates.
    Comment: 18 pages, 5 figures
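    For orientation, the canonical mean-field description of gradient-descent training in wide two-layer networks is a transport (continuity) equation for the weight distribution; the form below is the standard one from the mean-field literature, shown only as an illustration and not taken from the paper itself:

    \[
    \partial_t \rho_t(w) \;=\; \nabla_w \cdot \Big( \rho_t(w)\, \nabla_w \frac{\delta L[\rho_t]}{\delta \rho}(w) \Big),
    \]

    where \(\rho_t\) is the distribution of hidden-unit weights at training time \(t\) and \(L[\rho]\) is the population loss of the network whose output averages over \(\rho\); the first variation \(\delta L/\delta \rho\) involves an integral over \(\rho_t\), which is what makes the transport equation integro-differential and nonlinear.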

    Dynamically Stable Infinite-Width Limits of Neural Classifiers

    Recent research has focused on two different approaches to studying neural network training in the limit of infinite width: (1) the mean-field (MF) approximation and (2) the constant neural tangent kernel (NTK) approximation. These two approaches assume different scalings of hyperparameters with the width of a network layer and, as a result, yield different infinite-width limit models. We propose a general framework to study how the limit behavior of neural models depends on the scaling of hyperparameters with network width. Our framework allows us to derive the scalings of the existing MF and NTK limits, as well as an uncountable number of other scalings that lead to dynamically stable limit behavior of the corresponding models. However, only a finite number of distinct limit models are induced by these scalings. Each distinct limit model corresponds to a unique combination of properties such as boundedness of logits and tangent kernels at initialization, or stationarity of tangent kernels. The existing MF and NTK limit models, as well as one novel limit model, satisfy most of the properties demonstrated by finite-width models. We also propose a novel initialization-corrected mean-field limit that satisfies all of the properties noted above; its corresponding model is a simple modification of a finite-width model. Source code to reproduce all the reported results is available on GitHub.
    Comment: 25 pages, 7 figures. Submitted to the NeurIPS'2020 conference
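    As a rough illustration of what "scaling of hyperparameters with width" means here, the sketch below contrasts the standard NTK and mean-field output scalings of a two-layer network; the function and variable names are illustrative only and are not the paper's code or its exact parameterizations.

    import numpy as np

    def forward(x, W1, w2, scaling="ntk"):
        """Two-layer tanh network with width-dependent output scaling."""
        n = W1.shape[0]                 # hidden-layer width
        h = np.tanh(W1 @ x)             # hidden activations
        if scaling == "ntk":
            return w2 @ h / np.sqrt(n)  # NTK-style 1/sqrt(n) output scaling
        return w2 @ h / n               # mean-field-style 1/n output scaling

    rng = np.random.default_rng(0)
    n, d = 1024, 10
    W1 = rng.normal(size=(n, d))
    w2 = rng.normal(size=n)
    x = rng.normal(size=d)
    print(forward(x, W1, w2, "ntk"), forward(x, W1, w2, "mf"))

    Under the 1/sqrt(n) scaling the logits stay O(1) and the tangent kernel becomes approximately constant as n grows, while under the 1/n scaling the output is an average over neurons and the weight distribution evolves as in the mean-field picture; the paper's framework interpolates over such width scalings and classifies the resulting limits.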