
    Training (Overparametrized) Neural Networks in Near-Linear Time

    The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort to develop faster second-order optimization algorithms beyond SGD, without compromising the generalization error. Despite their remarkable convergence rate (independent of the training batch size $n$), second-order algorithms incur a daunting slowdown in the cost per iteration (inverting the Hessian matrix of the loss function), which renders them impractical. Very recently, this computational overhead was mitigated by the works of [ZMG19, CGH+19], yielding an $O(mn^2)$-time second-order algorithm for training two-layer overparametrized neural networks of polynomial width $m$. We show how to speed up the algorithm of [CGH+19], achieving an $\tilde{O}(mn)$-time backpropagation algorithm for training (mildly overparametrized) ReLU networks, which is near-linear in the dimension ($mn$) of the full gradient (Jacobian) matrix. The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an $\ell_2$-regression problem, and then use a Fast-JL type dimension reduction to precondition the underlying Gram matrix in time independent of $M$, allowing one to find a sufficiently good approximate solution via first-order conjugate gradient. Our result provides a proof-of-concept that advanced machinery from randomized linear algebra -- which led to recent breakthroughs in convex optimization (ERM, LPs, regression) -- can be carried over to the realm of deep learning as well.
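    The pipeline sketched in the abstract (recast the Gauss-Newton step as an $\ell_2$-regression, sketch the problem to build a preconditioner, then solve with conjugate gradient) can be illustrated with a generic sketch-and-precondition least-squares solver. The code below is a minimal sketch under stated assumptions, not the authors' implementation: it substitutes a dense Gaussian sketch for the Fast-JL transform, runs preconditioned CG on the normal equations of an overdetermined system, and the function name, parameters, and defaults (`sketch_precondition_lsq`, `sketch_rows`, etc.) are hypothetical.

```python
import numpy as np

def sketch_precondition_lsq(J, r, sketch_rows=None, tol=1e-8, max_iter=100, seed=None):
    """Illustrative sketch-and-precondition solver for min_x ||J x - r||_2.

    Assumptions: J is tall (n >= d), a Gaussian sketch stands in for the
    Fast-JL transform, and preconditioned CG is run on the normal equations.
    """
    rng = np.random.default_rng(seed)
    n, d = J.shape
    s = sketch_rows or min(n, 4 * d)          # sketch size (assumed heuristic)

    # Sketch the tall matrix and factor it; R^{-1} preconditions J so that
    # J @ R^{-1} is well-conditioned with high probability.
    S = rng.standard_normal((s, n)) / np.sqrt(s)
    _, R = np.linalg.qr(S @ J, mode="reduced")

    # Preconditioned normal equations: (J R^{-1})^T (J R^{-1}) y = (J R^{-1})^T r.
    def apply_A(y):
        z = np.linalg.solve(R, y)             # z = R^{-1} y
        return np.linalg.solve(R.T, J.T @ (J @ z))

    b = np.linalg.solve(R.T, J.T @ r)
    y = np.zeros(d)
    res = b - apply_A(y)
    p = res.copy()
    rs_old = res @ res
    for _ in range(max_iter):                 # standard conjugate gradient loop
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        y += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs_old) * p
        rs_old = rs_new
    return np.linalg.solve(R, y)              # recover x = R^{-1} y

# Tiny usage check against a dense solver.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J = rng.standard_normal((2000, 50))
    r = rng.standard_normal(2000)
    x = sketch_precondition_lsq(J, r, seed=1)
    x_ref, *_ = np.linalg.lstsq(J, r, rcond=None)
    print(np.linalg.norm(x - x_ref))          # should be tiny
```

    In this toy setup, replacing the Gaussian sketch with a Fast-JL-type transform (e.g. a subsampled randomized Hadamard transform) would reduce the sketching cost, which is the role the abstract attributes to the dimension-reduction step; the version above only demonstrates why a sketched QR factor makes first-order CG converge quickly.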