
    From Proximal Point Method to Nesterov's Acceleration

    The proximal point method (PPM) is a fundamental method in optimization that is often used as a building block for fast optimization algorithms. In this work, building on recent work by Defazio (2019), we provide a complete understanding of Nesterov's accelerated gradient method (AGM) by establishing quantitative and analytical connections between PPM and AGM. The main observation of this paper is that AGM is in fact equal to a simple approximation of PPM, which yields an elementary derivation of AGM's seemingly mysterious updates as well as its step sizes. This connection also leads to a conceptually simple analysis of AGM based on the standard analysis of PPM. The view extends naturally to the strongly convex case and also motivates other accelerated methods for practically relevant settings.
    Comment: 14 pages; Section 4 updated; Remark 5 added; comments would be appreciated
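    To make the two methods in this abstract concrete, here is a minimal numerical sketch (my own illustration, not code from the paper): exact proximal point iterations and Nesterov's AGM on a simple diagonal 2-D quadratic f(x) = 0.5*(a1*x1^2 + a2*x2^2), where the prox step has a closed form and both methods drive f toward its minimum of 0.

    ```python
    # Illustrative sketch only; the diagonal quadratic and parameter choices
    # are assumptions for demonstration, not the paper's setting.
    A = [1.0, 10.0]                      # diagonal quadratic; L = max(A)

    def f(x):
        return 0.5 * sum(a * xi * xi for a, xi in zip(A, x))

    def ppm(x, lam=0.5, iters=100):
        # Exact prox of a diagonal quadratic:
        # argmin_z f(z) + ||z - x||^2 / (2*lam)  =>  z_i = x_i / (1 + lam*a_i)
        for _ in range(iters):
            x = [xi / (1.0 + lam * a) for a, xi in zip(A, x)]
        return x

    def agm(x, iters=100):
        # Nesterov's accelerated gradient method in its standard two-sequence form.
        L = max(A)
        y_prev = x
        for k in range(iters):
            y = [xi - a * xi / L for a, xi in zip(A, x)]          # gradient step
            x = [yi + (k / (k + 3.0)) * (yi - yp)                 # extrapolation
                 for yi, yp in zip(y, y_prev)]
            y_prev = y
        return y_prev

    x0 = [1.0, 1.0]
    print(f(ppm(x0)), f(agm(x0)))        # both values are close to 0
    ```

    The paper's point is that the AGM extrapolation step can be derived as an inexpensive approximation of the prox subproblem that `ppm` solves exactly.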

    Connections Between Adaptive Control and Optimization in Machine Learning

    This paper demonstrates many immediate connections between adaptive control and optimization methods commonly employed in machine learning. Starting from common output-error formulations, similarities in update-law modifications are examined. Concepts in stability, performance, and learning common to both fields are then discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis are identified. In particular, a specific problem related to higher-order learning is solved through insights obtained from these intersections.
    Comment: 18 pages
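    The update-law parallel this abstract describes can be seen in a toy example (my own hedged sketch, not the paper's formulation): a gradient-type adaptive update driven by the output error e = y_hat - y is, step for step, an SGD iteration on the squared-error loss 0.5*e**2. The scalar plant and input sequence below are hypothetical choices for illustration.

    ```python
    # Hypothetical scalar system: plant output y = theta_true * x,
    # adjustable model output y_hat = theta * x.
    def adapt(theta_true=2.0, gamma=0.5, cycles=20):
        theta = 0.0                          # initial parameter estimate
        inputs = [0.5, 1.0, 1.5]             # assumed input sequence
        for _ in range(cycles):
            for x in inputs:
                e = theta * x - theta_true * x   # output error e = y_hat - y
                theta -= gamma * e * x           # adaptive update law == SGD step
        return theta

    print(adapt())                           # estimate approaches theta_true = 2.0
    ```

    The same iteration read as control is a gradient adaptive law with gain `gamma`; read as optimization it is SGD with learning rate `gamma`, which is the kind of correspondence the paper builds on.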

    On the Stability and Convergence of Stochastic Gradient Descent with Momentum

    While momentum-based methods, in conjunction with stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods. In practice, the momentum parameter is often chosen heuristically with little theoretical guidance. In the first part of this paper, for the case of general loss functions, we analyze a modified momentum-based update rule, the method of early momentum, and develop an upper bound on the generalization error using the framework of algorithmic stability. Our results show that machine learning models can be trained for multiple epochs of this method while their generalization errors remain bounded. We also study the convergence of the method of early momentum by establishing an upper bound on the expected norm of the gradient. In the second part of the paper, we focus on the case of strongly convex loss functions and the classical heavy-ball momentum update rule. We use the framework of algorithmic stability to provide an upper bound on the generalization error of the stochastic gradient method with momentum. We also develop an upper bound on the expected true risk in terms of the number of training steps, the size of the training set, and the momentum parameter. Experimental evaluations verify the consistency between the numerical results and our theoretical bounds, and the effectiveness of the method of early momentum for the case of non-convex loss functions.
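    For reference, the classical heavy-ball update the abstract analyzes can be sketched as follows (my own illustration on a deterministic 1-D quadratic, not the paper's code; the `cutoff` schedule is one plausible reading of "early momentum", namely momentum only during the first steps, and is an assumption on my part).

    ```python
    # Heavy-ball momentum on f(w) = 0.5 * a * w**2, whose gradient is a * w.
    def heavy_ball(a=1.0, w0=5.0, lr=0.1, beta=0.9, steps=300, cutoff=None):
        """Run heavy-ball iterations; `cutoff` (hypothetical 'early momentum'
        schedule) disables the momentum term after that many steps."""
        w, v = w0, 0.0
        for k in range(steps):
            b = beta if (cutoff is None or k < cutoff) else 0.0
            v = b * v - lr * a * w       # momentum buffer update
            w = w + v                    # parameter update
        return w

    print(abs(heavy_ball()))             # close to the minimizer w* = 0
    print(abs(heavy_ball(cutoff=100)))   # momentum only early; also converges
    ```

    For this quadratic the iterates stay stable whenever 0 <= beta < 1 and lr*a < 2*(1 + beta), which is why both runs contract toward the minimizer.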