
    Catalyst Acceleration for Gradient-Based Non-Convex Optimization

    We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions. Even though these methods may originally require convexity to operate, the proposed approach allows one to use them on weakly convex objectives, which covers a large class of non-convex functions typically appearing in machine learning and signal processing. In general, the scheme is guaranteed to produce a stationary point with a worst-case efficiency typical of first-order methods, and when the objective turns out to be convex, it automatically accelerates in the sense of Nesterov and achieves a near-optimal convergence rate in function values. These properties are achieved without assuming any knowledge about the convexity of the objective, by automatically adapting to the unknown weak convexity constant. We conclude the paper by showing promising experimental results obtained by applying our approach to incremental algorithms such as SVRG and SAGA for sparse matrix factorization and for learning neural networks.
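    The abstract describes the Catalyst idea of wrapping a convex solver inside an outer proximal-point loop with Nesterov-style extrapolation. Below is a minimal, hypothetical sketch of such an outer loop, assuming a user-supplied gradient oracle `grad_f` and inner solver `inner_solver`; the parameter `kappa` and the extrapolation rule are illustrative choices, not the paper's exact algorithm.

```python
# Hypothetical sketch of a Catalyst-style outer loop (not the paper's exact
# algorithm): each iteration adds a quadratic proximal term so that an
# off-the-shelf convex solver can handle the subproblem, then extrapolates.
def catalyst_outer_loop(grad_f, inner_solver, x0, kappa=1.0, n_outer=100):
    """grad_f(x): gradient oracle of the original objective f.
    inner_solver(grad_h, x_init): approximately minimizes the convex surrogate
    whose gradient oracle is grad_h, starting from x_init."""
    x_prev = x0.copy()
    y = x0.copy()                      # prox center / extrapolated point
    for k in range(1, n_outer + 1):
        # Subproblem: h(x) = f(x) + (kappa / 2) * ||x - y||^2, convex whenever
        # kappa dominates the (unknown) weak-convexity constant of f.
        grad_h = lambda x, y=y: grad_f(x) + kappa * (x - y)
        x = inner_solver(grad_h, x_prev)
        # Nesterov-style extrapolation of the prox center.
        beta = (k - 1.0) / (k + 2.0)
        y = x + beta * (x - x_prev)
        x_prev = x
    return x_prev
```

    In the paper the inner solver can itself be an incremental method such as SVRG or SAGA, and the scheme adapts to the unknown weak convexity constant; the fixed `kappa` above is only for illustration.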

    Accelerating Stochastic Composition Optimization

    Consider the stochastic composition optimization problem where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, namely the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates based on queries to the sampling oracle using two different timescales. ASC-PG is the first proximal gradient method for the stochastic composition problem that can deal with a nonsmooth regularization penalty. We show that ASC-PG exhibits faster convergence than the best known algorithms, and that it achieves the optimal sample-error complexity in several important special cases. We further demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.
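    The abstract refers to a two-timescale scheme: a fast-moving estimate of the inner expectation and a slower proximal gradient step on the composition. The sketch below is a hypothetical illustration of that structure, assuming sampling oracles `sample_g`, `sample_jac_g`, `sample_grad_f` and a proximal operator `prox_R`; the step-size schedules are placeholders rather than the rates analyzed in the paper.

```python
# Hypothetical two-timescale sketch in the spirit of ASC-PG for
#     min_x  E_v[ f_v( E_w[ g_w(x) ] ) ] + R(x).
# The oracles and step-size schedules are placeholders, not the paper's.
def asc_pg_sketch(sample_g, sample_jac_g, sample_grad_f, prox_R, x0,
                  n_iters=1000):
    x = x0.copy()
    y = sample_g(x)                     # running estimate of the inner mean g(x)
    for k in range(1, n_iters + 1):
        alpha = 1.0 / k ** 0.75         # slower timescale: step on x
        beta = 1.0 / k ** 0.5           # faster timescale: tracking of g(x)
        # Refresh the estimate of the inner expectation with a new sample.
        y = (1.0 - beta) * y + beta * sample_g(x)
        # Chain-rule gradient estimate: J_g(x)^T * grad f(y).
        grad_est = sample_jac_g(x).T @ sample_grad_f(y)
        # The proximal step handles the (possibly nonsmooth) regularizer R.
        x = prox_R(x - alpha * grad_est, alpha)
    return x
```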

    Smoothing Accelerated Proximal Gradient Method with Fast Convergence Rate for Nonsmooth Multi-objective Optimization

    This paper introduces a novel approach to nonsmooth multiobjective optimization through the proposal of a Smoothing Accelerated Proximal Gradient Method with Extrapolation Term (SAPGM). Building on smoothing methods and the accelerated algorithm for multiobjective optimization by Tanabe et al., our method exhibits a refined convergence rate. Specifically, we establish that the convergence rate of the proposed method can be improved from $O(1/k^2)$ to $o(1/k^2)$ by incorporating a distinct extrapolation term $\frac{k-1}{k+\alpha-1}$ with $\alpha > 3$. Moreover, we prove that the sequence of iterates converges to an optimal solution of the problem. Furthermore, we present an effective strategy for solving the subproblem through its dual representation, and we validate the efficacy of the proposed method through a series of numerical experiments.
    Comment: arXiv admin note: substantial text overlap with arXiv:2202.10994 by other authors; text overlap with arXiv:2110.01454 by other authors without attribution.
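    To make the role of the extrapolation term concrete, here is a hypothetical single-objective analogue of an accelerated proximal gradient step using the weight $(k-1)/(k+\alpha-1)$ with $\alpha > 3$; the multiobjective subproblem (solved via its dual representation in the paper) is replaced by an ordinary proximal step, and `grad_smooth` and `prox_g` are assumed oracles, not the paper's interface.

```python
# Hypothetical single-objective analogue of an accelerated proximal gradient
# method with the extrapolation weight (k - 1) / (k + alpha - 1), alpha > 3,
# which is the ingredient the abstract credits for the o(1/k^2) rate.
def accelerated_prox_grad(grad_smooth, prox_g, x0, step, alpha=4.0,
                          n_iters=500):
    """grad_smooth(y): gradient of the smooth(ed) part.
    prox_g(z, step): proximal operator of the nonsmooth part with parameter step."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, n_iters + 1):
        w = (k - 1.0) / (k + alpha - 1.0)   # extrapolation term from the abstract
        y = x + w * (x - x_prev)            # extrapolated (momentum) point
        x_prev = x
        x = prox_g(y - step * grad_smooth(y), step)
    return x
```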