142 research outputs found

    Convergence Rate of Frank-Wolfe for Non-Convex Objectives

    We give a simple proof that the Frank-Wolfe algorithm obtains a stationary point at a rate of O(1/\sqrt{t}) on non-convex objectives with a Lipschitz continuous gradient. Our analysis is affine invariant and is, to the best of our knowledge, the first to give a rate matching the one already proven for projected gradient methods (though on slightly different measures of stationarity). Comment: 6 pages
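
    The stationarity measure behind this rate is the Frank-Wolfe gap, which the algorithm computes for free at every step. Below is a minimal sketch, not the paper's code: it runs Frank-Wolfe on a toy non-convex objective over the l1 ball and tracks the smallest gap seen. The oracle lmo_l1, the 1/sqrt(t+1) step-size rule, and the toy objective are all illustrative assumptions.

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    # Linear minimization oracle for the l1 ball:
    # argmin_{||s||_1 <= radius} <g, s> is a signed coordinate vertex.
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def frank_wolfe_nonconvex(grad, lmo, x0, steps=1000):
    # Tracks the Frank-Wolfe gap <grad f(x_t), x_t - s_t>, the
    # stationarity measure whose minimum decays like O(1/sqrt(t)).
    x = x0.copy()
    best_gap = np.inf
    for t in range(steps):
        g = grad(x)
        s = lmo(g)
        gap = float(g @ (x - s))        # zero iff x_t is stationary on the set
        best_gap = min(best_gap, gap)
        gamma = 1.0 / np.sqrt(t + 1)    # assumed step size, not the paper's rule
        x = (1 - gamma) * x + gamma * s
    return x, best_gap

# Toy instance: the concave (hence non-convex) f(x) = -||x||^2 / 2,
# whose gradient -x is 1-Lipschitz, over the unit l1 ball.
x0 = np.full(10, 0.05)
x, gap = frank_wolfe_nonconvex(lambda x: -x, lmo_l1, x0)
print(f"best Frank-Wolfe gap after 1000 steps: {gap:.2e}")
```

    Because the iterate is always a convex combination of feasible points, no projection is ever needed, which is what makes the analysis affine invariant.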

    Accelerating Stochastic Composition Optimization

    Consider the stochastic composition optimization problem, where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates based on queries to the sampling oracle using two different timescales. ASC-PG is the first proximal gradient method for the stochastic composition problem that can handle nonsmooth regularization penalties. We show that ASC-PG converges faster than the best known algorithms and achieves the optimal sample-error complexity in several important special cases. We further demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.
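
    To make the two-timescale idea concrete, here is a rough sketch of a simplified compositional proximal gradient update, not the exact ASC-PG: an auxiliary variable y tracks the inner expectation E_w[g_w(x)] with the faster step size, while x takes a proximal gradient step through the stochastic chain rule. The l1 penalty, the step-size schedules, and the linear-quadratic toy problem are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1, the nonsmooth penalty.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def two_timescale_prox_grad(sample_g, sample_jac_g, sample_grad_f,
                            x0, steps=10_000, lam=0.01):
    # Sketch for min_x E_v[ f_v( E_w[ g_w(x) ] ) ] + lam * ||x||_1:
    # y tracks the inner expectation on the fast timescale, and x takes
    # a proximal gradient step using the chain-rule gradient estimate.
    x = x0.copy()
    y = sample_g(x)                          # initialize the tracking variable
    for t in range(1, steps + 1):
        alpha = 1.0 / t ** 0.75              # slow timescale for x (assumed schedule)
        beta = 1.0 / t ** 0.5                # fast timescale for y (assumed schedule)
        y = (1.0 - beta) * y + beta * sample_g(x)
        grad = sample_jac_g(x).T @ sample_grad_f(y)        # stochastic chain rule
        x = soft_threshold(x - alpha * grad, alpha * lam)  # prox handles ||.||_1
    return x

# Toy problem: inner map g(x) = Ax and outer loss f(y) = ||y - b||^2 / 2,
# both observed with noise; lam * ||x||_1 is the nonsmooth penalty.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
b = rng.standard_normal(5)
x_hat = two_timescale_prox_grad(
    sample_g=lambda x: A @ x + 0.01 * rng.standard_normal(5),
    sample_jac_g=lambda x: A,
    sample_grad_f=lambda y: (y - b) + 0.01 * rng.standard_normal(5),
    x0=np.zeros(8),
)
```

    The auxiliary variable is what avoids the bias of plugging a single noisy sample of g(x) into the outer gradient; the proximal step is what lets the scheme accommodate the nonsmooth penalty at all.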