Convergence Rate of Frank-Wolfe for Non-Convex Objectives
We give a simple proof that the Frank-Wolfe algorithm obtains a stationary
point at a rate of $O(1/\sqrt{t})$ on non-convex objectives with a Lipschitz
continuous gradient. Our analysis is affine invariant and is the first, to the
best of our knowledge, giving a similar rate to what was already proven for
projected gradient methods (though on slightly different measures of
stationarity).
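To make the method concrete, here is a minimal sketch of Frank-Wolfe that tracks the Frank-Wolfe gap as its stationarity measure. The names grad_f and lmo are placeholders for a gradient oracle and a linear minimization oracle over the feasible set, and the 1/sqrt(t+1) step size is a common diminishing schedule rather than necessarily the exact one analyzed in the paper.

```python
import numpy as np

def frank_wolfe_nonconvex(grad_f, lmo, x0, num_steps=1000):
    """Frank-Wolfe over a compact convex set, tracking the FW gap.

    grad_f: x -> gradient of f at x (assumed Lipschitz continuous).
    lmo:    g -> argmin over the feasible set of <g, s>
            (linear minimization oracle; placeholder name).
    """
    x = np.asarray(x0, dtype=float).copy()
    min_gap = np.inf
    for t in range(num_steps):
        g = grad_f(x)
        s = lmo(g)                      # vertex minimizing the linearized objective
        gap = float(g @ (x - s))        # Frank-Wolfe gap: zero iff x is stationary
        min_gap = min(min_gap, gap)     # the quantity the O(1/sqrt(t)) rate bounds
        gamma = 1.0 / np.sqrt(t + 1.0)  # illustrative diminishing step size
        x = x + gamma * (s - x)         # convex combination, so x stays feasible
    return x, min_gap
```

The Frank-Wolfe gap max_s <grad f(x), x - s> is affine invariant and vanishes exactly at stationary points of the constrained problem, which is why it serves as the "slightly different measure of stationarity" compared to the gradient-mapping criterion used for projected gradient methods.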
Accelerating Stochastic Composition Optimization
Consider the stochastic composition optimization problem where the objective
is a composition of two expected-value functions. We propose a new stochastic
first-order method, namely the accelerated stochastic compositional proximal
gradient (ASC-PG) method, which updates based on queries to the sampling oracle
using two different timescales. The ASC-PG is the first proximal gradient
method for the stochastic composition problem that can deal with nonsmooth
regularization penalties. We show that the ASC-PG exhibits faster convergence
than the best known algorithms, and that it achieves the optimal sample-error
complexity in several important special cases. We further demonstrate the
application of ASC-PG to reinforcement learning and conduct numerical
experiments.
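As a rough illustration of the two-timescale idea, here is an un-accelerated sketch for min_x E[f(g(x))] + h(x): an auxiliary variable y tracks the inner expectation g(x) on a faster timescale, the chain rule is applied through noisy oracles, and the nonsmooth penalty h is handled by a proximal step. The oracle names (sample_g, sample_jac_g, sample_grad_f, prox_h) and the step-size schedules are illustrative assumptions; the actual ASC-PG additionally uses the extrapolation (acceleration) step and step-size schedules derived in the paper, which this sketch omits.

```python
import numpy as np

def compositional_prox_grad(sample_g, sample_jac_g, sample_grad_f, prox_h,
                            x0, num_steps=1000, alpha0=0.1, beta0=0.5):
    """Two-timescale sketch for min_x E[f(g(x))] + h(x) with noisy oracles.

    sample_g:      x -> noisy estimate of the inner value g(x)
    sample_jac_g:  x -> noisy estimate of the Jacobian of g at x
    sample_grad_f: y -> noisy estimate of the gradient of f at y
    prox_h:        (z, alpha) -> prox of alpha*h at z, handles the nonsmooth term
    (All four are placeholder names, not the paper's notation.)
    """
    x = np.asarray(x0, dtype=float).copy()
    y = sample_g(x)                              # running estimate of g(x)
    for t in range(1, num_steps + 1):
        beta = beta0 / np.sqrt(t)                # fast timescale: inner tracking
        alpha = alpha0 / t                       # slow timescale: outer iterate
        y = (1.0 - beta) * y + beta * sample_g(x)
        grad = sample_jac_g(x).T @ sample_grad_f(y)  # chain-rule gradient estimate
        x = prox_h(x - alpha * grad, alpha)          # proximal gradient step
    return x
```

Querying the two oracles on different timescales addresses the bias from the nested expectation: plugging a single sample of g(x) directly into the gradient of f would give a biased gradient estimate, whereas the slowly updated tracker y averages that inner noise out.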
- …