Stochastic Variance Reduction Gradient for a Non-convex Problem Using Graduated Optimization
In machine learning, nonconvex optimization problems with multiple local
optima are often encountered. The Graduated Optimization Algorithm (GOA) is a
popular heuristic method for finding global optima of nonconvex problems
by progressively minimizing a series of increasingly accurate convex
approximations to them. Recently, GradOpt, an algorithm based on GOA, was
proposed with strong theoretical and experimental results, but it mainly
studies objectives consisting of a single nonconvex part. This paper
aims to find the global solution of a nonconvex objective composed of a convex
part plus a nonconvex part, based on GOA. By gradually approximating the
nonconvex part of the problem and minimizing the resulting surrogates with the
Stochastic Variance Reduced Gradient (SVRG) or proximal SVRG, two new
algorithms, SVRG-GOA and PSVRG-GOA, are proposed. We prove that the new
algorithms have lower iteration complexity than GradOpt. Several techniques,
such as enlarging the shrink factor, using a projection step, stochastic
gradients, and mini-batches, are also given to accelerate the convergence of
the proposed algorithms. Experimental results illustrate that the two new
algorithms, which perform similarly to each other, converge to the 'global'
optima of the nonconvex problems, and that they converge faster than GradOpt
and the nonconvex proximal SVRG.

Comment: 15 pages, 5 figures
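To make the idea concrete, below is a minimal sketch of graduated optimization with an SVRG inner solver on a hypothetical toy objective (a convex quadratic finite sum plus a sinusoidal nonconvex part). The objective, the Gaussian-smoothing formula, the step sizes, and the shrink schedule are illustrative assumptions, not the paper's exact SVRG-GOA.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy finite sum: convex quadratic part plus a nonconvex sinusoid,
#   f(x) = (1/n) * sum_i (x - a_i)^2 + lam * sin(b * x)
n, lam, b = 100, 2.0, 6.0
a = rng.normal(loc=3.0, scale=0.5, size=n)

def smoothed_grad_i(x, i, delta):
    # Gradient of the i-th component with only the nonconvex sinusoid smoothed:
    # for u ~ N(0, 1), E[sin(b * (x + delta * u))] = sin(b * x) * exp(-(b * delta)**2 / 2),
    # so smoothing damps the oscillatory term, and the damping vanishes as delta -> 0.
    damp = np.exp(-0.5 * (b * delta) ** 2)
    return 2.0 * (x - a[i]) + lam * b * np.cos(b * x) * damp

def svrg(x, delta, epochs=5, m=2 * n, eta=0.003):
    # Standard SVRG epochs on the delta-smoothed surrogate.
    for _ in range(epochs):
        snapshot = x
        full_grad = np.mean([smoothed_grad_i(snapshot, i, delta) for i in range(n)])
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimate.
            g = smoothed_grad_i(x, i, delta) - smoothed_grad_i(snapshot, i, delta) + full_grad
            x -= eta * g
    return x

# Graduated optimization: shrink the smoothing radius stage by stage,
# warm-starting each stage at the previous stage's solution.
x, delta, shrink = 0.0, 2.0, 0.5
for stage in range(10):
    x = svrg(x, delta)
    delta *= shrink
print(f"approximate global minimizer: x = {x:.4f}")

At a large smoothing radius the damped surrogate is essentially the convex quadratic, so the early stages land near its minimizer; as delta shrinks, the sinusoid reappears and the warm starts keep the iterates in the basin of the global optimum. The accelerations the abstract mentions, such as a larger shrink factor or a projection step onto a feasible set, would slot into the stage loop above.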