14 research outputs found
Regularized Newton Method with Global $\mathcal{O}(1/k^2)$ Convergence
We present a Newton-type method that converges fast from any initialization
and for arbitrary convex objectives with Lipschitz Hessians. We achieve this by
merging the ideas of cubic regularization with a certain adaptive
Levenberg--Marquardt penalty. In particular, we show that the iterates given by
$x^{k+1} = x^k - \left(\nabla^2 f(x^k) + \sqrt{H\|\nabla f(x^k)\|}\,\mathbf{I}\right)^{-1}\nabla f(x^k)$,
where $H>0$ is a constant, converge globally with an $\mathcal{O}(1/k^2)$ rate. Our method is the first
variant of Newton's method that has both cheap iterations and provably fast
global convergence. Moreover, we prove that locally our method converges
superlinearly when the objective is strongly convex. To boost the method's
performance, we present a line search procedure that does not need
hyperparameters and is provably efficient.
Comment: 21 pages, 2 figures
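As a minimal sketch of the update rule quoted above (not the paper's code): the callables grad and hess and the constant H below are user-supplied placeholders; in the paper, H is tied to the Lipschitz constant of the Hessian.

import numpy as np

def regularized_newton_step(x, grad, hess, H):
    # One step of the gradient-regularized Newton update described above:
    # x_next = x - (hess(x) + sqrt(H * ||grad(x)||) * I)^{-1} grad(x).
    g = grad(x)
    # Adaptive Levenberg--Marquardt penalty driven by the gradient norm.
    lam = np.sqrt(H * np.linalg.norm(g))
    # Solve the regularized Newton system instead of forming an explicit inverse.
    d = np.linalg.solve(hess(x) + lam * np.eye(x.size), g)
    return x - d

# Toy usage on a convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x = np.zeros(2)
for _ in range(20):
    x = regularized_newton_step(x, grad=lambda v: A @ v - b, hess=lambda v: A, H=1.0)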
Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization
Stochastic variance-reduced gradient (SVRG) algorithms have been shown to
work favorably in solving large-scale learning problems. Despite the remarkable
success, the stochastic gradient complexity of SVRG-type algorithms usually
scales linearly with data size and thus could still be expensive for huge data.
To address this deficiency, we propose a hybrid stochastic-deterministic
minibatch proximal gradient (HSDMPG) algorithm for strongly-convex problems
that enjoys provably improved data-size-independent complexity guarantees. More
precisely, for a quadratic loss $F(\theta)$ of $n$ components, we prove that
HSDMPG can attain an $\epsilon$-optimization error
$\mathbb{E}[F(\theta)-F(\theta^*)]\leq\epsilon$ within
$\mathcal{O}\big(\frac{\kappa^{1.5}\epsilon^{0.75}\log^{1.5}(1/\epsilon)+1}{\epsilon}\wedge\big(\kappa\sqrt{n}\log^{1.5}(1/\epsilon)+n\log(1/\epsilon)\big)\big)$
stochastic gradient evaluations, where $\kappa$ is the condition number. For
generic strongly convex loss functions, we prove a nearly identical complexity
bound though at the cost of slightly increased logarithmic factors. For
large-scale learning problems, our complexity bounds are superior to those of
the prior state-of-the-art SVRG algorithms with or without dependence on data
size. Particularly, in the case of $\epsilon=\mathcal{O}(1/\sqrt{n})$, which is at the order of
the intrinsic excess error bound of a learning model and thus sufficient for
generalization, the stochastic gradient complexity bounds of HSDMPG for
quadratic and generic loss functions are $\mathcal{O}(n^{0.875}\log^{1.5}(n))$ and
$\mathcal{O}(n^{0.875}\log^{2.25}(n))$ respectively, which, to the best of our knowledge, are
the first to achieve optimal generalization in less than a single pass over the data. Extensive
numerical results demonstrate the computational advantages of our algorithm
over the prior ones.
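The abstract does not spell out the HSDMPG update itself, so the sketch below instead illustrates the SVRG-style minibatch proximal gradient baseline it builds on: a deterministic full gradient computed at a snapshot point, combined with stochastic minibatch corrections and a proximal step. The least-squares loss, the l2 prox, and all parameter values here are illustrative assumptions, not the authors' method.

import numpy as np

def prox_svrg(X, y, theta0, prox, lr=0.1, n_epochs=5, batch_size=32, seed=0):
    # SVRG-style minibatch proximal gradient for
    # min_theta 0.5/n * ||X theta - y||^2 + r(theta), with r handled by `prox`.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    theta = theta0.copy()
    for _ in range(n_epochs):
        # Deterministic part: full gradient at the current snapshot.
        snapshot = theta.copy()
        full_grad = X.T @ (X @ snapshot - y) / n
        for _ in range(n // batch_size):
            idx = rng.choice(n, size=batch_size, replace=False)
            Xb, yb = X[idx], y[idx]
            # Stochastic part: minibatch gradients at theta and at the snapshot.
            g = Xb.T @ (Xb @ theta - yb) / batch_size
            g_snap = Xb.T @ (Xb @ snapshot - yb) / batch_size
            # Variance-reduced direction, then a proximal step.
            theta = prox(theta - lr * (g - g_snap + full_grad), lr)
    return theta

# Toy usage: ridge-regularized least squares, prox of r(theta) = 0.5 * mu * ||theta||^2.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200)
mu = 0.1
theta = prox_svrg(X, y, np.zeros(5), prox=lambda v, step: v / (1.0 + step * mu))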