Reducing Runtime by Recycling Samples
Contrary to the situation with stochastic gradient descent, we argue that
when using stochastic methods with variance reduction, such as SDCA, SAG or
SVRG, as well as their variants, it could be beneficial to reuse previously
used samples instead of fresh samples, even when fresh samples are available.
We demonstrate this empirically for SDCA, SAG and SVRG, studying the optimal
sample size one should use, and also uncover behavior that suggests running
SDCA for an integer number of epochs could be wasteful.
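To make the recycling idea concrete, here is a minimal sketch of SVRG run on a fixed, reused subsample rather than on fresh data. It is an illustration under assumed settings, not the paper's experimental setup; the problem (ridge-regularized least squares) and all constants (`n`, `m`, `step`, `lam`) are hypothetical choices.

```python
# Minimal SVRG sketch illustrating sample recycling: the inner loop keeps
# revisiting the same fixed subsample `idx` instead of drawing fresh data.
# Problem and constants are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)
lam = 1e-3  # ridge regularization strength

def grad_i(w, i):
    # gradient of the i-th regularized squared loss
    return (X[i] @ w - y[i]) * X[i] + lam * w

def full_grad(w, idx):
    # full gradient over the recycled sample set
    return np.mean([grad_i(w, i) for i in idx], axis=0)

def svrg(idx, epochs=20, step=0.05):
    # run SVRG restricted to `idx`, recycling it every epoch
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        mu = full_grad(w_snap, idx)  # snapshot gradient
        for i in rng.choice(idx, size=len(idx)):
            # variance-reduced stochastic step
            w -= step * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w

m = 200  # recycled sample size
w_recycled = svrg(rng.choice(n, size=m, replace=False))
print("distance to w_true:", np.linalg.norm(w_recycled - w_true))
```

Varying `m` in a sketch like this is one way to probe the paper's question of the optimal sample size to recycle.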
DynaNewton - Accelerating Newton's Method for Machine Learning
Newton's method is a fundamental technique in optimization with quadratic
convergence within a neighborhood around the optimum. However, reaching this
neighborhood is often slow and dominates the computational costs. We exploit
two properties specific to empirical risk minimization problems to accelerate
Newton's method, namely, subsampling training data and increasing strong
convexity through regularization. We propose a novel continuation method, where
we define a family of objectives over increasing sample sizes and with
decreasing regularization strength. Solutions on this path are tracked such
that the minimizer of the previous objective is guaranteed to be within the
quadratic convergence region of the next objective to be optimized. Every
Newton iteration is thereby guaranteed to achieve a super-linear contraction
with respect to the current objective, which becomes a moving target. We
provide a
theoretical analysis that motivates our algorithm, called DynaNewton, and
characterizes its speed of convergence. Experiments on a wide range of data
sets and problems consistently confirm the predicted computational savings.
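The continuation idea can be sketched in a few lines: solve a sequence of subsampled, regularized objectives, growing the sample and shrinking the regularizer, warm-starting each Newton solve from the previous minimizer. The sketch below uses regularized logistic regression with a doubling/halving schedule; the schedule, constants, and number of Newton steps per stage are illustrative assumptions, not the paper's exact choices.

```python
# Hedged sketch of a DynaNewton-style continuation: double the sample size
# and halve the regularizer at each stage, warm-starting Newton from the
# previous minimizer so it stays near the quadratic convergence region.
# All constants and the schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d = 4000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.where(X @ w_true + 0.5 * rng.standard_normal(n) > 0, 1.0, -1.0)

def newton_step(w, Xs, ys, lam):
    # one Newton iteration on the regularized logistic loss over (Xs, ys)
    z = ys * (Xs @ w)
    p = 1.0 / (1.0 + np.exp(z))               # sigmoid(-z)
    grad = -(Xs.T @ (ys * p)) / len(ys) + lam * w
    S = p * (1.0 - p)                          # per-sample Hessian weights
    H = (Xs.T * S) @ Xs / len(ys) + lam * np.eye(d)
    return w - np.linalg.solve(H, grad)

w = np.zeros(d)
m, lam = 250, 1.0
while m <= n:
    idx = rng.choice(n, size=m, replace=False)
    for _ in range(2):                         # a couple of steps per stage
        w = newton_step(w, X[idx], y[idx], lam)
    m, lam = 2 * m, lam / 2                    # grow data, shrink regularizer

# rough sanity check (w_true is not the exact minimizer, only a reference)
print("distance to w_true:", np.linalg.norm(w - w_true))
```

The design point the sketch tries to capture is that each stage's minimizer serves as the starting point for the next, so each objective in the family only needs a small, fixed number of Newton iterations.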