Hybrid Deterministic-Stochastic Methods for Data Fitting
Many structured data-fitting applications require the solution of an
optimization problem involving a sum over a potentially large number of
measurements. Incremental gradient algorithms offer inexpensive iterations by
sampling a subset of the terms in the sum. These methods can make great
progress initially, but often slow as they approach a solution. In contrast,
full-gradient methods achieve steady convergence at the expense of evaluating
the full objective and gradient on each iteration. We explore hybrid methods
that exhibit the benefits of both approaches. Rate-of-convergence analysis
shows that by controlling the sample size in an incremental gradient algorithm,
it is possible to maintain the steady convergence rates of full-gradient
methods. We detail a practical quasi-Newton implementation based on this
approach. Numerical experiments illustrate its potential benefits.
Comment: 26 pages. Revised proofs of Theorems 2.6 and 3.1, results unchanged.
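The core idea of the abstract, trading stochastic sampling for deterministic accuracy by growing the sample size, can be sketched in a few lines. The following is a minimal illustration on a least-squares objective, not the paper's quasi-Newton implementation; the objective, step size, and geometric growth factor are assumptions for the sake of the example.

```python
import numpy as np

def hybrid_gradient_descent(A, b, x0, step=0.1, growth=1.1, iters=200, seed=0):
    """Gradient descent on f(x) = (1/2m)||Ax - b||^2 with a growing sample size.

    Early iterations sample only a few rows (cheap, incremental-gradient-like);
    the sample grows geometrically until full-gradient steps take over,
    recovering the steady convergence of a full-gradient method.
    """
    m, _ = A.shape
    rng = np.random.default_rng(seed)
    x = x0.copy()
    batch = 1
    for _ in range(iters):
        batch = min(m, int(np.ceil(batch * growth)))        # controlled sample size
        idx = rng.choice(m, size=batch, replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch        # subsampled gradient
        x = x - step * g
    return x
```

With `growth = 1.1` the sample reaches the full data set after roughly `log(m)/log(1.1)` iterations, so the tail of the run behaves exactly like full gradient descent.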
Semistochastic Quadratic Bound Methods
Partition functions arise in a variety of settings, including conditional
random fields, logistic regression, and latent Gaussian models. In this paper,
we consider semistochastic quadratic bound (SQB) methods for maximum likelihood
inference based on partition function optimization. Batch methods based on the
quadratic bound were recently proposed for this class of problems, and
performed favorably in comparison to state-of-the-art techniques.
Semistochastic methods fall in between batch algorithms, which use all the
data, and stochastic gradient type methods, which use small random selections
at each iteration. We build semistochastic quadratic bound-based methods, and
prove both global convergence (to a stationary point) under very weak
assumptions, and linear convergence rate under stronger assumptions on the
objective. To make the proposed methods faster and more stable, we consider
inexact subproblem minimization and batch-size selection schemes. The efficacy
of SQB methods is demonstrated via comparison with several state-of-the-art
techniques on commonly used datasets.
Comment: 11 pages, 1 figure.
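The semistochastic flavor described above can be illustrated on logistic regression, where a fixed quadratic upper bound on the curvature (Böhning's bound, B = XᵀX/4n) is paired with a gradient estimated from a growing minibatch. This is an illustrative sketch under those assumptions, not the SQB algorithm of the paper.

```python
import numpy as np

def semistochastic_qb_logistic(X, y, w0, iters=30, growth=2.0, seed=0):
    """Quadratic-bound steps for logistic loss with a semistochastic gradient.

    Curvature uses Bohning's fixed bound B = X^T X / (4n), which upper-bounds
    the logistic Hessian; the gradient is averaged over a minibatch that grows
    from a fraction of the data up to the full batch. Labels y are in {-1, +1}.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    B = X.T @ X / (4.0 * n) + 1e-6 * np.eye(d)     # fixed bound curvature
    w = w0.copy()
    batch = max(1, n // 4)
    for _ in range(iters):
        batch = min(n, int(np.ceil(batch * growth)))
        idx = rng.choice(n, size=batch, replace=False)
        margins = y[idx] * (X[idx] @ w)
        p = 1.0 / (1.0 + np.exp(margins))           # sigmoid(-margin)
        g = -(X[idx] * (y[idx] * p)[:, None]).mean(axis=0)
        w = w - np.linalg.solve(B, g)               # minimize the quadratic bound
    return w
```

Once the minibatch reaches the full data set, each step minimizes a valid quadratic upper bound on the loss, so the objective decreases monotonically from that point on.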
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
We consider the problem of minimizing the sum of two convex functions: one is
the average of a large number of smooth component functions, and the other is a
general convex function that admits a simple proximal mapping. We assume the
whole objective function is strongly convex. Such problems often arise in
machine learning, known as regularized empirical risk minimization. We propose
and analyze a new proximal stochastic gradient method, which uses a multi-stage
scheme to progressively reduce the variance of the stochastic gradient. While
each iteration of this algorithm has a cost similar to that of the classical
stochastic gradient method (or incremental gradient method), we show that the expected
objective value converges to the optimum at a geometric rate. The overall
complexity of this method is much lower than both the proximal full gradient
method and the standard proximal stochastic gradient method.
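The multi-stage variance-reduction scheme described above can be sketched concretely. The instance below assumes an ℓ1-regularized least-squares problem (so the proximal mapping is soft-thresholding); problem choice, step size, and stage lengths are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_svrg(A, b, lam, eta, stages, inner, x0, seed=0):
    """Multi-stage proximal stochastic gradient with variance reduction,
    sketched for (1/2m)||Ax - b||^2 + lam * ||x||_1."""
    m, _ = A.shape
    rng = np.random.default_rng(seed)
    x_tilde = x0.copy()
    for _ in range(stages):
        full_grad = A.T @ (A @ x_tilde - b) / m     # full gradient at snapshot
        x = x_tilde.copy()
        iterates = []
        for _ in range(inner):
            i = rng.integers(m)
            # variance-reduced direction: component gradient at x, corrected by
            # its value at the snapshot plus the snapshot's full gradient
            v = (A[i] * (A[i] @ x - b[i])
                 - A[i] * (A[i] @ x_tilde - b[i])
                 + full_grad)
            x = soft_threshold(x - eta * v, eta * lam)
            iterates.append(x)
        x_tilde = np.mean(iterates, axis=0)         # stage output (averaging variant)
    return x_tilde
```

Because the correction term vanishes as x approaches the snapshot, the variance of the direction v shrinks across stages, which is what permits a constant step size and the geometric convergence rate claimed in the abstract.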