Many structured data-fitting applications require the solution of an
optimization problem involving a sum over a potentially large number of
measurements. Incremental gradient algorithms offer inexpensive iterations by
sampling a subset of the terms in the sum. These methods can make great
progress initially, but often slow as they approach a solution. In contrast,
full-gradient methods achieve steady convergence at the expense of evaluating
the full objective and gradient on each iteration. We explore hybrid methods
that exhibit the benefits of both approaches. Rate-of-convergence analysis
shows that by controlling the sample size in an incremental gradient algorithm,
it is possible to maintain the steady convergence rates of full-gradient
methods. We detail a practical quasi-Newton implementation based on this
approach. Numerical experiments illustrate its potential benefits.
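
To make the controlled-sample-size idea concrete, here is a minimal sketch, not the paper's algorithm: a plain gradient method for a least-squares objective in which the number of sampled terms grows geometrically, so early iterations are cheap like incremental gradient steps while later iterations approach the full gradient. The function name, step size, and growth schedule are all illustrative assumptions.

```python
import numpy as np

def growing_sample_gradient(A, b, x0, step=1e-3, batch0=10,
                            growth=1.1, max_iter=200, seed=0):
    """Minimize 0.5*||Ax - b||^2 with gradients estimated from a
    row sample whose size grows geometrically each iteration
    (illustrative sketch; schedule and step size are assumptions)."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    x = x0.copy()
    batch = float(batch0)
    for _ in range(max_iter):
        k = min(m, int(round(batch)))
        idx = rng.choice(m, size=k, replace=False)
        # Scale the sampled sum by m/k so it is an unbiased
        # estimate of the full gradient A^T (Ax - b).
        g = (m / k) * A[idx].T @ (A[idx] @ x - b[idx])
        x -= step * g
        batch *= growth  # enlarge the sample toward the full sum
    return x
```

Under this schedule the per-iteration cost starts near that of an incremental method and approaches that of a full-gradient method, which is the trade-off the abstract describes; the paper's analysis concerns how fast the sample must grow to retain full-gradient convergence rates.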