It has recently been shown that the step sizes of a family of variance-reduced gradient methods, called the JacSketch methods, depend on an expected smoothness constant. In particular, if this expected smoothness constant could be calculated a priori, one could safely set much larger step sizes, resulting in much faster convergence. We fill this gap by providing simple closed-form expressions for the expected smoothness constant, together with careful numerical experiments verifying these bounds. Using these bounds, and
since the SAGA algorithm is part of the JacSketch family, we propose a new standard practice for setting the step size and mini-batch size of SAGA that is competitive with a numerical grid search. Furthermore, we can now show that
the total complexity of the SAGA algorithm decreases linearly in the mini-batch
size up to a pre-defined value: the optimal mini-batch size. This is a rare result in the stochastic variance-reduced literature, previously shown only for the Katyusha algorithm. Finally, we conjecture that this is the case for many other stochastic variance-reduced methods and that our bounds and analysis of
the expected smoothness constant are key to extending these results.