In this work we introduce a new optimisation method called SAGA in the spirit
of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient
algorithms with fast linear convergence rates. SAGA improves on the theory
behind SAG and SVRG, with better theoretical convergence rates, and has support
for composite objectives where a proximal operator is used on the regulariser.
Unlike SDCA, SAGA supports non-strongly convex problems directly, and is
adaptive to any inherent strong convexity of the problem. We give experimental
results showing the effectiveness of our method.

Comment: Advances in Neural Information Processing Systems (NIPS), Nov 2014,
Montreal, Canada.
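The abstract describes the SAGA update only at a high level (an incremental gradient step combined with a proximal operator on the regulariser). Below is a minimal sketch of that style of update, instantiated on an L1-regularised least-squares problem; the problem choice, step size `gamma`, and all names are illustrative assumptions rather than the paper's own code or notation.

```python
import numpy as np

def saga_lasso(A, b, gamma, reg, n_iters=10000, seed=0):
    """Sketch of a SAGA-style update for
    (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + reg*||x||_1,
    with the L1 term handled by its proximal operator (soft-thresholding)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Table of the most recently evaluated per-sample gradients, plus their mean.
    grad_table = A * (A @ x - b)[:, None]        # shape (n, d)
    grad_mean = grad_table.mean(axis=0)

    for _ in range(n_iters):
        j = rng.integers(n)
        g_new = A[j] * (A[j] @ x - b[j])         # fresh gradient of f_j at x
        # Variance-reduced direction: fresh gradient minus the stale stored one,
        # plus the mean of the stored gradients.
        v = g_new - grad_table[j] + grad_mean
        x = x - gamma * v
        # Proximal (soft-thresholding) step for the L1 regulariser.
        x = np.sign(x) * np.maximum(np.abs(x) - gamma * reg, 0.0)
        # Refresh the table entry for sample j and its running mean.
        grad_mean += (g_new - grad_table[j]) / n
        grad_table[j] = g_new
    return x
```

A usage sketch: with `A` an (n, d) data matrix and `b` the targets, calling `saga_lasso(A, b, gamma=0.1 / np.linalg.norm(A, 2)**2, reg=0.01)` runs the iteration from a zero initial point; the storage of one gradient per sample is what distinguishes this family of methods from plain stochastic gradient descent.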