In this paper, we propose a general framework to significantly accelerate
algorithms for nonnegative matrix factorization (NMF). This framework is
inspired by the extrapolation scheme used to accelerate gradient methods in
convex optimization and by the method of parallel tangents. However, the use
of extrapolation within two-block exact coordinate descent algorithms for
the non-convex NMF problem is novel. We illustrate the
performance of this approach on two state-of-the-art NMF algorithms, namely,
accelerated hierarchical alternating least squares (A-HALS) and alternating
nonnegative least squares (ANLS), using synthetic, image and document data
sets.

Comment: 19 pages, 6 figures, 6 tables. v2: a few typos corrected; additional
comparison with the extrapolated projected gradient method of Xu and Yin
(SIAM J. on Imaging Sciences, 2013).
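The idea described above can be sketched in a few lines of NumPy. The snippet below is a minimal illustration, not the paper's exact algorithm: it wraps a standard HALS pass (one of the two-block exact coordinate descent updates mentioned in the abstract) with a fixed-coefficient extrapolation step and a simple restart safeguard; the function names, the fixed `beta`, and the restart rule are assumptions for this sketch, whereas the paper tunes the extrapolation parameter adaptively.

```python
import numpy as np

def hals_pass(X, W, H, eps=1e-16):
    """One pass of standard HALS: update each row of H, then each column of W.
    (A generic HALS pass, not necessarily the exact A-HALS variant of the paper.)"""
    r = W.shape[1]
    WtX, WtW = W.T @ X, W.T @ W
    for k in range(r):
        # closed-form nonnegative update of the k-th row of H
        H[k] = np.maximum(eps, H[k] + (WtX[k] - WtW[k] @ H) / WtW[k, k])
    XHt, HHt = X @ H.T, H @ H.T
    for k in range(r):
        # closed-form nonnegative update of the k-th column of W
        W[:, k] = np.maximum(eps, W[:, k] + (XHt[:, k] - W @ HHt[:, k]) / HHt[k, k])
    return W, H

def extrapolated_nmf(X, r, iters=200, beta=0.5, seed=0):
    """Two-block coordinate descent with extrapolation between passes.
    `beta` is held fixed here for simplicity; the paper adapts it and restarts
    more carefully when the residual increases."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    Wy, Hy = W.copy(), H.copy()          # extrapolated points fed to the update
    err_prev = np.inf
    for _ in range(iters):
        W_new, H_new = hals_pass(X, Wy.copy(), Hy.copy())
        err = np.linalg.norm(X - W_new @ H_new)
        if err > err_prev:
            # restart safeguard: drop the momentum and continue from the new iterate
            Wy, Hy = W_new.copy(), H_new.copy()
        else:
            # extrapolate: move past the new iterate along the last step,
            # projecting back onto the nonnegative orthant
            Wy = np.maximum(0.0, W_new + beta * (W_new - W))
            Hy = np.maximum(0.0, H_new + beta * (H_new - H))
        W, H, err_prev = W_new, H_new, err
    return W, H
```

For example, on a small synthetic rank-`r` matrix, `extrapolated_nmf(X, r)` returns nonnegative factors whose product approximates `X`; setting `beta=0` recovers the plain (non-extrapolated) two-block scheme, which makes the acceleration easy to compare empirically.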