We consider the problem of optimizing the sum of a smooth convex function and
a non-smooth convex function using proximal-gradient methods, where an error is
present in the calculation of the gradient of the smooth term or in the
proximity operator with respect to the non-smooth term. We show that both the
basic proximal-gradient method and the accelerated proximal-gradient method
achieve the same convergence rate as in the error-free case, provided that the
errors decrease at appropriate rates. Using these rates, we perform as well as
or better than a carefully chosen fixed error level on a set of structured
sparsity problems.

Comment: Neural Information Processing Systems (2011).
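
As a concrete illustration only, the following is a minimal sketch (not the paper's code) of the basic proximal-gradient iteration with an inexact gradient, x_{k+1} = prox_{alpha h}(x_k - alpha(grad g(x_k) + e_k)), applied to a lasso-style problem. The problem instance, the Gaussian error model, the 1/k^2 decay schedule, and all function names are illustrative assumptions, not taken from the paper.

import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_proximal_gradient(A, b, lam, n_iter=500, error_scale=0.0, decay=2.0, seed=0):
    # Basic proximal-gradient on g(x) = 0.5*||Ax - b||^2 + lam*||x||_1,
    # with an artificial gradient error whose norm shrinks like 1/k**decay
    # (a stand-in for the "errors decrease at appropriate rates" condition).
    rng = np.random.default_rng(seed)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        grad = A.T @ (A @ x - b)
        e = error_scale / k ** decay * rng.standard_normal(x.shape)  # inexact gradient
        x = soft_threshold(x - alpha * (grad + e), alpha * lam)
    return x

if __name__ == "__main__":
    # Small synthetic example: sparse ground truth recovered despite gradient errors.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = inexact_proximal_gradient(A, b, lam=0.1, error_scale=1.0)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))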