We consider ``one-at-a-time'' coordinate-wise descent algorithms for a class
of convex optimization problems. An algorithm of this kind has been proposed
for $L_1$-penalized regression (the lasso) in the literature, but it appears
to have been largely ignored; indeed, coordinate-wise algorithms are seldom
used in convex optimization more generally. We show that this algorithm is very
competitive with the well-known LARS (or homotopy) procedure in large lasso
problems, and that it can be applied to related methods such as the garotte and
elastic net. It turns out that coordinate-wise descent does not work in the
``fused lasso,'' however, so we derive a generalized algorithm that yields the
solution in much less time than a standard convex optimizer. Finally, we
generalize the procedure to the two-dimensional fused lasso, and demonstrate
its performance on some image smoothing problems.

Comment: Published at http://dx.doi.org/10.1214/07-AOAS131 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
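
As a concrete illustration of the ``one-at-a-time'' update described above, here is a minimal sketch (in Python/NumPy; not the authors' code) of coordinate-wise descent for the lasso objective (1/(2n))||y - X beta||^2 + lam*||beta||_1, where each one-dimensional subproblem is solved exactly by soft-thresholding. It assumes X has no all-zero columns; the function names are illustrative only.

    import numpy as np

    def soft_threshold(z, gamma):
        # S(z, gamma) = sign(z) * max(|z| - gamma, 0)
        return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

    def lasso_cd(X, y, lam, n_sweeps=100, tol=1e-8):
        # Minimize (1/(2n)) * ||y - X @ beta||^2 + lam * ||beta||_1
        # by cycling through coordinates, solving each exactly.
        n, p = X.shape
        beta = np.zeros(p)
        resid = y.astype(float).copy()      # residual y - X @ beta (beta = 0)
        col_sq = (X ** 2).sum(axis=0) / n   # (1/n) * x_j' x_j for each j
        for _ in range(n_sweeps):
            max_delta = 0.0
            for j in range(p):
                # rho = (1/n) * x_j' (y - sum_{k != j} x_k beta_k)
                rho = X[:, j] @ resid / n + beta[j] * col_sq[j]
                new_bj = soft_threshold(rho, lam) / col_sq[j]
                if new_bj != beta[j]:
                    resid += X[:, j] * (beta[j] - new_bj)  # keep residual current
                    max_delta = max(max_delta, abs(new_bj - beta[j]))
                    beta[j] = new_bj
            if max_delta < tol:   # converged: no coordinate moved appreciably
                break
        return beta

    # Example on a small synthetic problem
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 10))
    y = 3.0 * X[:, 0] + rng.standard_normal(50)
    beta_hat = lasso_cd(X, y, lam=0.1)

Maintaining the residual in place keeps each coordinate update O(n); this cheap update is one reason coordinate-wise methods scale well on large lasso problems.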