Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection, but numerous extensions have now emerged, such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present, from a
general perspective, optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-$\ell_2$ penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
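To make the surveyed toolbox concrete, the following is a minimal sketch of one of the methods covered, the proximal-gradient (ISTA) iteration for $\ell_1$-regularized least squares; the function names, the step size $1/L$, and the fixed iteration budget are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The soft-thresholding step is the proximal operator of the $\ell_1$ norm, which is what makes the non-smooth penalty cheap to handle at each iteration.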
Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
Block coordinate descent (BCD) methods are widely used for large-scale
numerical optimization because of their cheap iteration costs, low memory
requirements, amenability to parallelization, and ability to exploit problem
structure. Three main algorithmic choices influence the performance of BCD
methods: the block partitioning strategy, the block selection rule, and the
block update rule. In this paper we explore all three of these building blocks
and propose variations for each that can lead to significantly faster BCD
methods. We (i) propose new greedy block-selection strategies that guarantee
more progress per iteration than the Gauss-Southwell rule; (ii) explore
practical issues like how to implement the new rules when using "variable"
blocks; (iii) explore the use of message-passing to compute matrix or Newton
updates efficiently on huge blocks for problems with a sparse dependency
between variables; and (iv) consider optimal active manifold identification,
which leads to bounds on the "active-set complexity" of BCD methods and to
superlinear convergence for certain problems with sparse solutions (and in
some cases finite termination at an optimal solution). We support all of our
findings with numerical results for the classic machine learning problems of
least squares, logistic regression, multi-class logistic regression, label
propagation, and L1-regularization.
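As a point of reference for the greedy selection rules discussed above, here is a minimal single-coordinate sketch of the classical Gauss-Southwell rule on a least-squares objective; the names and the fixed iteration count are illustrative assumptions, and the paper's contributions concern block variants and cheaper rules that guarantee at least as much progress per iteration.

```python
import numpy as np

def gauss_southwell_cd(A, b, n_iter=200):
    # Greedy coordinate descent for min_x 0.5*||Ax - b||^2.
    # Gauss-Southwell rule: update the coordinate whose gradient entry
    # has the largest magnitude. Assumes A has no zero columns.
    x = np.zeros(A.shape[1])
    col_norms = (A ** 2).sum(axis=0)     # coordinate-wise Lipschitz constants
    grad = A.T @ (A @ x - b)             # full gradient, kept up to date
    for _ in range(n_iter):
        j = np.argmax(np.abs(grad))      # Gauss-Southwell selection
        delta = -grad[j] / col_norms[j]  # exact minimization along e_j
        x[j] += delta
        grad += delta * (A.T @ A[:, j])  # rank-one gradient update
    return x
```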
A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
We consider solving the $\ell_1$-regularized least-squares ($\ell_1$-LS)
problem in the context of sparse recovery, for applications such as compressed
sensing. The standard proximal gradient method, also known as iterative
soft-thresholding when applied to this problem, has low computational cost per
iteration but a rather slow convergence rate. Nevertheless, when the solution
is sparse, it often exhibits fast linear convergence in the final stage. We
exploit the local linear convergence using a homotopy continuation strategy,
i.e., we solve the $\ell_1$-LS problem for a sequence of decreasing values of
the regularization parameter, and use an approximate solution at the end of
each stage to warm start the next stage. Although similar strategies have been
studied in the literature, there has been no theoretical analysis of their
global iteration complexity. This paper shows that under suitable assumptions
for sparse recovery, the proposed homotopy strategy ensures that all iterates
along the homotopy solution path are sparse. Therefore the objective function
is effectively strongly convex along the solution path, and geometric
convergence at each stage can be established. As a result, the overall
iteration complexity of our method is $O(\log(1/\epsilon))$ for finding an
$\epsilon$-optimal solution, which can be interpreted as a global geometric rate
of convergence. We also present empirical results to support our theoretical
analysis.
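The warm-starting scheme analyzed here can be sketched as a continuation loop around the proximal-gradient (ISTA) update; the geometric decrease factor eta and the fixed inner-iteration budget below are simplifying assumptions, whereas the paper solves each stage to an adaptively chosen accuracy.

```python
import numpy as np

def homotopy_ista(A, b, lam_target, eta=0.7, inner_iter=50):
    # Continuation for l1-regularized least squares: decrease the
    # regularization parameter geometrically, warm-starting each stage
    # from the previous stage's approximate solution.
    lam = np.max(np.abs(A.T @ b))        # for lam >= this value, x = 0 is optimal
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])             # warm start for the first stage
    while lam > lam_target:
        lam = max(eta * lam, lam_target) # next stage's regularization level
        for _ in range(inner_iter):      # inexact stage solve (ISTA steps)
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

Warm starts keep every stage's iterates sparse, which is exactly the property the analysis uses to obtain effective strong convexity along the solution path.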
Iterative Soft/Hard Thresholding with Homotopy Continuation for Sparse Recovery
In this note, we analyze an iterative soft/hard thresholding algorithm with
homotopy continuation for recovering a sparse signal from noisy data
of a noise level $\epsilon$. Under suitable regularity and sparsity conditions,
we design a path along which the algorithm can find a solution that admits a
sharp reconstruction error, with an iteration complexity of
$O((\ln\epsilon/\ln\gamma)\,np)$, where $n$ and $p$ are the problem dimensions
and $\gamma$ controls the length of the path. Numerical examples are given to
illustrate its performance.
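In the same spirit, here is a minimal sketch of iterative hard thresholding driven along a geometrically decreasing threshold path; the initial threshold, the decrease factor gamma, and the fixed inner-iteration budget are illustrative assumptions, and the paper's precise path design and stopping rules differ.

```python
import numpy as np

def hard_threshold(z, t):
    # Keep entries with magnitude above t; zero out the rest.
    return np.where(np.abs(z) > t, z, 0.0)

def iht_continuation(A, b, t_final, gamma=0.8, inner_iter=30):
    # Iterative hard thresholding with continuation: run a few
    # gradient-plus-threshold steps, then shrink the threshold t -> gamma*t.
    L = np.linalg.norm(A, 2) ** 2        # gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    t = np.max(np.abs(A.T @ b)) / L      # large initial threshold on the path
    while t > t_final:
        t = max(gamma * t, t_final)      # move down the continuation path
        for _ in range(inner_iter):
            x = hard_threshold(x - A.T @ (A @ x - b) / L, t)
    return x
```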