Sparse Recovery via Differential Inclusions
In this paper, we recover sparse signals from their noisy linear measurements
by solving nonlinear differential inclusions, an approach based on the notion of
inverse scale space (ISS) developed in applied mathematics. Our goal here is to
bring this idea to address a challenging problem in statistics, \emph{i.e.}
finding the oracle estimator which is unbiased and sign-consistent using
dynamics. We call our dynamics \emph{Bregman ISS} and \emph{Linearized Bregman
ISS}. A well-known shortcoming of LASSO and any convex regularization
approaches lies in the bias of their estimators. However, we show that under proper
conditions, there exists a bias-free and sign-consistent point on the solution
paths of such dynamics, which corresponds to an unbiased estimate of the true
signal whose entries carry the same signs as those of the true signal,
\emph{i.e.} the oracle estimator. Therefore, these solution paths are better
regularization paths than the LASSO path, whose points are biased once
sign-consistency is reached. We
also show how to efficiently compute their solution paths in both continuous
and discretized settings: the full solution paths can be exactly computed piece
by piece, and a discretization leads to \emph{Linearized Bregman iteration},
which is a simple iterative thresholding rule and easy to parallelize.
Theoretical guarantees such as sign-consistency and minimax optimal $\ell_2$-error
bounds are established in both continuous and discrete settings for specific
points on the paths. Early-stopping rules for identifying these points are
given. The key treatment relies on the development of differential inequalities
for differential inclusions and their discretizations, which extends previous
results and leads to exponentially fast recovery of sparse signals before any
wrong ones are selected.
Comment: In Applied and Computational Harmonic Analysis, 201
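The Linearized Bregman iteration admits a very compact implementation. Below is a minimal sketch in the form common in the compressed-sensing literature, not necessarily this paper's exact parametrization: the step size `delta`, the threshold `mu`, and the fixed iteration count are illustrative choices.

```python
import numpy as np

def shrink(v, mu):
    """Entrywise soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=1.0, delta=None, n_iter=2000):
    """Linearized Bregman iteration for sparse recovery from b = A @ x.

    Alternates a dual gradient step with entrywise soft-thresholding:
        v <- v + A^T (b - A x),   x <- delta * shrink(v, mu).
    """
    m, n = A.shape
    if delta is None:
        # Conservative step size tied to the spectral norm of A.
        delta = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    v = np.zeros(n)
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)      # dual gradient step
        x = delta * shrink(v, mu)   # simple iterative thresholding rule
    return x
```

Each sweep costs two matrix-vector products plus an entrywise threshold, which is what makes the rule easy to parallelize; in the spirit of the paper's analysis, one would add an early-stopping rule rather than iterate to convergence.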
Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm
This paper studies the long-existing idea of adding a nice smooth function to
"smooth" a non-differentiable objective function in the context of sparse
optimization, in particular, the minimization of
$\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the
minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and
$\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$,
respectively. We show that they can efficiently recover sparse vectors and
low-rank matrices. In particular, they enjoy exact and stable recovery
guarantees similar to those known for minimizing $\|x\|_1$ and $\|X\|_*$ under
the conditions on the sensing operator such as its null-space property,
restricted isometry property, spherical section property, or RIPless property.
To recover a (nearly) sparse vector $x^0$, minimizing
$\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing
$\|x\|_1$ almost whenever $\alpha \ge 10\|x^0\|_\infty$. The same relation also
holds between minimizing $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$
for recovering a (nearly) low-rank matrix $X^0$, if $\alpha \ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm for
minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$ enjoys global
linear convergence as long as a nonzero solution exists, and we give an
explicit rate of convergence. The convergence property does not require a
sparse solution or any properties of $A$. To our knowledge, this is the best
known global convergence result for first-order sparse optimization algorithms.
Comment: arXiv admin note: text overlap with arXiv:1207.5326 by other authors
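To make the role of $\alpha$ concrete: with the parametrization $x = \delta\,\mathrm{shrink}(v, \mu)$ used in the sketch above, the linearized Bregman iteration targets $\min_x \mu\|x\|_1 + \frac{1}{2\delta}\|x\|_2^2$ s.t. $Ax = b$, i.e. the augmented model with $\alpha = \mu\delta$. The hypothetical helper below wires the paper's rule $\alpha \ge 10\|x^0\|_\infty$ into that choice; it reuses `linearized_bregman` from the earlier sketch, and `x_inf_bound` is an assumed a-priori bound on $\|x^0\|_\infty$.

```python
import numpy as np

def augmented_l1_recover(A, b, x_inf_bound, n_iter=5000):
    """Recover a sparse x via the augmented-l1 model using linearized Bregman.

    Under x = delta * shrink(v, mu), the LB iteration solves
        min ||x||_1 + ||x||_2^2 / (2 * alpha)   s.t.  A x = b
    with alpha = mu * delta, so the rule alpha >= 10 * ||x^0||_inf fixes mu
    once delta is chosen. `x_inf_bound` is an assumed bound on ||x^0||_inf.
    """
    delta = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative dual step size
    alpha = 10.0 * x_inf_bound               # the paper's sufficient choice
    mu = alpha / delta                        # translate alpha into LB threshold
    return linearized_bregman(A, b, mu=mu, delta=delta, n_iter=n_iter)
```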
The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices
This paper proposes scalable and fast algorithms for solving the Robust PCA
problem, namely recovering a low-rank matrix with an unknown fraction of its
entries being arbitrarily corrupted. This problem arises in many applications,
such as image processing, web data ranking, and bioinformatic data analysis. It
was recently shown that under surprisingly broad conditions, the Robust PCA
problem can be exactly solved via convex optimization that minimizes a
combination of the nuclear norm and the $\ell_1$-norm. In this paper, we apply
the method of augmented Lagrange multipliers (ALM) to solve this convex
program. As the objective function is non-smooth, we show how to extend the
classical analysis of ALM to such objective functions, prove the
optimality of the proposed algorithms, and characterize their convergence rate.
Empirically, the proposed new algorithms can be more than five times faster
than the previous state-of-the-art algorithms for Robust PCA, such as the
accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms
achieve higher precision while demanding less storage/memory. We also show
that the ALM technique can be used to solve the (related but somewhat simpler)
matrix completion problem and obtain rather promising results too. We further
prove the necessary and sufficient condition for the inexact ALM to converge
globally. Matlab code for all the algorithms discussed is available at
http://perception.csl.illinois.edu/matrix-rank/home.html
Comment: Please cite "Zhouchen Lin, Risheng Liu, and Zhixun Su, Linearized
Alternating Direction Method with Adaptive Penalty for Low Rank
Representation, NIPS 2011" (available at arXiv:1109.0367) instead, for a more
general method called the Linearized Alternating Direction Method. This manuscript
first appeared as University of Illinois at Urbana-Champaign technical report
#UILU-ENG-09-2215 in October 2009.
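For reference, an inexact-ALM loop for Robust PCA has the following shape. This is a minimal sketch of the standard formulation $\min \|L\|_* + \lambda\|S\|_1$ s.t. $L + S = D$; the weight $\lambda = 1/\sqrt{\max(m,n)}$, the initial penalty $\mu$, and the growth factor $\rho$ are common heuristics from the RPCA literature rather than necessarily the exact settings analyzed in the paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: prox of tau * (l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_ialm(D, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
    """Inexact ALM sketch for Robust PCA: min ||L||_* + lam*||S||_1 s.t. L + S = D."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # common RPCA weight
    mu = mu if mu is not None else 1.25 / np.linalg.norm(D, 2)  # heuristic initial penalty
    Y = np.zeros_like(D)   # Lagrange multiplier
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)    # exact minimization over L
        S = soft(D - L + Y / mu, lam / mu)   # exact minimization over S
        R = D - L - S                        # constraint residual
        Y = Y + mu * R                       # multiplier ascent step
        mu = rho * mu                        # increase the penalty
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return L, S
```

The dominant cost per iteration is the SVD inside `svt`; production implementations replace it with a partial SVD of rank tracked across iterations.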
Accelerated Linearized Bregman Method
In this paper, we propose and analyze an accelerated linearized Bregman (ALB)
method for solving the basis pursuit and related sparse optimization problems.
This accelerated algorithm is based on the fact that the linearized Bregman
(LB) algorithm is equivalent to a gradient descent method applied to a certain
dual formulation. We show that the LB method requires $O(1/\epsilon)$
iterations to obtain an $\epsilon$-optimal solution and that the ALB algorithm
reduces this iteration complexity to $O(1/\sqrt{\epsilon})$ while requiring
almost the same computational effort on each iteration. Numerical results on
compressed sensing and matrix completion problems are presented that
demonstrate that the ALB method can be significantly faster than the LB method
- …
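Since LB is gradient descent on a smooth dual, the acceleration can be sketched by applying Nesterov-style extrapolation to the dual iterates. The rendering below is illustrative of that equivalence rather than the authors' exact scheme; the extrapolation sequence $t_k$ and the conservative step size are standard choices.

```python
import numpy as np

def accelerated_lb(A, b, mu=1.0, n_iter=500):
    """Sketch of an accelerated linearized Bregman (ALB) iteration.

    LB is gradient descent on a smooth dual problem, so extrapolating the
    dual iterates with Nesterov's t-sequence gives the accelerated rate.
    """
    m, n = A.shape
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative dual step size
    v = np.zeros(n)                            # dual iterate (A^T y coordinates)
    v_prev = v.copy()
    t_prev = 1.0
    x = np.zeros(n)
    for _ in range(n_iter):
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))          # Nesterov t-sequence
        w = v + ((t_prev - 1.0) / t) * (v - v_prev)                 # extrapolated point
        x = delta * np.sign(w) * np.maximum(np.abs(w) - mu, 0.0)    # primal from dual
        v_prev, v = v, w + A.T @ (b - A @ x)                        # dual gradient step at w
        t_prev = t
    return x
```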