
    Sparse Recovery via Differential Inclusions

    In this paper, we recover sparse signals from their noisy linear measurements by solving nonlinear differential inclusions, based on the notion of inverse scale space (ISS) developed in applied mathematics. Our goal here is to bring this idea to a challenging problem in statistics, i.e., finding, via dynamics, the oracle estimator, which is unbiased and sign-consistent. We call our dynamics Bregman ISS and Linearized Bregman ISS. A well-known shortcoming of LASSO and of convex regularization approaches in general lies in the bias of their estimators. However, we show that under proper conditions there exists a bias-free and sign-consistent point on the solution paths of these dynamics, corresponding to a signal that is an unbiased estimate of the true signal and whose entries have the same signs as those of the true signal, i.e., the oracle estimator. Their solution paths are therefore better regularization paths than the LASSO regularization path, since points on the latter are biased by the time sign-consistency is reached. We also show how to compute the solution paths efficiently in both continuous and discretized settings: the full solution paths can be computed exactly, piece by piece, and a discretization leads to the Linearized Bregman iteration, a simple iterative thresholding rule that is easy to parallelize. Theoretical guarantees, such as sign-consistency and minimax-optimal \ell_2-error bounds, are established in both continuous and discrete settings for specific points on the paths, and early-stopping rules for identifying these points are given. The key treatment relies on the development of differential inequalities for differential inclusions and their discretizations, which extends previous results and leads to exponentially fast recovery of sparse signals before any wrong ones are selected.
    Comment: In Applied and Computational Harmonic Analysis, 201
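    In the discrete setting, the Linearized Bregman iteration described above amounts to a soft-thresholding loop. A minimal NumPy sketch, assuming the standard formulation min kappa*||x||_1 + (1/2)||x||_2^2 subject to Ax = b; the default parameter values, step-size rule, and iteration count here are illustrative choices, not the paper's:

    ```python
    import numpy as np

    def shrink(z, lam):
        # componentwise soft-thresholding
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def linearized_bregman(A, b, kappa=10.0, step=None, n_iter=5000):
        # Targets min kappa*||x||_1 + 0.5*||x||_2^2  s.t.  Ax = b;
        # equivalent to gradient ascent on a dual, with z playing the
        # role of A^T y for the dual variable y.
        n = A.shape[1]
        if step is None:
            # 1/L for the dual gradient, with L = kappa * ||A||_2^2
            step = 1.0 / (kappa * np.linalg.norm(A, 2) ** 2)
        z = np.zeros(n)
        x = np.zeros(n)
        for _ in range(n_iter):
            z = z + step * A.T @ (b - A @ x)   # dual gradient step
            x = kappa * shrink(z, 1.0)         # iterative thresholding
        return x
    ```

    Each step costs two matrix-vector products plus a componentwise threshold, which is why the rule parallelizes so easily.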

    Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

    This paper studies the long-existing idea of adding a nice smooth function to "smooth" a non-differentiable objective function in the context of sparse optimization, in particular, the minimization of ||x||_1 + 1/(2\alpha)||x||_2^2, where x is a vector, as well as the minimization of ||X||_* + 1/(2\alpha)||X||_F^2, where X is a matrix and ||X||_* and ||X||_F are the nuclear and Frobenius norms of X, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing ||x||_1 and ||X||_* under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector x^0, minimizing ||x||_1 + 1/(2\alpha)||x||_2^2 returns (nearly) the same solution as minimizing ||x||_1 almost whenever \alpha \ge 10||x^0||_\infty. The same relation also holds between minimizing ||X||_* + 1/(2\alpha)||X||_F^2 and minimizing ||X||_* for recovering a (nearly) low-rank matrix X^0, if \alpha \ge 10||X^0||_2. Furthermore, we show that the linearized Bregman algorithm for minimizing ||x||_1 + 1/(2\alpha)||x||_2^2 subject to Ax = b enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a unique solution or any properties of A. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
    Comment: arXiv admin note: text overlap with arXiv:1207.5326 by other authors
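    The augmented objective ||x||_1 + 1/(2\alpha)||x||_2^2 remains as easy to handle as plain \ell_1: a short calculation shows its proximal operator is soft-thresholding followed by a uniform scaling toward zero. A small sketch (the function name and the numerical check are ours, not from the paper):

    ```python
    import numpy as np

    def prox_augmented_l1(v, alpha, tau):
        # prox of tau * ( ||x||_1 + 1/(2*alpha) * ||x||_2^2 ):
        # soft-threshold at tau, then scale by alpha / (alpha + tau)
        shrunk = np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
        return (alpha / (alpha + tau)) * shrunk
    ```

    As alpha grows the scaling factor alpha/(alpha + tau) approaches 1 and the operator reduces to the plain \ell_1 prox, which is consistent with the claim that taking \alpha \ge 10||x^0||_\infty returns (nearly) the \ell_1 solution.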

    The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices

    This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix when an unknown fraction of its entries is arbitrarily corrupted. This problem arises in many applications, such as image processing, web data ranking, and bioinformatic data analysis. It was recently shown that, under surprisingly broad conditions, the Robust PCA problem can be solved exactly via convex optimization that minimizes a combination of the nuclear norm and the \ell^1-norm. In this paper, we apply the method of augmented Lagrange multipliers (ALM) to solve this convex program. As the objective function is non-smooth, we show how to extend the classical analysis of ALM to such objective functions, prove the optimality of the proposed algorithms, and characterize their convergence rate. Empirically, the proposed new algorithms can be more than five times faster than the previous state-of-the-art algorithms for Robust PCA, such as the accelerated proximal gradient (APG) algorithm, while achieving higher precision and requiring less storage/memory. We also show that the ALM technique can be used to solve the (related but somewhat simpler) matrix completion problem, with rather promising results. We further prove the necessary and sufficient condition for the inexact ALM to converge globally. Matlab code for all the algorithms discussed is available at http://perception.csl.illinois.edu/matrix-rank/home.html
    Comment: Please cite "Zhouchen Lin, Risheng Liu, and Zhixun Su, Linearized Alternating Direction Method with Adaptive Penalty for Low Rank Representation, NIPS 2011" (available at arXiv:1109.0367) instead, for a more general method called the Linearized Alternating Direction Method. This manuscript first appeared as University of Illinois at Urbana-Champaign technical report #UILU-ENG-09-2215 in October 2009.
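    The inexact ALM alternation for Robust PCA can be sketched in a few lines: each pass applies singular-value thresholding for the nuclear-norm term, entrywise shrinkage for the \ell_1 term, and a multiplier update. The parameter choices below (lam = 1/sqrt(max(m, n)), the mu schedule and cap) follow common practice for this method, but the exact defaults are our assumptions, not a transcription of the authors' Matlab code:

    ```python
    import numpy as np

    def svt(M, tau):
        # singular value thresholding: prox of tau * nuclear norm
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def shrink(M, tau):
        # entrywise soft-thresholding: prox of tau * l1 norm
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def rpca_ialm(D, lam=None, mu=None, rho=1.5, n_iter=100, tol=1e-7):
        # inexact ALM for  min ||L||_* + lam * ||S||_1  s.t.  L + S = D
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
        mu = 1.25 / np.linalg.norm(D, 2) if mu is None else mu
        mu_bar = mu * 1e7
        Y = np.zeros_like(D)   # Lagrange multiplier
        S = np.zeros_like(D)
        for _ in range(n_iter):
            L = svt(D - S + Y / mu, 1.0 / mu)      # nuclear-norm prox step
            S = shrink(D - L + Y / mu, lam / mu)   # l1 prox step
            Y = Y + mu * (D - L - S)               # multiplier update
            mu = min(mu * rho, mu_bar)             # increase penalty
            if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
                break
        return L, S
    ```

    The per-iteration cost is dominated by one SVD, which is also where the memory savings over APG-style methods come from: no extra history of iterates needs to be stored.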

    Accelerated Linearized Bregman Method

    In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. The accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires O(1/\epsilon) iterations to obtain an \epsilon-optimal solution, and that the ALB algorithm reduces this iteration complexity to O(1/\sqrt{\epsilon}) while requiring almost the same computational effort per iteration. Numerical results on compressed sensing and matrix completion problems demonstrate that the ALB method can be significantly faster than the LB method.
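    The dual-gradient view makes the acceleration mechanical: apply Nesterov/FISTA-style extrapolation to the dual variable. A sketch under the formulation min ||x||_1 + 1/(2*alpha)*||x||_2^2 subject to Ax = b, whose dual gradient at y is b - A x(y) with x(y) = alpha*shrink(A^T y, 1); the default alpha, step size, and iteration count are illustrative assumptions:

    ```python
    import numpy as np

    def shrink(z, lam):
        # componentwise soft-thresholding
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def accelerated_lb(A, b, alpha=10.0, step=None, n_iter=2000):
        # ALB sketch: FISTA-style extrapolation on the dual variable y
        m = A.shape[0]
        if step is None:
            # 1/L for the dual gradient, with L = alpha * ||A||_2^2
            step = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
        y = np.zeros(m)
        y_prev = np.zeros(m)
        t = 1.0
        for _ in range(n_iter):
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            w = y + ((t - 1.0) / t_next) * (y - y_prev)  # extrapolated dual point
            x = alpha * shrink(A.T @ w, 1.0)             # primal from dual
            y_prev, y = y, w + step * (b - A @ x)        # dual gradient step
            t = t_next
        return alpha * shrink(A.T @ y, 1.0)
    ```

    Compared with plain LB, the only extra work per iteration is forming the extrapolated point w, which is what "almost the same computational effort" refers to.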