
    A Singular Value Thresholding Algorithm for Matrix Completion

    This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, such as recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple, easy-to-implement first-order algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. Two remarkable features make this attractive for low-rank matrix completion problems: first, the soft-thresholding operation is applied to a sparse matrix; second, the rank of the iterates {X^k} is empirically nondecreasing. Together, these facts allow the algorithm to use minimal storage and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach scales to very large problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
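
    To make the soft-thresholding iteration concrete, here is a minimal dense sketch of an SVT-style loop for matrix completion in Python. The function name, parameter defaults, and stopping rule are illustrative assumptions; the paper's actual implementation also exploits the sparsity of Y^k and truncated SVDs, which this toy version omits.

    import numpy as np

    def svt_complete(M, mask, tau=None, delta=1.2, n_iter=200, tol=1e-4):
        """Minimal singular value thresholding (SVT) sketch.

        M     observed matrix (entries outside `mask` are ignored)
        mask  boolean array, True where entries of M are observed
        tau   threshold on singular values (the paper suggests ~5*sqrt(m*n))
        delta step size for the dual update
        """
        m, n = M.shape
        if tau is None:
            tau = 5 * np.sqrt(m * n)
        Y = np.zeros(M.shape)
        for _ in range(n_iter):
            # X^k = D_tau(Y^{k-1}): soft-threshold the singular values of Y.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt
            # Y^k = Y^{k-1} + delta * P_Omega(M - X^k): dual step on the
            # observed entries only, so Y stays supported on the mask.
            residual = mask * (M - X)
            Y += delta * residual
            if np.linalg.norm(residual) <= tol * np.linalg.norm(mask * M):
                break
        return X

    # Usage: recover a small rank-10 matrix from 40% of its entries.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10)) @ rng.standard_normal((10, 50))
    mask = rng.random(A.shape) < 0.4
    X_hat = svt_complete(A * mask, mask)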

    Matrix recovery using Split Bregman

    In this paper we address the problem of recovering a matrix with inherent low-rank structure from its lower-dimensional projections. This problem is frequently encountered in a wide range of areas, including pattern recognition, wireless sensor networks, control systems, recommender systems, and image/video reconstruction. In both theory and practice, the standard way to solve the low-rank matrix recovery problem is via nuclear norm minimization. In this paper, we propose a Split Bregman algorithm for nuclear norm minimization. The Bregman technique improves the convergence speed of our algorithm and yields a higher success rate. The reconstruction accuracy is also much better, even when only a small number of linear measurements is available. These claims are supported by empirical results obtained with our algorithm and by comparison with other existing methods for matrix recovery. The algorithms are compared on the basis of normalized mean squared error (NMSE), execution time, and success rate for varying ranks and sampling ratios.
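
    As a rough illustration of the splitting idea, the sketch below applies a split Bregman style iteration (one inner step per outer Bregman update) to the matrix completion instance min ‖X‖_* s.t. P_Ω(X) = P_Ω(M), where the sampling operator is an entrywise mask so the least-squares subproblem has a closed form. The parameter values and the single inner iteration are assumptions made for readability, not the authors' exact algorithm.

    import numpy as np

    def shrink_nuclear(Z, tau):
        # Proximal operator of tau*||.||_*: soft-threshold singular values.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def split_bregman_complete(M, mask, mu=1.0, lam=1.0, n_iter=300):
        """Sketch of split Bregman for min ||X||_* s.t. P_mask(X) = P_mask(M),
        using the splitting W = X; mask is a boolean or 0/1 array."""
        X = np.zeros(M.shape)
        W = np.zeros(M.shape)
        B = np.zeros(M.shape)   # Bregman (dual) variable for the split W = X
        b = mask * M            # Bregman-refreshed data term
        for _ in range(n_iter):
            # X-update: entrywise least squares, closed form because the
            # sampling operator is a 0/1 mask.
            X = (mu * b + lam * (W - B)) / (mu * mask + lam)
            # W-update: nuclear norm prox (singular value shrinkage).
            W = shrink_nuclear(X + B, 1.0 / lam)
            B += X - W
            # Outer Bregman update: add the data residual back into b.
            b += mask * M - mask * X
        return W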

    Accelerated Linearized Bregman Method

    In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires O(1/ϵ) iterations to obtain an ϵ-optimal solution, and that the ALB algorithm reduces this iteration complexity to O(1/√ϵ) while requiring almost the same computational effort per iteration. Numerical results on compressed sensing and matrix completion problems demonstrate that the ALB method can be significantly faster than the LB method.
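
    The dual equivalence mentioned above is short enough to state in code: LB is gradient ascent on the dual of min μ‖x‖_1 + ½‖x‖_2² s.t. Ax = b, and ALB adds Nesterov-style extrapolation to the dual variable. The following is a schematic rendering of that idea; the step size, the value of μ, and the iteration count are assumed defaults, not values taken from the paper.

    import numpy as np

    def shrink(v, mu):
        # Soft-thresholding (shrinkage) operator.
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def accelerated_linearized_bregman(A, b, mu=5.0, n_iter=500):
        """ALB sketch: Nesterov-accelerated gradient ascent on the dual of
        min mu*||x||_1 + 0.5*||x||_2^2 s.t. Ax = b."""
        m, n = A.shape
        t = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step: 1/L with L = ||A||^2
        y = np.zeros(m)       # dual variable
        z = y.copy()          # extrapolated point
        theta = 1.0
        for _ in range(n_iter):
            x = shrink(A.T @ z, mu)           # primal iterate from the dual
            y_next = z + t * (b - A @ x)      # dual gradient ascent step
            theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
            z = y_next + ((theta - 1.0) / theta_next) * (y_next - y)
            y, theta = y_next, theta_next
        return shrink(A.T @ y, mu)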

    An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems

    We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), in which a (possibly non-smooth) regularizer is minimized under the constraint that the solution explain the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, non-smoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers (ADMM), for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state of the art.
    Comment: 13 pages, 8 figures, 8 tables. Submitted to the IEEE Transactions on Image Processing.
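
    To illustrate the alternating-direction structure, here is a hedged sketch for one constrained instance, min ‖x‖_1 s.t. ‖Ax − y‖_2 ≤ ε, under the simplifying assumption that A has orthonormal rows (AA^T = I, as with partial Fourier sampling in compressive MRI), so the projection subproblem has a closed form. This shows only the ADMM skeleton and is not the paper's exact algorithm, which also handles frame-based and total-variation regularizers.

    import numpy as np

    def project_ball(r, y, eps):
        # Project r onto the Euclidean ball of radius eps centered at y.
        d = r - y
        nrm = np.linalg.norm(d)
        return y + d * min(1.0, eps / max(nrm, 1e-12))

    def admm_constrained_l1(A, y, eps, rho=1.0, n_iter=300):
        """ADMM sketch for min ||x||_1 s.t. ||Ax - y||_2 <= eps, assuming
        A A^T = I so that projecting onto {x : ||Ax - y|| <= eps} is in
        closed form."""
        n = A.shape[1]
        z = np.zeros(n)   # l1 variable
        u = np.zeros(n)   # scaled dual variable
        for _ in range(n_iter):
            # x-update: project z - u onto the constraint set; with
            # A A^T = I this is w + A^T (P_ball(Aw) - Aw).
            w = z - u
            Aw = A @ w
            x = w + A.T @ (project_ball(Aw, y, eps) - Aw)
            # z-update: l1 prox (soft-thresholding with threshold 1/rho).
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - 1.0 / rho, 0.0)
            # dual update.
            u += x - z
        return z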