Truncated Nuclear Norm Minimization for Image Restoration Based On Iterative Support Detection
Recovering a large matrix from limited measurements is a challenging task
arising in many real applications, such as image inpainting, compressive
sensing, and medical imaging; such problems are mostly formulated as
low-rank matrix approximation problems. Due to the rank operator being
non-convex and discontinuous, most of the recent theoretical studies use the
nuclear norm as a convex relaxation and the low-rank matrix recovery problem is
solved through minimization of the nuclear norm regularized problem. However, a
major limitation of nuclear norm minimization is that all the singular values
are simultaneously minimized and the rank may not be well approximated
\cite{hu2012fast}. Correspondingly, in this paper, we propose a new multi-stage
algorithm, which makes use of the concept of Truncated Nuclear Norm
Regularization (TNNR) proposed in \citep{hu2012fast} and Iterative Support
Detection (ISD) proposed in \citep{wang2010sparse} to overcome the above
limitation. Besides matrix completion problems considered in
\citep{hu2012fast}, the proposed method can also be extended to general
low-rank matrix recovery problems. Extensive experiments validate the
superiority of our new algorithms over other state-of-the-art methods.
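The truncated nuclear norm at the heart of TNNR penalizes only the tail singular values, leaving the r largest unpenalized so the target rank is better respected than under the plain nuclear norm. A minimal NumPy sketch of that quantity (the function name is mine, not from the paper):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of all but the r largest singular values of X.

    With r = 0 this is the ordinary nuclear norm; for r > 0 the r
    dominant singular values are excluded from the penalty.
    """
    s = np.linalg.svd(X, compute_uv=False)  # returned in descending order
    return float(np.sum(s[r:]))
```

For a matrix that is already nearly rank-r, the truncated norm is close to zero even when the nuclear norm is large, which is exactly the behavior the multi-stage algorithm exploits.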
An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems
We propose a new fast algorithm for solving one of the standard approaches to
ill-posed linear inverse problems (IPLIP), where a (possibly non-smooth)
regularizer is minimized under the constraint that the solution explains the
observations sufficiently well. Although the regularizer and constraint are
usually convex, several particular features of these problems (huge
dimensionality, non-smoothness) preclude the use of off-the-shelf optimization
tools and have stimulated a considerable amount of research. In this paper, we
propose a new efficient algorithm to handle one class of constrained problems
(often known as basis pursuit denoising) tailored to image recovery
applications. The proposed algorithm, which belongs to the family of augmented
Lagrangian methods, can be used to deal with a variety of imaging IPLIP,
including deconvolution and reconstruction from compressive observations (such
as MRI), using either total-variation or wavelet-based (or, more generally,
frame-based) regularization. The proposed algorithm is an instance of the
so-called "alternating direction method of multipliers", for which convergence
sufficient conditions are known; we show that these conditions are satisfied by
the proposed algorithm. Experiments on a set of image restoration and
reconstruction benchmark problems show that the proposed algorithm is a strong
contender for the state of the art.
Comment: 13 pages, 8 figures, 8 tables. Submitted to the IEEE Transactions on Image Processing.
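To make the alternating direction method of multipliers concrete, here is a minimal generic ADMM sketch for the ℓ1-regularized least-squares problem min_x ½‖Ax − b‖² + λ‖x‖₁ — a standard textbook instance, not the paper's specific frame-based image-recovery algorithm:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Generic ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1.

    Splits the problem as f(x) + g(z) with constraint x = z, then
    alternates an x-minimization (ridge-like solve), a z-minimization
    (soft-thresholding), and a dual update on u.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))  # cached; reused every iteration
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                         # x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # z-update
        u = u + x - z                                          # dual ascent
    return z
```

The same x/z/u alternation carries over to the imaging problems in the abstract; only the two proximal subproblems change (e.g., z-update becomes a TV or wavelet-domain shrinkage).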
A Singular Value Thresholding Algorithm for Matrix Completion
This paper introduces a novel algorithm to approximate the matrix with minimum
nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood
as the convex relaxation of a rank minimization problem and arises in many important
applications as in the task of recovering a large matrix from a small subset of its entries (the famous
Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable
to large problems of this kind with over a million unknown entries. This paper develops a simple
first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in
which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices
{X^k,Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values
of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix
completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix;
the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow
the algorithm to make use of very minimal storage space and keep the computational cost of each
iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence
of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000
matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate
that our approach is amenable to very large scale problems by recovering matrices of rank about
10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are
connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we
develop a framework in which one can understand these algorithms in terms of well-known Lagrange
multiplier algorithms
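The central operation described above — soft-thresholding the singular values of Y^k — can be sketched in a few lines of NumPy (the function name is my notation; the paper calls this the singular value shrinkage operator):

```python
import numpy as np

def svt_shrink(Y, tau):
    """Soft-threshold the singular values of Y at level tau.

    Computes the SVD of Y, shrinks each singular value toward zero by
    tau (clipping at zero), and reassembles the matrix. This is the
    proximal operator of tau * (nuclear norm).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

Because singular values below tau are set exactly to zero, the output is typically low-rank, which is why the iterates in the algorithm stay cheap to store and factor.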
Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction
Low rank matrix approximation (LRMA), which aims to recover the underlying
low rank matrix from its degraded observation, has a wide range of applications
in computer vision. The latest LRMA methods resort to using the nuclear norm
minimization (NNM) as a convex relaxation of the nonconvex rank minimization.
However, NNM tends to over-shrink the rank components and treats the different
rank components equally, limiting its flexibility in practical applications. We
propose a more flexible model, namely the Weighted Schatten p-Norm
Minimization (WSNM), to generalize the NNM to the Schatten p-norm
minimization with weights assigned to different singular values. The proposed
WSNM not only gives better approximation to the original low-rank assumption,
but also considers the importance of different rank components. We analyze the
solution of WSNM and prove that, under certain weights permutation, WSNM can be
equivalently transformed into independent non-convex ℓ_p-norm subproblems,
whose global optimum can be efficiently solved by generalized iterated
shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g.,
image denoising and background subtraction. Extensive experimental results
show, both qualitatively and quantitatively, that the proposed WSNM can more
effectively remove noise, and model complex and dynamic scenes compared with
state-of-the-art methods.
Comment: 13 pages, 11 figures.
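The weighted Schatten p-norm objective (to the p-th power) is simply a weighted power sum of singular values; a minimal NumPy sketch, with the function name my own:

```python
import numpy as np

def weighted_schatten_p(X, w, p):
    """Weighted Schatten p-norm of X, raised to the p-th power:
    sum_i w_i * sigma_i(X)**p, where sigma_i are singular values
    in descending order and w_i the per-component weights.
    """
    s = np.linalg.svd(X, compute_uv=False)  # descending order
    return float(np.sum(np.asarray(w) * s**p))
```

With p = 1 and uniform weights this reduces to the ordinary nuclear norm; smaller weights on the leading singular values leave the dominant rank components less penalized, which is the flexibility the abstract highlights over NNM.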