Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction
Low rank matrix approximation (LRMA), which aims to recover the underlying
low rank matrix from its degraded observation, has a wide range of applications
in computer vision. The latest LRMA methods resort to using the nuclear norm
minimization (NNM) as a convex relaxation of the nonconvex rank minimization.
However, NNM tends to over-shrink the rank components and treats the different
rank components equally, limiting its flexibility in practical applications. We
propose a more flexible model, namely the Weighted Schatten p-Norm
Minimization (WSNM), to generalize NNM to the Schatten p-norm
minimization with weights assigned to different singular values. The proposed
WSNM not only gives a better approximation to the original low-rank assumption,
but also considers the importance of different rank components. We analyze the
solution of WSNM and prove that, under a certain permutation of the weights,
WSNM can be equivalently transformed into independent non-convex ℓ_p-norm
subproblems, whose global optima can be efficiently solved by the generalized
iterated shrinkage algorithm. We apply WSNM to typical low-level vision
problems, e.g.,
image denoising and background subtraction. Extensive experimental results
show, both qualitatively and quantitatively, that the proposed WSNM can more
effectively remove noise, and model complex and dynamic scenes compared with
state-of-the-art methods.
Comment: 13 pages, 11 figures
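The per-singular-value subproblem behind WSNM can be sketched with the generalized soft-thresholding (GST) iteration the abstract refers to. This is a minimal illustration, not the authors' code: it assumes p in (0, 1), the standard GST threshold/fixed-point scheme, and hypothetical function names.

```python
import numpy as np

def gst(sigma, w, p, iters=10):
    """Generalized soft-thresholding: argmin_d  w*|d|^p + 0.5*(d - sigma)^2.

    Below the threshold tau the global optimum is exactly 0; above it, a
    fixed-point iteration converges to the nonzero minimizer.
    """
    tau = (2.0 * w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + w * p * (2.0 * w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(sigma) <= tau:
        return 0.0
    d = abs(sigma)
    for _ in range(iters):
        # Fixed point of d = |sigma| - w*p*d^(p-1); stays positive above tau.
        d = abs(sigma) - w * p * d ** (p - 1.0)
    return np.sign(sigma) * d

def wsnm_prox(Y, weights, p):
    """One WSNM proximal step: shrink each singular value of Y with its own
    weight via GST, then rebuild the matrix (hypothetical sketch)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.array([gst(si, wi, p) for si, wi in zip(s, weights)])
    return (U * s_new) @ Vt
```

Per-value weights let large singular values (which carry the dominant structure) be shrunk less than small, noise-dominated ones, which is the flexibility the abstract claims over plain NNM.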
Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm
The nuclear norm is widely used as a convex surrogate of the rank function in
compressive sensing for low rank matrix recovery with its applications in image
recovery and signal processing. However, solving the nuclear norm based relaxed
convex problem usually leads to a suboptimal solution of the original rank
minimization problem. In this paper, we propose to apply a family of
nonconvex surrogates of the ℓ_0-norm to the singular values of a matrix to
approximate the rank function. This leads to a nonconvex nonsmooth minimization
problem. Then we propose to solve the problem by Iteratively Reweighted Nuclear
Norm (IRNN) algorithm. IRNN iteratively solves a Weighted Singular Value
Thresholding (WSVT) problem, which has a closed form solution due to the
special properties of the nonconvex surrogate functions. We also extend IRNN to
solve the nonconvex problem with two or more blocks of variables. In theory, we
prove that IRNN decreases the objective function value monotonically, and any
limit point is a stationary point. Extensive experiments on both synthesized
data and real images demonstrate that IRNN enhances the low-rank matrix
recovery compared with state-of-the-art convex algorithms.
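The IRNN iteration can be sketched on a matrix-completion instance. This is a hedged illustration, not the paper's implementation: the log-style concave surrogate, the penalty weight `lam`, and the step parameter `mu` are assumptions; the structure — weights taken as the surrogate's gradient at the previous singular values, followed by a closed-form weighted SVT — follows the abstract.

```python
import numpy as np

def irnn_complete(M, mask, lam=0.5, gamma=5.0, mu=1.1, iters=50):
    """IRNN sketch for matrix completion with the concave surrogate
    g(s) = lam * log(gamma*s + 1) on the singular values.

    f(X) = 0.5 * ||mask * (X - M)||_F^2 is the smooth data term; its
    gradient is 1-Lipschitz, so any mu >= 1 is a valid step parameter.
    """
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        # Weights = gradient of the concave penalty at the previous singular
        # values: larger singular values get smaller weights (less shrinkage).
        s_prev = np.linalg.svd(X, compute_uv=False)
        w = lam * gamma / (gamma * s_prev + 1.0)
        # Gradient step on f, then the weighted SVT subproblem in closed form.
        G = X - (1.0 / mu) * mask * (X - M)
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        s_new = np.maximum(s - w / mu, 0.0)
        X = (U * s_new) @ Vt
    return X
```

At the first iteration all weights are equal (nuclear-norm behavior); as singular values grow across iterations, reweighting progressively concentrates shrinkage on the small, noise-dominated values.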
Generalized Nonconvex Nonsmooth Low-Rank Minimization
As surrogate functions of the ℓ_0-norm, many nonconvex penalty functions have
been proposed to enhance the sparse vector recovery. It is easy to extend these
nonconvex penalty functions on singular values of a matrix to enhance low-rank
matrix recovery. However, different from convex optimization, solving the
nonconvex low-rank minimization problem is much more challenging than the
nonconvex sparse minimization problem. We observe that all the existing
nonconvex penalty functions are concave and monotonically increasing on
[0, ∞). Thus their gradients are decreasing functions. Based on this
property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to
solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively
solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the
weight vector as the gradient of the concave penalty function, the WSVT problem
has a closed form solution. In theory, we prove that IRNN decreases the
objective function value monotonically, and any limit point is a stationary
point. Extensive experiments on both synthetic data and real images demonstrate
that IRNN enhances the low-rank matrix recovery compared with state-of-the-art
convex algorithms.
Comment: IEEE International Conference on Computer Vision and Pattern
Recognition, 201
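The WSVT subproblem at the heart of this algorithm has the closed form the abstract mentions, valid when the weights are non-decreasing — which IRNN guarantees, since the concave penalty's gradient is decreasing in the singular value. A minimal sketch with an illustrative function name:

```python
import numpy as np

def wsvt(Y, w):
    """Weighted Singular Value Thresholding:
    argmin_X  sum_i w_i * sigma_i(X) + 0.5 * ||X - Y||_F^2.

    Closed form when w is sorted non-decreasing (so the largest singular
    value, sigma_1, receives the smallest weight, w_1).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vt
```

Each singular value is simply shifted down by its own weight and clipped at zero; no inner iteration is needed, which is why IRNN's per-step cost is a single SVD.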
Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications
Robust Principal Component Analysis (RPCA) via rank minimization is a
powerful tool for recovering underlying low-rank structure of clean data
corrupted with sparse noise/outliers. In many low-level vision problems, not
only is it known that the underlying structure of clean data is low-rank, but
the exact rank of clean data is also known. Yet, when applying conventional
rank minimization for those problems, the objective function is formulated in a
way that does not fully utilize a priori target rank information about the
problems. This observation motivates us to investigate whether there is a
better alternative solution when using rank minimization. In this paper,
instead of minimizing the nuclear norm, we propose to minimize the partial sum
of singular values, which implicitly encourages the target rank constraint. Our
experimental analyses show that, when the number of samples is deficient, our
approach leads to a higher success rate than conventional rank minimization,
while the solutions obtained by the two approaches are almost identical when
the number of samples is more than sufficient. We apply our approach to various
low-level vision problems, e.g. high dynamic range imaging, motion edge
detection, photometric stereo, image alignment and recovery, and show that our
results outperform those obtained by the conventional nuclear norm rank
minimization method.
Comment: Accepted in Transactions on Pattern Analysis and Machine Intelligence
(TPAMI). To appear.
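The partial-sum idea can be sketched as a partial singular value thresholding step: singular values up to the known target rank pass through unchanged, and only the residual tail is shrunk, which implicitly encourages the target rank. This is a minimal sketch of that operator in isolation (the paper uses it inside a larger RPCA solver); the threshold `tau` is a placeholder parameter.

```python
import numpy as np

def partial_svt(Y, target_rank, tau):
    """Proximal step for the partial sum of singular values
    sum_{i > target_rank} sigma_i(Y): the leading `target_rank` singular
    values are kept intact; only the tail is soft-thresholded by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = s.copy()
    s_new[target_rank:] = np.maximum(s[target_rank:] - tau, 0.0)
    return (U * s_new) @ Vt
```

Compared with plain SVT, the leading components suffer no shrinkage bias, which matches the abstract's observation that exploiting the known target rank helps most when samples are scarce.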