The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices
This paper proposes scalable and fast algorithms for solving the Robust PCA
problem, namely recovering a low-rank matrix with an unknown fraction of its
entries being arbitrarily corrupted. This problem arises in many applications,
such as image processing, web data ranking, and bioinformatic data analysis. It
was recently shown that under surprisingly broad conditions, the Robust PCA
problem can be exactly solved via convex optimization that minimizes a
combination of the nuclear norm and the L1 norm. In this paper, we apply
the method of augmented Lagrange multipliers (ALM) to solve this convex
program. As the objective function is non-smooth, we show how to extend the
classical analysis of ALM to such new objective functions, prove the
optimality of the proposed algorithms, and characterize their convergence rate.
Empirically, the proposed new algorithms can be more than five times faster
than the previous state-of-the-art algorithms for Robust PCA, such as the
accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms
achieve higher precision while demanding less storage/memory. We also show
that the ALM technique can be used to solve the (related but somewhat simpler)
matrix completion problem and obtain rather promising results too. We further
prove the necessary and sufficient condition for the inexact ALM to converge
globally. Matlab code for all algorithms discussed is available at
http://perception.csl.illinois.edu/matrix-rank/home.html
Comment: Please cite "Zhouchen Lin, Risheng Liu, and Zhixun Su, Linearized
Alternating Direction Method with Adaptive Penalty for Low Rank
Representation, NIPS 2011" (available at http://arxiv.org/abs/1109.0367)
instead, for a more general method called the Linearized Alternating Direction
Method. This manuscript first appeared as University of Illinois at
Urbana-Champaign technical report #UILU-ENG-09-2215 in October 2009.
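The inexact ALM iteration for Robust PCA alternates two closed-form proximal steps, singular value thresholding for the low-rank part and entrywise soft thresholding for the sparse part, then updates the Lagrange multiplier. A minimal NumPy sketch of this scheme follows; the initialization of the multiplier, the initial penalty mu, and the growth factor rho follow common defaults and are assumptions, not necessarily the paper's exact settings:

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Entrywise soft thresholding: proximal operator of the L1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    # Inexact ALM for Robust PCA: split D into A (low-rank) + E (sparse).
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))            # standard PCP weighting
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)     # dual variable init (assumed)
    mu, rho = 1.25 / norm2, 1.5                   # penalty and growth rate (assumed)
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)         # low-rank update
        E = shrink(D - A + Y / mu, lam / mu)      # sparse update
        resid = D - A - E
        Y = Y + mu * resid                        # multiplier update
        mu *= rho
        if np.linalg.norm(resid, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return A, E
```

Increasing mu geometrically is what makes this the "inexact" variant: each subproblem is solved only approximately, with one proximal step per outer iteration.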
Robust Principal Component Analysis?
This paper is about a curious phenomenon. Suppose we have a data matrix,
which is the superposition of a low-rank component and a sparse component. Can
we recover each component individually? We prove that under some suitable
assumptions, it is possible to recover both the low-rank and the sparse
components exactly by solving a very convenient convex program called Principal
Component Pursuit: among all feasible decompositions, simply minimize a
weighted combination of the nuclear norm and the L1 norm. This suggests the
possibility of a principled approach to robust principal component analysis
since our methodology and results assert that one can recover the principal
components of a data matrix even though a positive fraction of its entries are
arbitrarily corrupted. This extends to the situation where a fraction of the
entries are missing as well. We discuss an algorithm for solving this
optimization problem, and present applications in the area of video
surveillance, where our methodology allows for the detection of objects in a
cluttered background, and in the area of face recognition, where it offers a
principled way of removing shadows and specularities in images of faces
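The convex program named in the abstract is concrete enough to write down: over all decompositions of the data matrix, minimize the nuclear norm of the low-rank candidate plus a weighted L1 norm of the sparse candidate. A small sketch of evaluating that objective (the default weight follows the 1/sqrt(max dimension) rule used in the PCP analysis; the function name is illustrative):

```python
import numpy as np

def pcp_objective(L, S, lam=None):
    # Weighted PCP objective: ||L||_* + lam * ||S||_1.
    if lam is None:
        # Weighting suggested by the PCP analysis: 1 / sqrt(max dimension).
        lam = 1.0 / np.sqrt(max(L.shape))
    nuclear = np.linalg.svd(L, compute_uv=False).sum()  # sum of singular values
    return nuclear + lam * np.abs(S).sum()              # plus weighted L1 norm
```

Any solver for Principal Component Pursuit searches over pairs (L, S) with L + S equal to the data matrix, scoring each candidate with exactly this quantity.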
Dense Error Correction for Low-Rank Matrices via Principal Component Pursuit
We consider the problem of recovering a low-rank matrix when some of its
entries, whose locations are not known a priori, are corrupted by errors of
arbitrarily large magnitude. It has recently been shown that this problem can
be solved efficiently and effectively by a convex program named Principal
Component Pursuit (PCP), provided that the fraction of corrupted entries and
the rank of the matrix are both sufficiently small. In this paper, we extend
that result to show that the same convex program, with a slightly improved
weighting parameter, exactly recovers the low-rank matrix even if "almost all"
of its entries are arbitrarily corrupted, provided the signs of the errors are
random. We corroborate our result with simulations on randomly generated
matrices and errors.
Comment: Submitted to ISIT 201
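The simulations described above need test instances of a specific form: a random low-rank matrix plus errors of arbitrary magnitude whose signs are random and whose locations are unknown to the solver. A sketch of generating such an instance (the fixed error magnitude sigma is an assumption for illustration; the abstract only requires arbitrarily large errors with random signs):

```python
import numpy as np

def make_corrupted(m, n, r, rho, sigma=10.0, seed=0):
    # Rank-r matrix L0 plus errors E0 with random signs on a fraction
    # rho of entries, at locations not known a priori to the solver.
    rng = np.random.default_rng(seed)
    L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    mask = rng.random((m, n)) < rho                    # corrupted locations
    signs = rng.choice([-1.0, 1.0], size=(m, n))       # random error signs
    E0 = np.where(mask, signs * sigma, 0.0)            # large-magnitude errors
    return L0 + E0, L0, E0
```

Setting rho close to 1 reproduces the "almost all entries corrupted" regime the abstract studies, where recovery still succeeds because the error signs carry no information about the low-rank component.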
Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition
Hyperspectral images (HSIs) are often corrupted by a mixture of several types
of noise during the acquisition process, e.g., Gaussian noise, impulse noise,
dead lines, stripes, and many others. Such complex noise could degrade the
quality of the acquired HSIs, limiting the precision of the subsequent
processing. In this paper, we present a novel tensor-based HSI restoration
approach by fully identifying the intrinsic structures of the clean HSI part
and the mixed noise part respectively. Specifically, for the clean HSI part, we
use tensor Tucker decomposition to describe the global correlation among all
bands, and an anisotropic spatial-spectral total variation (SSTV)
regularization to characterize the piecewise smooth structure in both spatial
and spectral domains. For the mixed noise part, we adopt the L1 norm
regularization to detect the sparse noise, including stripes, impulse noise,
and dead pixels. Although TV regularization can remove Gaussian noise, a
Frobenius norm term is further used to model heavy Gaussian noise in some
real-world scenarios. Then, we develop an efficient algorithm
for solving the resulting optimization problem by using the augmented Lagrange
multiplier (ALM) method. Finally, extensive experiments on simulated and
real-world noisy HSIs are carried out to demonstrate the superiority of the
proposed method over existing state-of-the-art ones.
Comment: 15 pages, 20 figures
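The anisotropic SSTV regularizer described above penalizes absolute first differences of the HSI cube along the two spatial axes and the spectral axis, with separate weights per axis. A minimal sketch (the per-axis weights, with the spectral direction down-weighted, are an illustrative assumption, not the paper's tuned values):

```python
import numpy as np

def sstv(X, w=(1.0, 1.0, 0.5)):
    # Anisotropic spatial-spectral TV of an HSI cube X (rows, cols, bands):
    # weighted sum of absolute first differences along each axis.
    dx = np.abs(np.diff(X, axis=0)).sum()  # spatial, vertical
    dy = np.abs(np.diff(X, axis=1)).sum()  # spatial, horizontal
    dz = np.abs(np.diff(X, axis=2)).sum()  # spectral
    return w[0] * dx + w[1] * dy + w[2] * dz
```

The regularizer vanishes on constant cubes and stays small on piecewise smooth ones, which is why it favors clean HSI structure while leaving stripes, impulse noise, and dead pixels to the L1 term.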