112 research outputs found
Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation
Low-rank representation (LRR) is an effective method for subspace clustering
and has found wide applications in computer vision and machine learning. The
existing LRR solver is based on the alternating direction method (ADM). It
suffers from $O(n^3)$ computation complexity due to the matrix-matrix
multiplications and matrix inversions, even if partial SVD is used. Moreover,
introducing auxiliary variables also slows down the convergence. Such a heavy
computational load prevents LRR from being used in large-scale applications. In this paper, we
generalize ADM by linearizing the quadratic penalty term and allowing the
penalty to change adaptively. We also propose a novel rule to update the
penalty such that the convergence is fast. With our linearized ADM with
adaptive penalty (LADMAP) method, it is unnecessary to introduce auxiliary
variables and invert matrices. The matrix-matrix multiplications are further
alleviated by using the skinny SVD representation technique. As a result, we
arrive at an algorithm for LRR with complexity $O(rn^2)$, where $r$ is the rank
of the representation matrix. Numerical experiments verify that for LRR our
LADMAP method is much faster than state-of-the-art algorithms. Although we only
present the results on LRR, LADMAP actually can be applied to solving more
general convex programs. Comment: Manuscript accepted by NIPS 2011.
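To make the abstract's update scheme concrete, here is a minimal NumPy sketch of LADMAP applied to the LRR model min ||Z||_* + lam*||E||_{2,1} s.t. X = XZ + E. It is reconstructed from the description above, not the authors' code; the function names (svt, l21_shrink, ladmap_lrr), the parameter defaults, and the simplified penalty-update rule are all assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    r = int((s > 0).sum())
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def l21_shrink(M, tau):
    """Column-wise shrinkage: the proximal operator of tau*||.||_{2,1}."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale

def ladmap_lrr(X, lam=0.1, beta=0.1, beta_max=1e10, rho=1.9,
               tol=1e-6, max_iter=500):
    """Sketch of LADMAP for LRR: min ||Z||_* + lam*||E||_{2,1}
       s.t. X = X Z + E. Parameter defaults are illustrative."""
    d, n = X.shape
    Z = np.zeros((n, n))
    E = np.zeros((d, n))
    Y = np.zeros((d, n))                      # Lagrange multiplier
    eta = 1.02 * np.linalg.norm(X, 2) ** 2    # needs eta > ||X||_2^2
    for _ in range(max_iter):
        # Z step: linearize the quadratic penalty around the current Z,
        # then apply SVT -- no auxiliary variable, no matrix inversion.
        R = X @ Z + E - X + Y / beta
        Z = svt(Z - (X.T @ R) / eta, 1.0 / (beta * eta))
        # E step: exact proximal update for the l_{2,1} term.
        E = l21_shrink(X - X @ Z - Y / beta, lam / beta)
        # Multiplier ascent and (simplified) adaptive penalty update.
        resid = X @ Z + E - X
        Y = Y + beta * resid
        if np.linalg.norm(resid, 'fro') <= tol * np.linalg.norm(X, 'fro'):
            break
        beta = min(beta_max, rho * beta)
    return Z, E
```

The Z step is the point of the linearization: it replaces the matrix inversion an exact ADM subproblem would need with one gradient step plus singular value thresholding. The sketch uses a full SVD for brevity; the $O(rn^2)$ complexity claimed in the abstract additionally relies on the skinny SVD representation technique.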
The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices
This paper proposes scalable and fast algorithms for solving the Robust PCA
problem, namely recovering a low-rank matrix with an unknown fraction of its
entries being arbitrarily corrupted. This problem arises in many applications,
such as image processing, web data ranking, and bioinformatic data analysis. It
was recently shown that under surprisingly broad conditions, the Robust PCA
problem can be exactly solved via convex optimization that minimizes a
combination of the nuclear norm and the $\ell_1$-norm. In this paper, we apply
the method of augmented Lagrange multipliers (ALM) to solve this convex
program. As the objective function is non-smooth, we show how to extend the
classical analysis of ALM to such new objective functions and prove the
optimality of the proposed algorithms and characterize their convergence rate.
Empirically, the proposed new algorithms can be more than five times faster
than the previous state-of-the-art algorithms for Robust PCA, such as the
accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms
achieve higher precision while demanding less storage/memory. We also show
that the ALM technique can be used to solve the (related but somewhat simpler)
matrix completion problem and obtain rather promising results too. We further
prove the necessary and sufficient condition for the inexact ALM to converge
globally. Matlab code for all the algorithms discussed is available at
http://perception.csl.illinois.edu/matrix-rank/home.html Comment: Please cite
"Zhouchen Lin, Risheng Liu, and Zhixun Su, Linearized Alternating Direction
Method with Adaptive Penalty for Low Rank Representation, NIPS 2011" (available
at arXiv:1109.0367) instead, which presents a more general method called the
Linearized Alternating Direction Method. This manuscript first appeared as
University of Illinois at Urbana-Champaign technical report #UILU-ENG-09-2215
in October 2009.
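For reference, below is a minimal NumPy sketch of the inexact-ALM iteration for Robust PCA (min ||A||_* + lam*||E||_1 s.t. D = A + E) as described above. The initialization heuristics, parameter defaults, and function names (ialm_rpca, shrink, svt) are illustrative assumptions, not the paper's reference Matlab implementation.

```python
import numpy as np

def shrink(M, tau):
    """Soft thresholding: the proximal operator of tau*||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def ialm_rpca(D, lam=None, rho=1.5, tol=1e-7, max_iter=1000):
    """Sketch of inexact ALM for Robust PCA:
       min ||A||_* + lam*||E||_1  s.t.  D = A + E.
       Initializations and defaults are illustrative assumptions."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))            # common default for RPCA
    norm_two = np.linalg.norm(D, 2)
    mu = 1.25 / norm_two                          # heuristic starting penalty
    Y = D / max(norm_two, np.abs(D).max() / lam)  # scaled multiplier start
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)         # low-rank step
        E = shrink(D - A + Y / mu, lam / mu)      # sparse step
        resid = D - A - E
        Y = Y + mu * resid                        # multiplier ascent
        if np.linalg.norm(resid, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
        mu *= rho                                 # increasing penalty
    return A, E
```

Each iteration alternates a singular value thresholding step for the low-rank part with an entrywise soft-thresholding step for the sparse part, then updates the multiplier; the growth schedule of the penalty mu is the quantity whose conditions for global convergence the paper analyzes.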
- …