Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
The affine rank minimization problem consists of finding a matrix of minimum
rank that satisfies a given system of linear equality constraints. Such
problems have appeared in the literature of a diverse set of fields including
system identification and control, Euclidean embedding, and collaborative
filtering. Although specific instances can often be solved with specialized
algorithms, the general affine rank minimization problem is NP-hard. In this
paper, we show that if a certain restricted isometry property holds for the
linear transformation defining the constraints, the minimum rank solution can
be recovered by solving a convex optimization problem, namely the minimization
of the nuclear norm over the given affine space. We present several random
ensembles of equations where the restricted isometry property holds with
overwhelming probability. The techniques used in our analysis have strong
parallels in the compressed sensing framework. We discuss how affine rank
minimization generalizes this pre-existing concept and outline a dictionary
relating concepts from cardinality minimization to those of rank minimization.
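As a concrete illustration of the convex surrogate discussed in this abstract, below is a minimal sketch of nuclear norm minimization over an affine constraint set, written with cvxpy. The function name and the representation of the measurement operator as a dense matrix acting on vec(X) are illustrative assumptions, not the authors' code.

```python
import numpy as np
import cvxpy as cp

def nuclear_norm_min(A, b, shape):
    """Sketch: minimize ||X||_* subject to A @ vec(X) == b.

    A is assumed to be a dense m x (n1*n2) matrix representing the linear
    measurement map (column-major vectorization, matching cvxpy's vec).
    """
    n1, n2 = shape
    X = cp.Variable((n1, n2))
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                      [A @ cp.vec(X) == b])
    prob.solve()  # any SDP-capable solver bundled with cvxpy, e.g. SCS
    return X.value
```

Under the restricted isometry conditions discussed in the abstract, the minimizer of this convex program coincides with the minimum-rank solution.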
Guaranteed Rank Minimization via Singular Value Projection
Minimizing the rank of a matrix subject to affine constraints is a
fundamental problem with many important applications in machine learning and
statistics. In this paper we propose a simple and fast algorithm SVP (Singular
Value Projection) for rank minimization with affine constraints (ARMP) and show
that SVP recovers the minimum rank solution for affine constraints that satisfy
the "restricted isometry property" and show robustness of our method to noise.
Our results improve upon a recent breakthrough by Recht, Fazel, and Parrilo
(RFP07) and Lee and Bresler (LB09) in three significant ways:
1) our method (SVP) is significantly simpler to analyze and easier to
implement,
2) we give recovery guarantees under strictly weaker isometry assumptions, and
3) we give geometric convergence guarantees for SVP even in the presence of noise
and, as demonstrated empirically, SVP is significantly faster on real-world and
synthetic problems.
In addition, we address the practically important problem of low-rank matrix
completion (MCP), which can be seen as a special case of ARMP. We empirically
demonstrate that our algorithm recovers low-rank incoherent matrices from an
almost optimal number of uniformly sampled entries. We make partial progress
towards proving exact recovery and provide some intuition for the strong
performance of SVP applied to matrix completion by showing a more restricted
isometry property. Our algorithm outperforms existing methods, such as those of
[RFP07, CR08, CT09, CCS08, KOM09, LB09], for ARMP and the matrix-completion
problem by an order of magnitude and is also significantly more robust to
noise.
Comment: An earlier version of this paper was submitted to NIPS-2009 on June
5, 200
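The SVP iteration amounts to projected gradient descent: take a gradient step on the least-squares data fit, then project onto the set of rank-k matrices via a truncated SVD. A minimal sketch, assuming the measurement operator is supplied as a dense matrix acting on vec(X) and using an illustrative step size, not the authors' implementation:

```python
import numpy as np

def svp(A, b, shape, k, eta=1.0, n_iters=200):
    """Sketch of Singular Value Projection for rank-constrained ARMP.

    A: dense m x (n1*n2) matrix acting on the row-major vectorization of X
    (an illustrative simplification of a general affine measurement map).
    """
    n1, n2 = shape
    X = np.zeros((n1, n2))
    for _ in range(n_iters):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(n1, n2)  # grad of 0.5*||A(X)-b||^2
        Y = X - eta * grad                                   # gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]                      # project onto rank-k matrices
    return X
```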
Recovery of Low-Rank Matrices under Affine Constraints via a Smoothed Rank Function
In this paper, the problem of matrix rank minimization under affine
constraints is addressed. State-of-the-art algorithms can only recover
matrices whose rank is much smaller than the bound that guarantees uniqueness
of the solution of this optimization problem. We propose an algorithm based on a
smooth approximation of the rank function, which practically improves recovery
limits on the rank of the solution. This approximation leads to a non-convex
program; thus, to avoid getting trapped in local solutions, we use the
following scheme. Initially, a rough approximation of the rank function subject
to the affine constraints is optimized. As the algorithm proceeds, finer
approximations of the rank are optimized and the solver is initialized with the
solution of the previous approximation until reaching the desired accuracy.
On the theoretical side, benefiting from the spherical section property, we
show that the sequence of solutions of the approximating functions converges
to the minimum rank solution. On the experimental side, it is
shown that the proposed algorithm, termed SRF standing for Smoothed Rank
Function, can recover matrices which are unique solutions of the rank
minimization problem and yet not recoverable by nuclear norm minimization.
Furthermore, we demonstrate that, in completing partially observed matrices,
the accuracy of SRF is considerably and consistently better than that of
several well-known algorithms when the number of revealed entries is close to
the minimum number of parameters that uniquely represent a low-rank matrix.
Comment: Accepted in IEEE TSP on December 4th, 201
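A minimal sketch of the coarse-to-fine scheme described in this abstract, assuming a Gaussian surrogate f_delta(X) = sum_i exp(-sigma_i^2 / (2 delta^2)) for (n - rank(X)) and a pseudo-inverse projection back onto the affine constraints; the surrogate form, step size, and delta schedule are assumptions for illustration, not the authors' exact SRF implementation:

```python
import numpy as np

def srf_sketch(A, b, shape, deltas=(4.0, 2.0, 1.0, 0.5, 0.25, 0.1),
               mu=1.0, inner=50):
    """Graduated smoothed-rank minimization sketch (assumed form).

    Each outer round maximizes the smooth surrogate for the given delta by
    gradient ascent, re-projects onto {X : A @ vec(X) = b} (A assumed to have
    full row rank), and then tightens the approximation by shrinking delta.
    """
    n1, n2 = shape
    A_pinv = np.linalg.pinv(A)
    X = (A_pinv @ b).reshape(n1, n2)              # feasible starting point
    for delta in deltas:                          # coarse -> fine smoothing
        for _ in range(inner):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            gprime = -(s / delta**2) * np.exp(-s**2 / (2 * delta**2))
            G = (U * gprime) @ Vt                 # gradient of the surrogate
            X = X + mu * delta**2 * G             # ascent step shrinks small singular values
            X = X - (A_pinv @ (A @ X.ravel() - b)).reshape(n1, n2)  # project onto constraints
    return X
```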
A Singular Value Thresholding Algorithm for Matrix Completion
This paper introduces a novel algorithm to approximate the matrix with minimum
nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood
as the convex relaxation of a rank minimization problem and arises in many important
applications as in the task of recovering a large matrix from a small subset of its entries (the famous
Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable
to large problems of this kind with over a million unknown entries. This paper develops a simple
first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in
which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices
{X^k,Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values
of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix
completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix;
the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow
the algorithm to make use of very minimal storage space and keep the computational cost of each
iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence
of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000
matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate
that our approach is amenable to very large scale problems by recovering matrices of rank about
10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are
connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we
develop a framework in which one can understand these algorithms in terms of well-known Lagrange
multiplier algorithms.
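The iteration described above alternates soft-thresholding of singular values with a step on the residuals over the observed entries. A minimal sketch for the matrix completion setting, with illustrative choices of the threshold tau and step size delta rather than the paper's recommended values:

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, n_iters=500, tol=1e-4):
    """Singular value thresholding sketch for matrix completion.

    M_obs: observed matrix with zeros at unobserved positions.
    mask:  boolean array marking the observed entries.
    """
    Y = np.zeros_like(M_obs)
    for _ in range(n_iters):
        # X^k: soft-threshold the singular values of Y^{k-1}
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Y^k: step on the residual restricted to observed entries
        residual = mask * (M_obs - X)
        Y = Y + delta * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(mask * M_obs):
            break
    return X
```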
Exploring Algorithmic Limits of Matrix Rank Minimization under Affine Constraints
Many applications require recovering a matrix of minimal rank within an
affine constraint set, with matrix completion a notable special case. Because
the problem is NP-hard in general, it is common to replace the matrix rank with
the nuclear norm, which acts as a convenient convex surrogate. While elegant
theoretical conditions elucidate when this replacement is likely to be
successful, they are highly restrictive and convex algorithms fail when the
ambient rank is too high or when the constraint set is poorly structured.
Non-convex alternatives fare somewhat better when carefully tuned; however,
convergence to locally optimal solutions remains a continuing source of
failure. Against this backdrop we derive a deceptively simple and
parameter-free probabilistic PCA-like algorithm that is capable, over a wide
battery of empirical tests, of successful recovery even at the theoretical
limit where the number of measurements equals the degrees of freedom in the
unknown low-rank matrix. Somewhat surprisingly, this is possible even when the
affine constraint set is highly ill-conditioned. While proving general recovery
guarantees remains elusive for non-convex algorithms, Bayesian-inspired or
otherwise, we nonetheless show conditions whereby the underlying cost function
has a unique stationary point located at the global optimum; no existing cost
function we are aware of satisfies this same property. We conclude with a
simple computer vision application involving image rectification and a standard
collaborative filtering benchmark.
Chordal Decomposition in Rank Minimized Semidefinite Programs with Applications to Subspace Clustering
Semidefinite programs (SDPs) often arise in relaxations of some NP-hard
problems, and if the solution of the SDP obeys certain rank constraints, the
relaxation will be tight. Decomposition methods based on chordal sparsity have
already been applied to speed up the solution of sparse SDPs, but methods for
dealing with rank constraints are underdeveloped. This paper leverages a
minimum rank completion result to decompose the rank constraint on a single
large matrix into multiple rank constraints on a set of smaller matrices. The
re-weighted heuristic is used as a proxy for rank, and the specific form of the
heuristic preserves the sparsity pattern between iterations. Implementations of
rank-minimized SDPs through interior-point and first-order algorithms are
discussed. The problem of subspace clustering is used to demonstrate the
computational improvement of the proposed method.
Comment: 6 pages, 6 figures
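The re-weighted heuristic mentioned above can be sketched in its generic, undecomposed form as repeatedly minimizing a weighted trace over the feasible set and recomputing the weight from the previous iterate; the chordal decomposition into smaller blocks is omitted here, and constraints_fn is a hypothetical caller-supplied helper returning the SDP constraints:

```python
import numpy as np
import cvxpy as cp

def reweighted_rank_sdp(constraints_fn, n, n_rounds=5, delta=1e-3):
    """Iteratively re-weighted trace heuristic for rank minimization over a
    PSD feasible set (generic sketch; decomposition into blocks omitted).
    """
    X = cp.Variable((n, n), PSD=True)
    W = np.eye(n)                # first round reduces to the plain trace heuristic
    X_val = None
    for _ in range(n_rounds):
        prob = cp.Problem(cp.Minimize(cp.trace(W @ X)), constraints_fn(X))
        prob.solve()
        X_val = X.value
        # re-weight: directions with small eigenvalues get large weights,
        # pushing them toward zero and hence lowering the rank
        W = np.linalg.inv(X_val + delta * np.eye(n))
    return X_val
```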
Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications
Robust Principal Component Analysis (RPCA) via rank minimization is a
powerful tool for recovering underlying low-rank structure of clean data
corrupted with sparse noise/outliers. In many low-level vision problems, not
only is it known that the underlying structure of clean data is low-rank, but
the exact rank of clean data is also known. Yet, when applying conventional
rank minimization for those problems, the objective function is formulated in a
way that does not fully utilize a priori target rank information about the
problems. This observation motivates us to investigate whether there is a
better alternative solution when using rank minimization. In this paper,
instead of minimizing the nuclear norm, we propose to minimize the partial sum
of singular values, which implicitly encourages the target rank constraint. Our
experimental analyses show that, when the number of samples is deficient, our
approach leads to a higher success rate than conventional rank minimization,
while the solutions obtained by the two approaches are almost identical when
the number of samples is more than sufficient. We apply our approach to various
low-level vision problems, e.g. high dynamic range imaging, motion edge
detection, photometric stereo, image alignment and recovery, and show that our
results outperform those obtained by the conventional nuclear norm rank
minimization method.
Comment: Accepted in Transactions on Pattern Analysis and Machine Intelligence
(TPAMI). To appear.
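The key operator behind partial sum minimization is the proximal step of the partial sum sum_{i>r} sigma_i(X): the largest r singular values are kept and only the remaining ones are soft-thresholded. A minimal sketch of that operator (the surrounding RPCA solver, e.g. an ADMM loop that alternates this step with an L1 shrinkage on the sparse term, is omitted):

```python
import numpy as np

def prox_partial_sum(Y, r, tau):
    """Proximal operator sketch for tau * sum_{i>r} sigma_i(X) at Y.

    The top-r singular values are preserved so the target rank is not
    penalized; singular values beyond r are soft-thresholded.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = s.copy()
    s_new[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only beyond rank r
    return (U * s_new) @ Vt
```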