Low rank matrix recovery from rank one measurements
We study the recovery of Hermitian low rank matrices X from undersampled measurements via nuclear norm minimization. We consider the particular scenario where the measurements are Frobenius inner products with random rank-one matrices of the form a_j a_j^* for some measurement vectors a_1, ..., a_m, i.e., the measurements are given by y_j = tr(X a_j a_j^*). The case where the matrix X to be recovered is of rank one reduces to the problem of phaseless estimation (from measurements y_j = |<a_j, x>|^2), via the PhaseLift approach, which has been introduced recently. We derive bounds for the number m of measurements that guarantee successful uniform recovery of Hermitian rank r matrices, either for the vectors a_j, j = 1, ..., m, being chosen independently at random according to a standard Gaussian distribution, or being sampled independently from an (approximate) complex projective t-design with t = 4. In the Gaussian case, we require m >= C r n measurements, while in the case of 4-designs we need m >= C r n log(n). Our results are uniform in the sense that one random choice of the measurement vectors a_1, ..., a_m guarantees recovery of all rank r matrices simultaneously with high probability. Moreover, we prove robustness of recovery under perturbation of the measurements by noise. The result for approximate 4-designs generalizes and improves a recent bound on phase retrieval due to Gross, Kueng and Krahmer. In addition, it has applications in quantum state tomography. Our proofs employ the so-called bowling scheme, which is based on recent ideas by Mendelson and Koltchinskii.
Comment: 24 pages
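The rank-one measurement model described in the abstract above can be sketched numerically. The following is a minimal illustration (the dimensions, variable names, and random seed are our own choices, not the paper's): it builds a Hermitian rank-r matrix, draws Gaussian measurement vectors, and checks that the Frobenius inner product with a rank-one matrix equals the quadratic form in the measurement vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 8, 2, 40  # ambient dimension, rank, number of measurements

# Hermitian rank-r ground truth X = U U^* (positive semidefinite by construction)
U = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
X = U @ U.conj().T

# Complex Gaussian measurement vectors a_1, ..., a_m (rows of A)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Measurements y_j = tr(X a_j a_j^*), computed for all j at once;
# the trace is real because X is Hermitian
y = np.einsum('ji,ik,jk->j', A.conj(), X, A).real

# Equivalent form: y_j = a_j^* X a_j, the quadratic form in a_j
y_alt = np.array([a.conj() @ X @ a for a in A]).real
assert np.allclose(y, y_alt)
```

This only simulates the measurement map; actually recovering X from y would additionally require solving the nuclear norm minimization program, e.g. with a semidefinite programming solver.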
Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees
Recently there has been a line of research proposing to employ Spectral Clustering (SC) to segment (group) high-dimensional structural data such as those (approximately) lying on subspaces or low-dimensional manifolds. (Throughout the paper, we use segmentation, clustering, and grouping, and their verb forms, interchangeably. Following [liu2010robust], we use the term "subspace" to denote both linear subspaces and affine subspaces; there is a trivial conversion between the two as mentioned therein.) By learning the affinity matrix in the form of a sparse reconstruction, techniques proposed in this vein often considerably boost performance in subspace settings where traditional SC can fail. Despite the
success, fundamental problems have been left unsolved: the spectral properties of the learned affinity matrix cannot be gauged in advance, and there is often an ugly symmetrization step that post-processes the affinity for SC input. Hence we advocate enforcing the symmetric positive semidefinite constraint explicitly during learning (Low-Rank Representation with Positive SemiDefinite constraint, or LRR-PSD), and show that it can in fact be solved efficiently by a specialized scheme rather than by general-purpose SDP solvers, which usually scale poorly. We provide rigorous mathematical derivations showing that, in its canonical form, LRR-PSD is equivalent to the recently proposed Low-Rank Representation (LRR) scheme [liu2010robust], and hence offer theoretical and practical insights into both LRR-PSD and LRR, inviting future research. As for computational cost, our proposal is at most comparable to that of LRR, if not lower. We validate our theoretical analysis and optimization scheme by experiments on both synthetic and real data sets.
Comment: 10 pages, 4 figures. Accepted by ICDM Workshop on Optimization Based Methods for Emerging Data Mining Problems (OEDM), 2010. Main proof simplified and typos corrected. Experimental data slightly added.
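The symmetrization post-processing step that the abstract above criticizes can be sketched as follows. This is an illustrative convention only (the rule W = (|Z| + |Z|^T)/2 is one common way to symmetrize a learned representation matrix before spectral clustering; the matrix Z here is random, not the output of LRR):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# A learned representation matrix is generally asymmetric
Z = rng.standard_normal((n, n))

# Conventional post-processing: build a symmetric, nonnegative affinity
# matrix for spectral clustering from the (asymmetric) representation
W = 0.5 * (np.abs(Z) + np.abs(Z).T)

assert np.allclose(W, W.T)
```

Enforcing a symmetric positive semidefinite constraint during learning, as LRR-PSD proposes, removes the need for this ad hoc step because the learned affinity is already a valid SC input.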