
    Low rank matrix recovery from rank one measurements

    We study the recovery of Hermitian low-rank matrices $X \in \mathbb{C}^{n \times n}$ from undersampled measurements via nuclear norm minimization. We consider the particular scenario where the measurements are Frobenius inner products with random rank-one matrices of the form $a_j a_j^*$ for some measurement vectors $a_1, \dots, a_m$, i.e., the measurements are given by $y_j = \mathrm{tr}(X a_j a_j^*)$. The case where the matrix $X = x x^*$ to be recovered is of rank one reduces to the problem of phaseless estimation (from measurements $y_j = |\langle x, a_j \rangle|^2$) via the PhaseLift approach, which has been introduced recently. We derive bounds for the number $m$ of measurements that guarantee successful uniform recovery of Hermitian rank-$r$ matrices, either for the vectors $a_j$, $j = 1, \dots, m$, being chosen independently at random according to a standard Gaussian distribution, or for $a_j$ being sampled independently from an (approximate) complex projective $t$-design with $t = 4$. In the Gaussian case, we require $m \geq C r n$ measurements, while in the case of $4$-designs we need $m \geq C r n \log(n)$. Our results are uniform in the sense that one random choice of the measurement vectors $a_j$ guarantees recovery of all rank-$r$ matrices simultaneously with high probability. Moreover, we prove robustness of recovery under perturbation of the measurements by noise. The result for approximate $4$-designs generalizes and improves a recent bound on phase retrieval due to Gross, Kueng and Krahmer. In addition, it has applications in quantum state tomography. Our proofs employ the so-called bowling scheme, which is based on recent ideas by Mendelson and Koltchinskii.
    Comment: 24 pages
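
    As a concrete illustration of the recovery problem above (not the paper's code), the sketch below sets up Gaussian rank-one measurements $y_j = \mathrm{tr}(X a_j a_j^*)$ and solves the nuclear norm minimization with the cvxpy modeling library, which is an assumed dependency; the sizes n, r, m are illustrative toy values, not the constants from the stated bounds.

```python
import numpy as np
import cvxpy as cp  # assumed convex-modeling dependency, not part of the paper

# Toy sizes; the paper's Gaussian bound has the form m >= C*r*n
n, r, m = 8, 1, 60

rng = np.random.default_rng(0)

# Ground-truth Hermitian rank-r matrix X = U U^*
U = (rng.normal(size=(n, r)) + 1j * rng.normal(size=(n, r))) / np.sqrt(2)
X_true = U @ U.conj().T

# Standard complex Gaussian measurement vectors a_j and measurements
# y_j = tr(X a_j a_j^*) = a_j^* X a_j (real, since X is Hermitian)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
y = np.real(np.einsum('ji,ik,jk->j', A.conj(), X_true, A))

# Nuclear norm minimization subject to the rank-one measurement constraints
X = cp.Variable((n, n), hermitian=True)
constraints = [
    cp.real(cp.trace(np.outer(A[j], A[j].conj()) @ X)) == y[j]
    for j in range(m)
]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve(solver=cp.SCS)

print("relative recovery error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```

    With Gaussian measurements and $m$ on the order of $rn$, the recovered matrix should match the ground truth up to solver tolerance, mirroring (but of course not proving) the uniform recovery guarantee.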

    Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees

    Recently, a line of research has proposed employing Spectral Clustering (SC) to segment (group) high-dimensional structural data, such as data (approximately) lying on subspaces or low-dimensional manifolds. (Throughout the paper, we use segmentation, clustering, and grouping, and their verb forms, interchangeably. Following [liu2010robust], we use the term "subspace" to denote both linear and affine subspaces; there is a trivial conversion between the two, as mentioned therein.) By learning the affinity matrix in the form of sparse reconstruction, techniques proposed in this vein often considerably boost performance in subspace settings where traditional SC can fail. Despite this success, fundamental problems have been left unsolved: the spectral properties of the learned affinity matrix cannot be gauged in advance, and there is often one ugly symmetrization step that post-processes the affinity for SC input. Hence we advocate enforcing the symmetric positive semidefinite constraint explicitly during learning (Low-Rank Representation with Positive SemiDefinite constraint, or LRR-PSD), and show that, in fact, it can be solved efficiently by a dedicated scheme instead of general-purpose SDP solvers, which usually scale up poorly. We provide rigorous mathematical derivations to show that, in its canonical form, LRR-PSD is equivalent to the recently proposed Low-Rank Representation (LRR) scheme [liu2010robust], and hence offer theoretical and practical insights into both LRR-PSD and LRR, inviting future research. As for computational cost, our proposal is at most comparable to that of LRR, if not lower. We validate our theoretical analysis and optimization scheme by experiments on both synthetic and real data sets.
    Comment: 10 pages, 4 figures. Accepted by ICDM Workshop on Optimization Based Methods for Emerging Data Mining Problems (OEDM), 2010. Main proof simplified and typos corrected. Experimental data slightly added.
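
    To make the LRR connection concrete: in the noiseless canonical form, $\min_Z \|Z\|_*$ subject to $X = XZ$ is known to have the closed-form shape-interaction solution $Z = V_r V_r^T$, where $X = U_r \Sigma_r V_r^T$ is a skinny SVD of the data, and this $Z$ is automatically symmetric positive semidefinite, so no post-hoc symmetrization is needed. Below is a minimal numpy sketch under that noiseless assumption; the function name and toy data are ours, not the paper's.

```python
import numpy as np

def lrr_noiseless(X, tol=1e-10):
    """Closed-form minimizer of min ||Z||_* s.t. X = X Z.

    The solution is the shape-interaction matrix Z = V_r V_r^T, which is
    symmetric positive semidefinite -- consistent with the LRR-PSD / LRR
    equivalence discussed above.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))  # numerical rank of the data matrix
    Vr = Vt[:r].T
    return Vr @ Vr.T

# Toy data: columns drawn from two independent 2-D subspaces of R^10
rng = np.random.default_rng(0)
B1, B2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
X = np.hstack([B1 @ rng.normal(size=(2, 20)), B2 @ rng.normal(size=(2, 20))])

Z = lrr_noiseless(X)
W = np.abs(Z)  # already-symmetric affinity, ready for spectral clustering
```

    For independent subspaces, $Z$ is block-diagonal up to permutation, so $|Z|$ can be fed directly to spectral clustering as a symmetric affinity matrix without any symmetrization step.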