
    Tensor Rank is Hard to Approximate

    We prove that approximating the rank of a 3-tensor to within a factor of $1 + 1/1852 - \delta$, for any $\delta > 0$, is NP-hard over any field. We do this via a reduction from bounded occurrence 2-SAT.
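
    To make the object of the hardness result concrete, here is a minimal sketch (in Python with NumPy; the language choice and all names are mine, not the paper's) that builds a 3-tensor as a sum of rank-one outer products. The tensor rank is the smallest number of such terms, and the theorem says that even approximating it within a factor $1 + 1/1852 - \delta$ is NP-hard.

```python
import numpy as np

def rank_one_term(u, v, w):
    # Outer product u (x) v (x) w: the building block of tensor rank.
    return np.einsum("i,j,k->ijk", u, v, w)

def random_rank_r_tensor(shape, r, rng):
    # A 3-tensor of rank at most r: a sum of r rank-one terms.
    # Its exact rank is what the paper shows is NP-hard to approximate.
    n1, n2, n3 = shape
    T = np.zeros(shape)
    for _ in range(r):
        T += rank_one_term(rng.standard_normal(n1),
                           rng.standard_normal(n2),
                           rng.standard_normal(n3))
    return T

rng = np.random.default_rng(0)
T = random_rank_r_tensor((4, 4, 4), r=3, rng=rng)
print(T.shape)  # (4, 4, 4); rank(T) <= 3 by construction
```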

    Tensor completion in hierarchical tensor representations

    Compressed sensing extends from the recovery of sparse vectors from undersampled measurements via efficient algorithms to the recovery of matrices of low rank from incomplete information. Here we consider a further extension to the reconstruction of tensors of low multi-linear rank in recently introduced hierarchical tensor formats from a small number of measurements. Hierarchical tensors are a flexible generalization of the well-known Tucker representation and have the advantage that the number of degrees of freedom of a low rank tensor does not scale exponentially with the order of the tensor. While corresponding tensor decompositions can be computed efficiently via successive applications of (matrix) singular value decompositions, some important properties of the singular value decomposition do not extend from the matrix to the tensor case. This results in major computational and theoretical difficulties in designing and analyzing algorithms for low rank tensor recovery. For instance, a canonical analogue of the tensor nuclear norm is NP-hard to compute in general, which is in stark contrast to the matrix case. In this book chapter we consider versions of iterative hard thresholding schemes adapted to hierarchical tensor formats. A variant builds on methods from Riemannian optimization and uses a retraction mapping from the tangent space of the manifold of low rank tensors back to this manifold. We provide first partial convergence results based on a tensor version of the restricted isometry property (TRIP) of the measurement map. Moreover, an estimate of the number of measurements is provided that ensures the TRIP of a given tensor rank with high probability for Gaussian measurement maps.
    Comment: revised version, to be published in Compressed Sensing and Its Applications (edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral).
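
    The chapter's algorithms operate on hierarchical tensor formats; as a rough, illustrative sketch only, the Python snippet below implements iterative hard thresholding with truncated HOSVD (the simpler Tucker format) standing in for the hierarchical rank truncation. The Gaussian measurement map, all function names, and all parameter choices are my assumptions, not the authors' code.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    # Mode-n product T x_n M: apply the matrix M along axis `mode`.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_truncate(T, ranks):
    # Project T onto multilinear (Tucker) rank <= ranks via truncated HOSVD;
    # a stand-in for the hard-thresholding / retraction step of the chapter.
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = T
    for k, U in enumerate(factors):
        core = mode_mult(core, U.T, k)
    X = core
    for k, U in enumerate(factors):
        X = mode_mult(X, U, k)
    return X

def tensor_iht(A, y, shape, ranks, steps=300, mu=1.0):
    # Iterative hard thresholding: gradient step on ||A vec(X) - y||^2,
    # then truncation back onto the low-rank set. Convergence depends on
    # m, the ranks, and the step size (the TRIP-based theory governs this).
    X = np.zeros(shape)
    for _ in range(steps):
        X = hosvd_truncate(X + mu * (A.T @ (y - A @ X.ravel())).reshape(shape),
                           ranks)
    return X

# Tiny demo with a Gaussian measurement map (the setting of the TRIP estimate).
rng = np.random.default_rng(1)
shape, ranks = (5, 5, 5), (2, 2, 2)
X_true = hosvd_truncate(rng.standard_normal(shape), ranks)
m = 100  # well above the degrees of freedom of this Tucker format
A = rng.standard_normal((m, np.prod(shape))) / np.sqrt(m)
X_hat = tensor_iht(A, A @ X_true.ravel(), shape, ranks)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```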

    Low rank matrix recovery from rank one measurements

    We study the recovery of Hermitian low rank matrices $X \in \mathbb{C}^{n \times n}$ from undersampled measurements via nuclear norm minimization. We consider the particular scenario where the measurements are Frobenius inner products with random rank-one matrices of the form $a_j a_j^*$ for some measurement vectors $a_1, \dots, a_m$, i.e., the measurements are given by $y_j = \mathrm{tr}(X a_j a_j^*)$. The case where the matrix $X = x x^*$ to be recovered is of rank one reduces to the problem of phaseless estimation (from measurements $y_j = |\langle x, a_j \rangle|^2$) via the recently introduced PhaseLift approach. We derive bounds for the number $m$ of measurements that guarantee successful uniform recovery of Hermitian rank $r$ matrices, either for the vectors $a_j$, $j = 1, \dots, m$, being chosen independently at random according to a standard Gaussian distribution, or $a_j$ being sampled independently from an (approximate) complex projective $t$-design with $t = 4$. In the Gaussian case, we require $m \geq C r n$ measurements, while in the case of $4$-designs we need $m \geq C r n \log(n)$. Our results are uniform in the sense that one random choice of the measurement vectors $a_j$ guarantees recovery of all rank-$r$ matrices simultaneously with high probability. Moreover, we prove robustness of recovery under perturbation of the measurements by noise. The result for approximate $4$-designs generalizes and improves a recent bound on phase retrieval due to Gross, Kueng and Krahmer. In addition, it has applications in quantum state tomography. Our proofs employ the so-called bowling scheme, which is based on recent ideas by Mendelson and Koltchinskii.
    Comment: 24 pages.
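
    The convex program analyzed here is nuclear norm minimization subject to the rank-one measurement constraints. As a rough numerical sketch (not the authors' implementation), the snippet below solves a regularized variant by proximal gradient descent, with eigenvalue soft thresholding as the proximal step; all names and parameter choices are mine.

```python
import numpy as np

def measure(A, X):
    # y_j = tr(X a_j a_j^*) = a_j^* X a_j, with a_j the rows of A.
    return np.einsum("ji,ik,jk->j", A.conj(), X, A).real

def svt(X, tau):
    # Proximal map of tau * nuclear norm for Hermitian X:
    # soft-threshold the eigenvalues (they equal the singular values up to sign).
    w, V = np.linalg.eigh(X)
    w = np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)
    return (V * w) @ V.conj().T

def recover(A, y, n, lam, mu, steps):
    # Proximal gradient on 0.5 * ||measure(A, X) - y||^2 + lam * ||X||_*,
    # a regularized stand-in for the equality-constrained nuclear norm program.
    X = np.zeros((n, n), dtype=complex)
    for _ in range(steps):
        resid = measure(A, X) - y
        grad = np.einsum("j,ji,jk->ik", resid, A, A.conj())  # sum_j resid_j a_j a_j^*
        X = svt(X - mu * grad, mu * lam)
    return X

# Tiny demo in the PhaseLift (rank-one) case with Gaussian measurement vectors.
rng = np.random.default_rng(2)
n, m = 8, 6 * 8  # m ~ C r n with r = 1, as in the Gaussian bound
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X_true = np.outer(x, x.conj())  # X = x x*, so y_j = |<x, a_j>|^2
y = measure(A, X_true)
# Conservative step size for this sizing; tune lam/mu/steps for accuracy.
X_hat = recover(A, y, n, lam=1e-2, mu=1.0 / (m * n), steps=3000)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```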