23,944 research outputs found
Tensor Rank is Hard to Approximate
We prove that approximating the rank of a 3-tensor to within a factor of 1 + 1/1852 - delta, for any delta > 0, is NP-hard over any field. We do this via a reduction from bounded-occurrence 2-SAT.
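For context, the (CP) rank of a 3-tensor is the minimum number of rank-one terms, i.e. outer products of three vectors, that sum to it. A minimal numpy sketch of this definition (the vectors are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# A rank-1 3-tensor is an outer product of three vectors; the rank of a
# 3-tensor T is the smallest r such that T is a sum of r such terms.
u, v, w = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
T1 = np.einsum("i,j,k->ijk", u, v, w)   # a rank-1 term
T2 = np.einsum("i,j,k->ijk", v, u, w)   # another rank-1 term
T = T1 + T2                             # rank of T is at most 2

# Verifying a candidate decomposition is easy (sum the terms and compare);
# deciding the exact rank of a general 3-tensor is what the paper shows
# to be NP-hard to approximate.
assert T.shape == (2, 2, 2)
# Every mode-1 unfolding of a rank-1 tensor is a rank-1 matrix.
assert np.linalg.matrix_rank(T1.reshape(2, -1)) == 1
```

Note the asymmetry the hardness result exploits: checking a given decomposition is cheap, while finding the minimal one is not.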
Tensor completion in hierarchical tensor representations
Compressed sensing extends from the recovery of sparse vectors from
undersampled measurements via efficient algorithms to the recovery of matrices
of low rank from incomplete information. Here we consider a further extension
to the reconstruction of tensors of low multi-linear rank in recently
introduced hierarchical tensor formats from a small number of measurements.
Hierarchical tensors are a flexible generalization of the well-known Tucker
representation, which have the advantage that the number of degrees of freedom
of a low rank tensor does not scale exponentially with the order of the tensor.
While corresponding tensor decompositions can be computed efficiently via
successive applications of (matrix) singular value decompositions, some
important properties of the singular value decomposition do not extend from the
matrix to the tensor case. This results in major computational and theoretical
difficulties in designing and analyzing algorithms for low rank tensor
recovery. For instance, a canonical analogue of the tensor nuclear norm is
NP-hard to compute in general, which is in stark contrast to the matrix case.
In this book chapter we consider versions of iterative hard thresholding
schemes adapted to hierarchical tensor formats. A variant builds on methods
from Riemannian optimization and uses a retraction mapping from the tangent
space of the manifold of low rank tensors back to this manifold. We provide
first partial convergence results based on a tensor version of the restricted
isometry property (TRIP) of the measurement map. Moreover, an estimate of the
number of measurements is provided that ensures the TRIP of a given tensor rank
with high probability for Gaussian measurement maps.
Comment: revised version, to be published in Compressed Sensing and Its Applications (edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral).
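The successive application of matrix SVDs mentioned in the abstract can be sketched for the Tucker case: truncating a tensor to low multilinear rank by one SVD per mode-unfolding. This is a minimal numpy illustration of the idea, not the paper's hierarchical-tensor (HT) implementation or its iterative hard thresholding scheme:

```python
import numpy as np

def truncate_multilinear(T, ranks):
    """Project T onto (at most) the given multilinear (Tucker) ranks,
    using one matrix SVD per mode -- a sketch of the successive-SVD
    computation, not the hierarchical format from the chapter."""
    for mode, r in enumerate(ranks):
        # Mode unfolding: bring `mode` to the front and flatten the rest.
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        U = U[:, :r]                 # leading r left singular vectors
        M = U @ (U.conj().T @ M)     # project onto their span
        rest = tuple(int(s) for s in np.delete(T.shape, mode))
        T = np.moveaxis(M.reshape((T.shape[mode],) + rest), 0, mode)
    return T

# A tensor of exact multilinear rank (1,1,1) is left unchanged.
a = np.array([1.0, -2.0, 0.5])
T = np.einsum("i,j,k->ijk", a, a, a)
assert np.allclose(truncate_multilinear(T, (1, 1, 1)), T)
```

In an iterative hard thresholding scheme, a truncation of this kind plays the role of the thresholding step applied after each gradient update on the measurement residual.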
Low rank matrix recovery from rank one measurements
We study the recovery of Hermitian low rank matrices from undersampled measurements via nuclear norm minimization. We
consider the particular scenario where the measurements are Frobenius inner
products with random rank-one matrices of the form $a_j a_j^*$ for some measurement vectors $a_j$, i.e., the measurements are given by $b_j = \mathrm{tr}(X a_j a_j^*)$. The case where the matrix $X = xx^*$ to be recovered is of rank one reduces to the problem of phaseless estimation (from measurements $b_j = |\langle x, a_j \rangle|^2$) via the PhaseLift approach, which has been introduced recently. We derive bounds for the number $m$ of measurements that guarantee successful uniform recovery of Hermitian rank-$r$ matrices, either for the vectors $a_j$, $j = 1, \dots, m$, being chosen independently at random according to a standard Gaussian distribution, or being sampled independently from an (approximate) complex projective $t$-design with $t = 4$. In the Gaussian case, we require $m \geq C r n$ measurements, while in the case of $4$-designs we need $m \geq C r n \log(n)$. Our results are uniform in the sense that one random choice of the measurement vectors guarantees recovery of all rank-$r$ matrices simultaneously with high probability. Moreover, we prove robustness of recovery under perturbation of the measurements by noise. The result for approximate $4$-designs generalizes and improves a recent bound on phase retrieval due to Gross, Kueng and Krahmer. In addition, it has applications in quantum state tomography. Our proofs employ the so-called bowling scheme which is based on recent ideas by Mendelson and Koltchinskii.
Comment: 24 pages.
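The rank-one measurement model described in the abstract, and its reduction to phase retrieval when the unknown matrix itself has rank one, can be sketched as follows. A numpy illustration only: the dimensions and the ground-truth construction are assumptions for the example, not values from the paper, and the convex recovery step itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 6, 40, 2  # illustrative dimensions, not from the paper

# Hermitian rank-r ground truth X = G G^* (an arbitrary construction).
G = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
X = G @ G.conj().T

# Gaussian measurement vectors a_j (rows of A); each measurement is the
# Frobenius inner product with a_j a_j^*:  b_j = tr(X a_j a_j^*) = a_j^* X a_j.
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = np.real(np.einsum("ji,ik,jk->j", A.conj(), X, A))  # real, since X is Hermitian

# Rank-one case X = x x^*: the same measurements reduce to |<a_j, x>|^2,
# i.e. exactly the phaseless (PhaseLift) setting.
x = G[:, 0]
b_rank1 = np.real(np.einsum("ji,ik,jk->j", A.conj(),
                            x[:, None] @ x[None, :].conj(), A))
assert np.allclose(b_rank1, np.abs(A.conj() @ x) ** 2)
```

Recovery itself would then minimize the nuclear norm of a Hermitian variable subject to these linear constraints, which for Hermitian positive semidefinite matrices is the trace.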