    Tensor rank is not multiplicative under the tensor product

    The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection between restrictions and degenerations. A result of our study is that tensor rank is not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. Specifically, if a tensor t has border rank strictly smaller than its rank, then the tensor rank of t is not multiplicative under taking a sufficiently high tensor product power. The "tensor Kronecker product" from algebraic complexity theory is related to our tensor product but different: it multiplies two k-tensors to get a k-tensor. Nonmultiplicativity of the tensor Kronecker product has been known since the work of Strassen. It remains an open question whether border rank and asymptotic rank are multiplicative under the tensor product. Interestingly, lower bounds on border rank obtained from generalised flattenings (including Young flattenings) multiply under the tensor product.
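
    The gap between border rank and rank that drives the main result can be seen in a minimal example. The following sketch (our own illustration, not from the paper) builds the 2x2x2 "W tensor", which has rank 3 but border rank 2: it is the limit, as eps tends to 0, of a sequence of rank-2 tensors.

        import numpy as np

        def simple(u, v, w):
            # rank-1 ("simple") tensor u ⊗ v ⊗ w
            return np.einsum('i,j,k->ijk', u, v, w)

        e0, e1 = np.eye(2)

        # W tensor: rank 3, but border rank 2
        W = simple(e1, e0, e0) + simple(e0, e1, e0) + simple(e0, e0, e1)

        # W is the limit of the rank-2 tensors ((e0 + eps*e1)^⊗3 - e0^⊗3) / eps
        for eps in (1e-1, 1e-2, 1e-3):
            a = e0 + eps * e1
            approx = (simple(a, a, a) - simple(e0, e0, e0)) / eps
            print(eps, np.abs(approx - W).max())  # error shrinks like O(eps)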

    The border support rank of two-by-two matrix multiplication is seven

    We show that the border support rank of the tensor corresponding to two-by-two matrix multiplication is seven over the complex numbers. We do this by constructing two polynomials that vanish on all complex tensors with format four-by-four-by-four and border rank at most six, but that do not vanish simultaneously on any tensor with the same support as the two-by-two matrix multiplication tensor. This extends the work of Hauenstein, Ikenmeyer, and Landsberg. We also give two proofs that the support rank of the two-by-two matrix multiplication tensor is seven over any field: one proof using a result of De Groote saying that the decomposition of this tensor is unique up to sandwiching, and another proof via the substitution method. These results answer a question asked by Cohn and Umans. Studying the border support rank of the matrix multiplication tensor is relevant for the design of matrix multiplication algorithms, because upper bounds on the border support rank of the matrix multiplication tensor lead to upper bounds on the computational complexity of matrix multiplication, via a construction of Cohn and Umans. Moreover, support rank has applications in quantum communication complexity.
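
    To make this concrete, here is a small sketch (our own illustration, using numpy) of the two-by-two matrix multiplication tensor in its four-by-four-by-four format, together with its support. The support rank of a tensor is the smallest rank of any tensor with the same support, i.e. the same zero pattern with arbitrary nonzero entries.

        import numpy as np

        n = 2
        # <2,2,2> matrix multiplication tensor, flattened to format 4x4x4:
        # T[(i,j), (k,l), (m,p)] = 1  iff  j == k, l == m, p == i
        T = np.zeros((n * n,) * 3)
        for i in range(n):
            for j in range(n):
                for l in range(n):
                    T[i * n + j, j * n + l, l * n + i] = 1.0

        print(int(T.sum()))  # 8: the size of the support of <2,2,2>

        # sanity check: contracting T with vec(A) and vec(B) yields vec((A @ B)^T)
        A, B = np.random.randn(n, n), np.random.randn(n, n)
        C = np.einsum('abc,a,b->c', T, A.ravel(), B.ravel()).reshape(n, n)
        assert np.allclose(C, (A @ B).T)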

    The average condition number of most tensor rank decomposition problems is infinite

    The tensor rank decomposition, or canonical polyadic decomposition (CPD), is the decomposition of a tensor into a sum of rank-1 tensors. The condition number of the tensor rank decomposition measures the sensitivity of the rank-1 summands with respect to structured perturbations, that is, perturbations preserving the rank of the tensor that is decomposed. The angular condition number, on the other hand, measures the perturbations of the rank-1 summands up to scaling. We show for random rank-2 tensors with Gaussian density that the expected value of the condition number is infinite. Under a mild additional assumption, we show that the same is true for most higher ranks r ≥ 3 as well. In fact, as the dimensions of the tensor tend to infinity, asymptotically all ranks are covered by our analysis. In contrast, we show that rank-2 Gaussian tensors have finite expected angular condition number. Our results underline the high computational complexity of computing tensor rank decompositions. We discuss consequences of our results for algorithm design and for testing algorithms that compute the CPD. Finally, we supply numerical experiments.
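
    As a rough numerical companion (a sketch under the assumption that the condition number is computed, in the style of Breiding and Vannieuwenhoven's work on join decompositions, as the reciprocal of the smallest singular value of the Terracini matrix whose column blocks are orthonormal bases of the Segre tangent spaces at the rank-1 summands; the helper names are our own), one can watch the condition number blow up as two rank-1 summands coalesce:

        import numpy as np

        def orth_basis(M, tol=1e-10):
            # orthonormal basis of the column space of M
            U, s, _ = np.linalg.svd(M, full_matrices=False)
            return U[:, s > tol * s[0]]

        def segre_tangent(u, v, w):
            # spanning set of the tangent space of the Segre variety at u ⊗ v ⊗ w
            cols = [np.einsum('i,j,k->ijk', e, v, w).ravel() for e in np.eye(len(u))]
            cols += [np.einsum('i,j,k->ijk', u, e, w).ravel() for e in np.eye(len(v))]
            cols += [np.einsum('i,j,k->ijk', u, v, e).ravel() for e in np.eye(len(w))]
            return orth_basis(np.array(cols).T)

        def cpd_condition(summands):
            # 1 / sigma_min of the stacked orthonormal tangent bases
            U = np.hstack([segre_tangent(u, v, w) for (u, v, w) in summands])
            return 1.0 / np.linalg.svd(U, compute_uv=False)[-1]

        rng = np.random.default_rng(1)
        p = rng.standard_normal((3, 4))   # first rank-1 summand (u, v, w)
        dp = rng.standard_normal((3, 4))  # perturbation direction
        for t in (1.0, 1e-2, 1e-4):
            q = p + t * dp                # second summand approaches the first
            print(t, cpd_condition([tuple(p), tuple(q)]))  # grows as t -> 0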

    Nondeterministic quantum communication complexity: the cyclic equality game and iterated matrix multiplication

    We study nondeterministic multiparty quantum communication with a quantum generalisation of broadcasts. We show that, with number-in-hand classical inputs, the communication complexity of a Boolean function in this communication model equals the logarithm of the support rank of the corresponding tensor, whereas the approximation complexity in this model equals the logarithm of the border support rank. This characterisation allows us to prove a log-rank conjecture posed by Villagra et al. for nondeterministic multiparty quantum communication with message-passing. The support rank characterisation of the communication model connects quantum communication complexity intimately to the theory of asymptotic entanglement transformation and algebraic complexity theory. In this context, we introduce the graphwise equality problem. For a cycle graph, the complexity of this communication problem is closely related to the complexity of the computational problem of multiplying matrices; more precisely, it equals the logarithm of the asymptotic support rank of the iterated matrix multiplication tensor. We employ Strassen's laser method to show that asymptotically there exist nontrivial protocols for every odd-player cyclic equality problem. We exhibit an efficient protocol for the 5-player problem for small inputs, and we show how Young flattenings yield nontrivial complexity lower bounds.
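
    To make the iterated matrix multiplication tensor concrete, the following sketch (our own illustration; the helper name imm_tensor is ours) builds the k-party tensor whose entries are the coefficients of trace(A_1 A_2 ... A_k); its asymptotic support rank is what characterises the cyclic equality problem above.

        import numpy as np

        def imm_tensor(n, k):
            # k-party iterated matrix multiplication tensor: the coefficient
            # tensor of trace(A_1 A_2 ... A_k), one leg of dimension n*n per player
            T = np.zeros((n * n,) * k)
            for idx in np.ndindex(*([n] * k)):
                pos = tuple(idx[t] * n + idx[(t + 1) % k] for t in range(k))
                T[pos] = 1.0
            return T

        # sanity check for the 4-player tensor
        n, k = 2, 4
        T = imm_tensor(n, k)
        As = [np.random.randn(n, n) for _ in range(k)]
        val = np.einsum('abcd,a,b,c,d->', T, *[A.ravel() for A in As])
        assert np.isclose(val, np.trace(As[0] @ As[1] @ As[2] @ As[3]))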

    Decomposing Overcomplete 3rd Order Tensors using Sum-of-Squares Algorithms

    Tensor rank and low-rank tensor decompositions have many applications in learning and complexity theory. Most known algorithms use unfoldings of tensors and can only handle rank up to n^⌊p/2⌋ for a p-th order tensor in ℝ^(n^p). Previously, no efficient algorithm could decompose 3rd order tensors when the rank is super-linear in the dimension. Using ideas from the sum-of-squares hierarchy, we give the first quasi-polynomial time algorithm that can decompose a random 3rd order tensor when the rank is as large as n^(3/2)/polylog(n). We also give a polynomial time algorithm for certifying the injective norm of random low rank tensors. Our tensor decomposition algorithm exploits the relationship between the injective norm and the tensor components. The proof relies on interesting tools for decoupling random variables to prove better matrix concentration bounds, which can be useful in other settings.
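
    For context, the classical baseline that this rank threshold improves on is simultaneous diagonalization (Jennrich's algorithm), which recovers the components of a generic 3rd order tensor only when the rank is at most the dimension. A minimal sketch (our own illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n, r = 8, 5  # undercomplete: rank r <= dimension n
        A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
        T = np.einsum('ir,jr,kr->ijk', A, B, C)  # T = sum of a_r ⊗ b_r ⊗ c_r

        # Two random contractions of the third leg give M1 = A D1 B^T and
        # M2 = A D2 B^T with diagonal D1, D2, so the eigenvectors of
        # M1 pinv(M2) = A D1 D2^{-1} pinv(A) with nonzero eigenvalue
        # are the columns of A (up to scale and permutation).
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        M1 = np.einsum('ijk,k->ij', T, x)
        M2 = np.einsum('ijk,k->ij', T, y)
        vals, vecs = np.linalg.eig(M1 @ np.linalg.pinv(M2))
        U = np.real(vecs[:, np.argsort(-np.abs(vals))[:r]])

        # each column of A is matched (up to sign) by some recovered vector
        cos = np.abs(U.T @ A) / np.outer(np.linalg.norm(U, axis=0),
                                         np.linalg.norm(A, axis=0))
        print(np.round(cos.max(axis=0), 4))  # ~1.0 for every column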