54 research outputs found

    A condition number for the tensor rank decomposition

    The tensor rank decomposition problem consists of recovering the unique set of parameters representing a robustly identifiable low-rank tensor when the coordinate representation of the tensor is presented as input. A condition number for this problem, measuring the sensitivity of the parameters to an infinitesimal change to the tensor, is introduced and analyzed. It is demonstrated that the absolute condition number coincides with the inverse of the least singular value of Terracini's matrix. Several basic properties of this condition number are investigated. Comment: 45 pages, 4 figures
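    The Terracini characterization lends itself to a direct numerical sketch. The following is a hypothetical illustration, not the paper's code (function names and the exact normalization are assumptions): for each rank-1 term, build an orthonormal basis of the tangent space of the Segre variety at that term, stack the blocks, and invert the least singular value.

```python
import numpy as np

def tangent_block(a, b, c):
    # Columns spanning the tangent space of the Segre variety at a (x) b (x) c:
    # e_j (x) b (x) c, a (x) e_j (x) c, and a (x) b (x) e_j for all basis vectors e_j.
    n1, n2, n3 = a.size, b.size, c.size
    cols = []
    for j in range(n1):
        cols.append(np.einsum('i,j,k->ijk', np.eye(n1)[j], b, c).ravel())
    for j in range(n2):
        cols.append(np.einsum('i,j,k->ijk', a, np.eye(n2)[j], c).ravel())
    for j in range(n3):
        cols.append(np.einsum('i,j,k->ijk', a, b, np.eye(n3)[j]).ravel())
    return np.column_stack(cols)

def terracini_condition(A, B, C):
    # Absolute condition number of the decomposition sum_i a_i (x) b_i (x) c_i,
    # taken as 1 / sigma_min of a Terracini-style matrix whose blocks are
    # orthonormal bases of the tangent spaces at the rank-1 terms.
    blocks = []
    for i in range(A.shape[1]):
        M = tangent_block(A[:, i], B[:, i], C[:, i])
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        blocks.append(U[:, s > 1e-10 * s[0]])  # orthonormal basis of col(M)
    sigma = np.linalg.svd(np.hstack(blocks), compute_uv=False)
    return 1.0 / sigma[-1]
```

    For a single rank-1 term the stacked matrix is itself orthonormal, so the condition number is exactly 1; for higher ranks it grows as the tangent spaces become closer to linearly dependent.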

    Most secant varieties of tangential varieties to Veronese varieties are nondefective

    We prove a conjecture stated by Catalisano, Geramita, and Gimigliano in 2002, which claims that the secant varieties of tangential varieties to the d-th Veronese embedding of the projective n-space \mathbb{P}^n have the expected dimension, modulo a few well-known exceptions. As Bernardi, Catalisano, Gimigliano, and Idà demonstrated that the proof of this conjecture may be reduced to the case of cubics, i.e., d=3, the main contribution of this work is the resolution of this base case. The proposed proof proceeds by induction on the dimension n of the projective space via a specialization argument. This reduces the proof to a large number of initial cases for the induction, which were settled using a computer-assisted proof. The individual base cases were computationally challenging problems. Indeed, the largest base case required us to deal with the tangential variety to the third Veronese embedding of \mathbb{P}^{79} in \mathbb{P}^{88559}. Comment: 25 pages, 2 figures, extended the introduction, and added a C++ code as an ancillary file
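    For orientation (standard notation, assumed here rather than quoted from the paper): the tangential variety \tau(V_{n,d}) of the d-th Veronese embedding of \mathbb{P}^n has dimension 2n, so its s-th secant variety has expected dimension

```latex
\operatorname{expdim} \sigma_s\bigl(\tau(V_{n,d})\bigr)
  \;=\; \min\left\{\, s(2n+1) - 1,\ \binom{n+d}{n} - 1 \,\right\},
```

    and nondefectivity means this expected dimension is attained. The ambient dimension is consistent with the largest base case above: for n = 79 and d = 3, \binom{82}{3} - 1 = 88559.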

    A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem

    The canonical tensor rank approximation problem (TAP) consists of approximating a real-valued tensor by one of low canonical rank, which is a challenging non-linear, non-convex, constrained optimization problem, where the constraint set forms a non-smooth semi-algebraic set. We introduce a Riemannian Gauss-Newton method with trust region for solving small-scale, dense TAPs. The novelty of our approach is threefold. First, we parametrize the constraint set as the Cartesian product of Segre manifolds, thereby formulating the TAP as a Riemannian optimization problem, and we argue why this parametrization is among the theoretically best possible. Second, an original ST-HOSVD-based retraction operator is proposed. Third, we introduce a hot restart mechanism that efficiently detects when the optimization process is tending to an ill-conditioned tensor rank decomposition and which often yields a quick escape path from such spurious decompositions. Numerical experiments show improvements of up to three orders of magnitude in terms of the expected time to compute a successful solution over existing state-of-the-art methods.
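    To illustrate the least-squares structure that a Gauss-Newton method exploits here, the following sketch runs plain Euclidean Gauss-Newton on the factor-matrix parametrization of the CPD. It is a simplified stand-in with assumed names, and deliberately omits the paper's Riemannian geometry, ST-HOSVD retraction, trust region, and hot restarts:

```python
import numpy as np

def cpd_eval(A, B, C):
    # Evaluate sum_i a_i (x) b_i (x) c_i from the factor matrices.
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cpd_jacobian(A, B, C):
    # Jacobian of vec(cpd_eval) w.r.t. the stacked factor entries,
    # ordered as [A column by column, then B, then C].
    (n1, r), n2, n3 = A.shape, B.shape[0], C.shape[0]
    I1, I2, I3 = np.eye(n1), np.eye(n2), np.eye(n3)
    cols = []
    for i in range(r):
        for j in range(n1):
            cols.append(np.einsum('i,j,k->ijk', I1[j], B[:, i], C[:, i]).ravel())
    for i in range(r):
        for j in range(n2):
            cols.append(np.einsum('i,j,k->ijk', A[:, i], I2[j], C[:, i]).ravel())
    for i in range(r):
        for j in range(n3):
            cols.append(np.einsum('i,j,k->ijk', A[:, i], B[:, i], I3[j]).ravel())
    return np.column_stack(cols)

def gauss_newton_step(T, A, B, C):
    # One Euclidean Gauss-Newton step for min ||T - cpd_eval(A, B, C)||_F;
    # lstsq returns the minimum-norm solution, which absorbs the rank
    # deficiency caused by the scaling indeterminacies of the CPD.
    res = (T - cpd_eval(A, B, C)).ravel()
    p, *_ = np.linalg.lstsq(cpd_jacobian(A, B, C), res, rcond=None)
    (n1, r), n2 = A.shape, B.shape[0]
    dA = p[:n1 * r].reshape(n1, r, order='F')
    dB = p[n1 * r:(n1 + n2) * r].reshape(n2, r, order='F')
    dC = p[(n1 + n2) * r:].reshape(-1, r, order='F')
    return A + dA, B + dB, C + dC
```

    Near a well-conditioned zero-residual decomposition this iteration converges rapidly; the ill-conditioned decompositions targeted by the hot-restart mechanism are precisely those for which the least-squares subproblem becomes numerically unstable.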

    The condition number of join decompositions

    The join set of a finite collection of smooth embedded submanifolds of a common vector space is defined as their Minkowski sum. Join decompositions generalize some ubiquitous decompositions in multilinear algebra, namely tensor rank, Waring, partially symmetric rank and block term decompositions. This paper examines the numerical sensitivity of join decompositions to perturbations; specifically, we consider the condition number for general join decompositions. It is characterized as a distance to a set of ill-posed points in a supplementary product of Grassmannians. We prove that this condition number can be computed efficiently as the smallest singular value of an auxiliary matrix. For some special join sets, we characterize the behavior of sequences in the join set converging to the latter's boundary points. Finally, we specialize our discussion to the tensor rank and Waring decompositions and provide several numerical experiments confirming the key results.
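    In symbols (notation assumed here, matching the description above rather than quoting the paper): for a join decomposition x = x_1 + ... + x_k with x_i on the submanifold M_i, let U_i be a matrix whose columns form an orthonormal basis of the tangent space T_{x_i} M_i; the condition number is then

```latex
\kappa(x_1, \dots, x_k)
  \;=\; \frac{1}{\sigma_{\min}\!\left( \begin{bmatrix} U_1 & U_2 & \cdots & U_k \end{bmatrix} \right)},
```

    which is infinite precisely when the tangent spaces fail to span a subspace of the expected dimension, i.e., at the ill-posed points referred to above.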

    On the average condition number of tensor rank decompositions

    We compute the expected value of powers of the geometric condition number of random tensor rank decompositions. It is shown in particular that the expected value of the condition number of n_1 \times n_2 \times 2 tensors with a random rank-r decomposition, given by factor matrices with independent and identically distributed standard normal entries, is infinite. This entails that it is expected and probable that such a rank-r decomposition is sensitive to perturbations of the tensor. Moreover, it provides concrete further evidence that tensor decomposition can be a challenging problem, also from the numerical point of view. On the other hand, we provide strong theoretical and empirical evidence that tensors of size n_1 \times n_2 \times n_3 with all n_1, n_2, n_3 \ge 3 have a finite average condition number. This suggests there exists a gap in the expected sensitivity of tensors between those of format n_1 \times n_2 \times 2 and other order-3 tensors. For establishing these results, we show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions with an infinite geometric condition number is bounded from below by the inverse of this condition number. That is, we prove one inequality towards a so-called condition number theorem for the tensor rank decomposition.
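    The final claim can be written as a one-sided "condition number theorem" (notation assumed): with \kappa(d) the geometric condition number of a decomposition d and \Sigma the locus of ill-posed decompositions,

```latex
\operatorname{dist}_w(d, \Sigma) \;\geq\; \frac{1}{\kappa(d)},
```

    where \operatorname{dist}_w is the natural weighted distance; a full condition number theorem would additionally bound \operatorname{dist}_w from above by a constant multiple of 1/\kappa(d).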

    Convergence analysis of Riemannian Gauss-Newton methods and its connection with the geometric condition number

    We obtain estimates of the multiplicative constants appearing in local convergence results of the Riemannian Gauss-Newton method for least squares problems on manifolds and relate them to the geometric condition number of [P. Bürgisser and F. Cucker, Condition: The Geometry of Numerical Algorithms, 2013].

    The condition number of singular subspaces, revisited

    I revisit the condition number of computing left and right singular subspaces from [J.-G. Sun, Perturbation analysis of singular subspaces and deflating subspaces, Numer. Math. 73(2), pp. 235--263, 1996]. For real and complex matrices, I present an alternative computation of this condition number in the Euclidean distance on the input space of matrices and the chordal, Grassmann, and Procrustes distances on the output Grassmannian manifold of linear subspaces. Up to a small factor, this condition number equals the inverse of the minimum singular value gap between the singular values corresponding to the selected singular subspace and those not selected. Comment: 16 pages
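    The gap characterization is straightforward to sketch (a hypothetical illustration; the "small factor" and the dependence on the chosen output distance are omitted here):

```python
import numpy as np

def subspace_condition(A, k):
    # Condition number (up to a small factor) of the singular subspaces
    # spanned by the k dominant singular vectors of A: the inverse of the
    # minimum gap between the selected and non-selected singular values.
    s = np.linalg.svd(A, compute_uv=False)  # sorted in descending order
    if k >= s.size:
        raise ValueError("k must leave at least one singular value unselected")
    return 1.0 / (s[k - 1] - s[k])
```

    As the selected and unselected singular values coalesce, the gap closes and the condition number blows up, reflecting that the corresponding subspaces are no longer well determined.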

    Effective criteria for specific identifiability of tensors and forms

    In applications where the tensor rank decomposition arises, one often relies on its identifiability properties for interpreting the individual rank-1 terms appearing in the decomposition. Several criteria for identifiability have been proposed in the literature; however, few results exist on how frequently they are satisfied. We propose to call a criterion effective if it is satisfied on a dense, open subset of the smallest semi-algebraic set enclosing the set of rank-r tensors. We analyze the effectiveness of Kruskal's criterion when it is combined with reshaping. It is proved that this criterion is effective for both real and complex tensors in its entire range of applicability, which is usually much smaller than the smallest typical rank. Our proof explains when reshaping-based algorithms for computing tensor rank decompositions may be expected to recover the decomposition. Specializing the analysis to symmetric tensors or forms reveals that the reshaped Kruskal criterion may even be effective up to the smallest typical rank for some third, fourth, and sixth order symmetric tensors of small dimension, as well as for binary forms of degree at least three. We extend this result to 4 \times 4 \times 4 \times 4 symmetric tensors by analyzing the Hilbert function, resulting in a criterion for symmetric identifiability that is effective up to symmetric rank 8, which is optimal. Comment: 31 pages, 2 Macaulay2 codes
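    Kruskal's classical criterion itself (the sufficient condition underlying the reshaped variant studied in the paper) is easy to check numerically. This sketch is illustrative, with assumed function names, and uses brute force for the k-rank:

```python
import numpy as np
from itertools import combinations

def kruskal_rank(A, tol=1e-10):
    # k-rank of A: the largest k such that EVERY subset of k columns is
    # linearly independent (brute force; fine for small factor matrices).
    n = A.shape[1]
    for k in range(n, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == k
               for c in combinations(range(n), k)):
            return k
    return 0

def kruskal_identifiable(A, B, C):
    # Kruskal's sufficient condition for uniqueness of the rank-r CPD
    # with factor matrices A, B, C: k_A + k_B + k_C >= 2r + 2.
    r = A.shape[1]
    return kruskal_rank(A) + kruskal_rank(B) + kruskal_rank(C) >= 2 * r + 2
```

    For generic factor matrices the k-rank equals min(rows, columns), so for a random rank-3 decomposition of a 4 x 4 x 4 tensor the criterion reads 3 + 3 + 3 >= 8 and certifies uniqueness.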

    The average condition number of most tensor rank decomposition problems is infinite

    The tensor rank decomposition, or canonical polyadic decomposition (CPD), is the decomposition of a tensor into a sum of rank-1 tensors. The condition number of the tensor rank decomposition measures the sensitivity of the rank-1 summands with respect to structured perturbations, that is, perturbations preserving the rank of the tensor that is decomposed. The angular condition number, on the other hand, measures the perturbations of the rank-1 summands up to scaling. We show for random rank-2 tensors with Gaussian density that the expected value of the condition number is infinite. Under a mild additional assumption, we show that the same is true for most higher ranks r \geq 3 as well. In fact, as the dimensions of the tensor tend to infinity, asymptotically all ranks are covered by our analysis. In contrast, we show that rank-2 Gaussian tensors have a finite expected angular condition number. Our results underline the high computational complexity of computing tensor rank decompositions. We discuss consequences of our results for algorithm design and for testing algorithms that compute the CPD. Finally, we supply numerical experiments.