
    A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem

    The canonical tensor rank approximation problem (TAP) consists of approximating a real-valued tensor by one of low canonical rank, which is a challenging non-linear, non-convex, constrained optimization problem in which the constraint set is a non-smooth semi-algebraic set. We introduce a Riemannian Gauss-Newton method with trust region for solving small-scale, dense TAPs. The novelty of our approach is threefold. First, we parametrize the constraint set as the Cartesian product of Segre manifolds, thereby formulating the TAP as a Riemannian optimization problem, and we argue why this parametrization is among the theoretically best possible. Second, an original ST-HOSVD-based retraction operator is proposed. Third, we introduce a hot restart mechanism that efficiently detects when the optimization process is tending to an ill-conditioned tensor rank decomposition and that often yields a quick escape path from such spurious decompositions. Numerical experiments show improvements of up to three orders of magnitude in terms of the expected time to compute a successful solution over existing state-of-the-art methods.
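
    A minimal numerical sketch of the kind of least-squares problem being solved is given below. It is an illustration only: it uses a flat factor-matrix parametrization, a finite-difference Jacobian, and Levenberg-Marquardt-style damping as a crude stand-in for the trust region, and it does not reproduce the paper's Segre-manifold parametrization, ST-HOSVD retraction, or hot restarts. All function names are illustrative.

        # Minimal sketch (assumed setup, not the paper's Riemannian implementation):
        # damped Gauss-Newton for a rank-R canonical polyadic approximation.
        import numpy as np

        def cpd_reconstruct(A, B, C):
            # T_ijk = sum_r A_ir * B_jr * C_kr
            return np.einsum('ir,jr,kr->ijk', A, B, C)

        def residual(x, T, R):
            # Unpack the flat parameter vector into factor matrices and return the
            # vectorized approximation error.
            n1, n2, n3 = T.shape
            A = x[:n1 * R].reshape(n1, R)
            B = x[n1 * R:(n1 + n2) * R].reshape(n2, R)
            C = x[(n1 + n2) * R:].reshape(n3, R)
            return (cpd_reconstruct(A, B, C) - T).ravel()

        def gauss_newton_cpd(T, R, iters=100, h=1e-6, seed=0):
            rng = np.random.default_rng(seed)
            n1, n2, n3 = T.shape
            x = rng.standard_normal((n1 + n2 + n3) * R)
            lam = 1e-2                  # damping, crudely standing in for the trust region
            for _ in range(iters):
                r = residual(x, T, R)
                # Finite-difference Jacobian: affordable for tiny problems only.
                J = np.empty((r.size, x.size))
                for j in range(x.size):
                    xp = x.copy()
                    xp[j] += h
                    J[:, j] = (residual(xp, T, R) - r) / h
                step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
                if np.linalg.norm(residual(x + step, T, R)) < np.linalg.norm(r):
                    x, lam = x + step, max(lam / 3.0, 1e-12)   # accept and relax damping
                else:
                    lam *= 10.0                                # reject and tighten damping
            return x, np.linalg.norm(residual(x, T, R))

        # Usage: approximate an exactly rank-2 tensor of size 4x3x2.
        rng = np.random.default_rng(1)
        A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 3, 2))
        T = cpd_reconstruct(A0, B0, C0)
        x_hat, err = gauss_newton_cpd(T, R=2)
        print(f"final residual norm: {err:.2e}")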

    Convergence analysis of Riemannian Gauss-Newton methods and its connection with the geometric condition number

    We obtain estimates of the multiplicative constants appearing in local convergence results of the Riemannian Gauss-Newton method for least squares problems on manifolds, and we relate them to the geometric condition number of [P. Bürgisser and F. Cucker, Condition: The Geometry of Numerical Algorithms, 2013].
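
    As a rough guide to what such estimates look like (the notation below is illustrative, not taken from the paper), local convergence results for Gauss-Newton applied to $\min_x \tfrac{1}{2}\|F(x)\|^2$ on a manifold typically bound the iterates $x_k$ near a local minimizer $x_\ast$ by

        \[
        d(x_{k+1}, x_\ast) \;\le\; c_1\, d(x_k, x_\ast)^2 \;+\; c_2\, \|F(x_\ast)\|\, d(x_k, x_\ast),
        \]

    with $d$ the Riemannian distance, so that convergence is quadratic for zero-residual problems and linear otherwise. The abstract's contribution is to estimate multiplicative constants such as $c_1$ and $c_2$ and to relate them to the geometric condition number of Bürgisser and Cucker.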

    The condition number of join decompositions

    The join set of a finite collection of smooth embedded submanifolds of a mutual vector space is defined as their Minkowski sum. Join decompositions generalize some ubiquitous decompositions in multilinear algebra, namely tensor rank, Waring, partially symmetric rank, and block term decompositions. This paper examines the numerical sensitivity of join decompositions to perturbations; specifically, we consider the condition number for general join decompositions. It is characterized as a distance to a set of ill-posed points in a supplementary product of Grassmannians. We prove that this condition number can be computed efficiently as the smallest singular value of an auxiliary matrix. For some special join sets, we characterize the behavior of sequences in the join set converging to the latter's boundary points. Finally, we specialize our discussion to the tensor rank and Waring decompositions and provide several numerical experiments confirming the key results.
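
    For the tensor rank specialization, the "smallest singular value of an auxiliary matrix" computation can be sketched as follows. The sketch assumes that the condition number coincides with the reciprocal of the smallest singular value of a Terracini-style matrix whose block columns are orthonormal bases of the tangent spaces to the Segre manifold at the rank-1 terms; the helper names are illustrative and the normalization may differ from the paper's.

        # Hedged sketch: condition number of a tensor rank decomposition as the
        # reciprocal smallest singular value of a Terracini-style matrix.
        import numpy as np

        def segre_tangent_basis(a, b, c):
            # The tangent space to the Segre manifold of rank-1 tensors at a⊗b⊗c is
            # spanned by x⊗b⊗c, a⊗y⊗c and a⊗b⊗z; build a (redundant) spanning set
            # and orthonormalize it via the SVD.
            n1, n2, n3 = a.size, b.size, c.size
            cols = [np.kron(np.kron(e, b), c) for e in np.eye(n1)]
            cols += [np.kron(np.kron(a, e), c) for e in np.eye(n2)]
            cols += [np.kron(np.kron(a, b), e) for e in np.eye(n3)]
            U, s, _ = np.linalg.svd(np.column_stack(cols), full_matrices=False)
            return U[:, s > 1e-12 * s[0]]      # generically n1 + n2 + n3 - 2 columns

        def tensor_rank_condition_number(factors):
            # factors: list of triples (a_i, b_i, c_i) giving the rank-1 terms a_i⊗b_i⊗c_i.
            terracini = np.hstack([segre_tangent_basis(a, b, c) for (a, b, c) in factors])
            smin = np.linalg.svd(terracini, compute_uv=False)[-1]
            return np.inf if smin < 1e-300 else 1.0 / smin

        # Usage: condition number of a random rank-2 decomposition of a 3x3x3 tensor.
        rng = np.random.default_rng(0)
        factors = [tuple(rng.standard_normal(3) for _ in range(3)) for _ in range(2)]
        print(f"condition number: {tensor_rank_condition_number(factors):.3e}")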

    On the average condition number of tensor rank decompositions

    We compute the expected value of powers of the geometric condition number of random tensor rank decompositions. It is shown in particular that the expected value of the condition number of $n_1 \times n_2 \times 2$ tensors with a random rank-$r$ decomposition, given by factor matrices with independent and identically distributed standard normal entries, is infinite. This entails that it is expected and probable that such a rank-$r$ decomposition is sensitive to perturbations of the tensor. Moreover, it provides concrete further evidence that tensor decomposition can be a challenging problem, also from the numerical point of view. On the other hand, we provide strong theoretical and empirical evidence that tensors of size $n_1 \times n_2 \times n_3$ with all $n_1, n_2, n_3 \ge 3$ have a finite average condition number. This suggests there exists a gap in the expected sensitivity of tensors between those of format $n_1 \times n_2 \times 2$ and other order-3 tensors. For establishing these results, we show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions with an infinite geometric condition number is bounded from below by the inverse of this condition number. That is, we prove one inequality towards a so-called condition number theorem for the tensor rank decomposition.
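
    In symbols (chosen here for illustration; the paper fixes the precise weights and norms), the lower bound described in the last two sentences reads

        \[
        \operatorname{dist}_{\mathrm{w}}\bigl((a_1,\dots,a_r),\, \Sigma_\infty\bigr) \;\ge\; \frac{1}{\kappa(a_1,\dots,a_r)},
        \]

    where $(a_1,\dots,a_r)$ is the tuple of rank-1 terms of the decomposition, $\Sigma_\infty$ is the locus of ill-posed decompositions with infinite geometric condition number, $\kappa$ is the geometric condition number, and $\operatorname{dist}_{\mathrm{w}}$ is the weighted distance mentioned in the abstract. A full condition number theorem would also require a matching upper bound, which is why the abstract speaks of proving one inequality towards such a theorem.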

    The average condition number of most tensor rank decomposition problems is infinite

    The tensor rank decomposition, or canonical polyadic decomposition (CPD), is the decomposition of a tensor into a sum of rank-1 tensors. The condition number of the tensor rank decomposition measures the sensitivity of the rank-1 summands with respect to structured perturbations, i.e., perturbations that preserve the rank of the tensor being decomposed. The angular condition number, on the other hand, measures the perturbations of the rank-1 summands up to scaling. We show for random rank-2 tensors with Gaussian density that the expected value of the condition number is infinite. Under a mild additional assumption, we show that the same is true for most higher ranks $r \geq 3$ as well. In fact, as the dimensions of the tensor tend to infinity, asymptotically all ranks are covered by our analysis. In contrast, we show that rank-2 Gaussian tensors have a finite expected angular condition number. Our results underline the high computational complexity of computing tensor rank decompositions. We discuss consequences of our results for algorithm design and for testing algorithms that compute the CPD. Finally, we supply numerical experiments.