675 research outputs found

    On the average condition number of tensor rank decompositions

    We compute the expected value of powers of the geometric condition number of random tensor rank decompositions. It is shown in particular that the expected value of the condition number of $n_1 \times n_2 \times 2$ tensors with a random rank-$r$ decomposition, given by factor matrices with independent and identically distributed standard normal entries, is infinite. This entails that it is expected and probable that such a rank-$r$ decomposition is sensitive to perturbations of the tensor. Moreover, it provides concrete further evidence that tensor decomposition can be a challenging problem, also from the numerical point of view. On the other hand, we provide strong theoretical and empirical evidence that tensors of size $n_1 \times n_2 \times n_3$ with all $n_1, n_2, n_3 \ge 3$ have a finite average condition number. This suggests there exists a gap in the expected sensitivity of tensors between those of format $n_1 \times n_2 \times 2$ and other order-3 tensors. For establishing these results, we show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions with an infinite geometric condition number is bounded from below by the inverse of this condition number. That is, we prove one inequality towards a so-called condition number theorem for the tensor rank decomposition.
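
    The sketch below (not the authors' code) shows one way to estimate this geometric condition number numerically, under the assumption that it equals the reciprocal of the smallest singular value of the Terracini matrix assembled from orthonormal bases of the tangent spaces at the rank-1 terms; the Monte Carlo loop contrasts the $n_1 \times n_2 \times 2$ format with a format where all modes are at least 3.

        import numpy as np

        def cpd_condition_number(A, B, C):
            # Condition number of the rank-r decomposition with factor matrices
            # A (n1 x r), B (n2 x r), C (n3 x r): reciprocal of the smallest
            # singular value of the Terracini matrix whose column blocks are
            # orthonormal bases of the tangent spaces at the rank-1 terms.
            n1, r = A.shape
            n2, n3 = B.shape[0], C.shape[0]
            blocks = []
            for i in range(r):
                a, b, c = A[:, i], B[:, i], C[:, i]
                # Columns spanning the tangent directions x⊗b⊗c, a⊗y⊗c, a⊗b⊗z (vectorized).
                Ta = np.kron(np.eye(n1), np.kron(b, c).reshape(-1, 1))
                Tb = np.kron(a.reshape(-1, 1), np.kron(np.eye(n2), c.reshape(-1, 1)))
                Tc = np.kron(np.kron(a, b).reshape(-1, 1), np.eye(n3))
                U, _, _ = np.linalg.svd(np.hstack([Ta, Tb, Tc]), full_matrices=False)
                blocks.append(U[:, :n1 + n2 + n3 - 2])   # tangent space dimension
            terracini = np.hstack(blocks)
            return 1.0 / np.linalg.svd(terracini, compute_uv=False)[-1]

        # Contrast the two regimes from the abstract with random Gaussian factors.
        rng = np.random.default_rng(0)
        for n1, n2, n3 in [(5, 5, 2), (5, 5, 5)]:
            r = 3
            kappas = [cpd_condition_number(rng.standard_normal((n1, r)),
                                           rng.standard_normal((n2, r)),
                                           rng.standard_normal((n3, r)))
                      for _ in range(200)]
            print((n1, n2, n3), "median condition number:", np.median(kappas))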

    On the minimal ranks of matrix pencils and the existence of a best approximate block-term tensor decomposition

    Under the action of the general linear group with tensor structure, the ranks of matrices $A$ and $B$ forming an $m \times n$ pencil $A + \lambda B$ can change, but in a restricted manner. Specifically, with every pencil one can associate a pair of minimal ranks, which is unique up to a permutation. This notion can be defined for matrix pencils and, more generally, also for matrix polynomials of arbitrary degree. In this paper, we provide a formal definition of the minimal ranks, discuss its properties and the natural hierarchy it induces in a pencil space. Then, we show how the minimal ranks of a pencil can be determined from its Kronecker canonical form. For illustration, we classify the orbits according to their minimal ranks (under the action of the general linear group) in the case of real pencils with $m, n \le 4$. Subsequently, we show that real regular $2k \times 2k$ pencils having only complex-valued eigenvalues, which form an open positive-volume set, do not admit a best approximation (in the norm topology) on the set of real pencils whose minimal ranks are bounded by $2k-1$. Our results can be interpreted from a tensor viewpoint, where the minimal ranks of a degree-$(d-1)$ matrix polynomial characterize the minimal ranks of matrices constituting a block-term decomposition of an $m \times n \times d$ tensor into a sum of matrix-vector tensor products. Comment: This work was supported by the European Research Council under the European Programme FP7/2007-2013, Grant AdG-2013-320594 "DECODA".
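
    As a rough illustration (mine, not the paper's), and assuming the minimal ranks concern the ranks attainable by real linear combinations $\alpha A + \beta B$, the sketch below samples the projective line of such combinations; the $2 \times 2$ example mirrors the claim about regular real pencils with only complex eigenvalues, whose real combinations never drop rank.

        import numpy as np

        def pencil_ranks(A, B, num_samples=720, tol=1e-10):
            # Sample directions (alpha, beta) on the real projective line and
            # record the rank of alpha*A + beta*B in each direction.
            thetas = np.linspace(0.0, np.pi, num_samples, endpoint=False)
            return np.array([np.linalg.matrix_rank(np.cos(t) * A + np.sin(t) * B, tol=tol)
                             for t in thetas])

        # Identity vs. 90-degree rotation: a real regular 2x2 pencil with complex
        # eigenvalues. det(alpha*I + beta*R) = alpha^2 + beta^2 > 0, so every real
        # combination has full rank.
        A = np.eye(2)
        R = np.array([[0.0, -1.0], [1.0, 0.0]])
        print(sorted(set(pencil_ranks(A, R).tolist())))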

    On convergence of the maximum block improvement method

    The MBI (maximum block improvement) method is a greedy approach to solving optimization problems whose decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all the others is relatively easy, the MBI method updates, at each iteration, the block of variables yielding the maximal improvement, which is arguably one of the most natural and simple ways to tackle block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the well-known power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
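
    A minimal sketch of the MBI idea on the rank-one tensor optimization model with spherical constraints mentioned above (maximize <T, x⊗y⊗z> over unit vectors): compute the optimal single-block update for each block, then apply only the one giving the largest improvement. Names and the stopping rule are illustrative, not taken from the paper.

        import numpy as np

        def mbi_rank1(T, iters=500, tol=1e-12, seed=0):
            # Maximum block improvement for max <T, x⊗y⊗z> over unit vectors.
            # Each iteration updates only the block whose optimal update
            # (a normalized tensor contraction) raises the objective the most.
            rng = np.random.default_rng(seed)
            n1, n2, n3 = T.shape
            x = rng.standard_normal(n1); x /= np.linalg.norm(x)
            y = rng.standard_normal(n2); y /= np.linalg.norm(y)
            z = rng.standard_normal(n3); z /= np.linalg.norm(z)
            f = np.einsum('ijk,i,j,k->', T, x, y, z)
            for _ in range(iters):
                gx = np.einsum('ijk,j,k->i', T, y, z)   # optimal x direction given (y, z)
                gy = np.einsum('ijk,i,k->j', T, x, z)   # optimal y direction given (x, z)
                gz = np.einsum('ijk,i,j->k', T, x, y)   # optimal z direction given (x, y)
                # The objective after each candidate block update equals the norm
                # of the corresponding contraction.
                vals = [np.linalg.norm(gx), np.linalg.norm(gy), np.linalg.norm(gz)]
                k = int(np.argmax(vals))
                if vals[k] - f <= tol:                   # no block improves enough
                    break
                if k == 0:
                    x = gx / vals[k]
                elif k == 1:
                    y = gy / vals[k]
                else:
                    z = gz / vals[k]
                f = vals[k]
            return f, (x, y, z)

        # Example: a locally best rank-one value of a random 4x5x6 tensor.
        T = np.random.default_rng(1).standard_normal((4, 5, 6))
        value, (x, y, z) = mbi_rank1(T)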

    Fast truncation of mode ranks for bilinear tensor operations

    We propose a fast algorithm for mode rank truncation of the result of a bilinear operation on 3-tensors given in the Tucker or canonical form. If the arguments and the result have mode sizes $n$ and mode ranks $r$, the computation costs $O(nr^3 + r^4)$ operations. The algorithm is based on the cross approximation of Gram matrices, and the accuracy of the resulting Tucker approximation is limited by the square root of machine precision. Comment: 9 pages, 2 tables. Submitted to Numerical Linear Algebra and Applications, special edition for ICSMT conference, Hong Kong, January 201
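
    The sketch below is not the paper's $O(nr^3 + r^4)$ cross-approximation algorithm; it is a dense baseline (my own) showing the Gram-matrix route to mode-rank truncation, which also makes the accuracy caveat concrete: forming the Gram matrix squares the singular values, so the attainable relative accuracy is roughly the square root of machine precision.

        import numpy as np

        def truncate_mode_ranks(X, ranks):
            # Truncate a dense tensor X to Tucker form using, for each mode k,
            # the dominant eigenvectors of the Gram matrix G_k of the mode-k
            # unfolding. Squaring the singular values inside G_k limits the
            # attainable accuracy to about sqrt(machine epsilon).
            factors, core = [], X
            for k in range(X.ndim):
                Xk = np.moveaxis(X, k, 0).reshape(X.shape[k], -1)   # mode-k unfolding
                w, U = np.linalg.eigh(Xk @ Xk.T)                    # ascending eigenvalues
                Uk = U[:, ::-1][:, :ranks[k]]                       # dominant eigenvectors
                factors.append(Uk)
                core = np.tensordot(core, Uk, axes=(0, 0))          # apply Uk^T to mode k
            return core, factors

        # Example: truncate a random 20x20x20 tensor to multilinear ranks (5, 5, 5).
        X = np.random.default_rng(2).standard_normal((20, 20, 20))
        core, factors = truncate_mode_ranks(X, (5, 5, 5))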