3 research outputs found

    Low Complexity Damped Gauss-Newton Algorithms for CANDECOMP/PARAFAC

    Full text link
    The damped Gauss-Newton (dGN) algorithm for CANDECOMP/PARAFAC (CP) decomposition can handle the challenges of collinearity of factors and of factors with different magnitudes; nevertheless, for factorization of an N-D tensor of size I_1 × ⋯ × I_N with rank R, the algorithm is computationally demanding due to the construction of a large approximate Hessian of size RT × RT, and its inversion, where T = ∑_n I_n. In this paper, we propose a fast implementation of the dGN algorithm based on novel expressions of the inverse approximate Hessian in block form. Besides computation of the gradient (a part common to both methods), the new implementation has lower computational complexity, requiring the inversion of a matrix of size NR² × NR², which is much smaller than the whole approximate Hessian when T ≫ NR. In addition, the implementation has lower memory requirements, because neither the Hessian nor its inverse ever needs to be stored in its entirety. A variant of the algorithm working with complex-valued data is proposed as well. Complexity and performance of the proposed algorithm are compared with those of dGN and ALS with line search on examples of difficult benchmark tensors.
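    To make the cost the abstract describes concrete, the sketch below implements one plain dGN step for a 3-way CP model, explicitly forming the full RT × RT approximate Hessian JᵀJ + λI (here via a finite-difference Jacobian). This is the naive baseline whose construction and inversion the paper's block-form method avoids; the function names and the finite-difference approach are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def cp_reconstruct(A, B, C):
        # [[A, B, C]]_{ijk} = sum_r A[i,r] * B[j,r] * C[k,r]
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    def dgn_step(T, A, B, C, damping=1e-3, eps=1e-6):
        """One damped Gauss-Newton step for min ||[[A,B,C]] - T||_F^2.

        Forms the full approximate Hessian J^T J + damping*I of size
        (R*T_sum) x (R*T_sum), T_sum = I1+I2+I3 -- exactly the object the
        paper's block-inverse expressions avoid building and inverting.
        The Jacobian is approximated by forward finite differences, an
        illustrative shortcut rather than the paper's analytic formulas.
        """
        shapes = [A.shape, B.shape, C.shape]
        x0 = np.concatenate([M.ravel() for M in (A, B, C)])

        def unpack(x):
            out, k = [], 0
            for (m, r) in shapes:
                out.append(x[k:k + m * r].reshape(m, r))
                k += m * r
            return out

        def resid(x):
            Au, Bu, Cu = unpack(x)
            return (cp_reconstruct(Au, Bu, Cu) - T).ravel()

        r0 = resid(x0)
        J = np.empty((r0.size, x0.size))
        for j in range(x0.size):           # one Jacobian column per parameter
            xp = x0.copy()
            xp[j] += eps
            J[:, j] = (resid(xp) - r0) / eps
        H = J.T @ J + damping * np.eye(x0.size)   # full approximate Hessian
        dx = np.linalg.solve(H, -J.T @ r0)        # damped GN update
        return unpack(x0 + dx)
    ```

    Starting from factors perturbed away from a known rank-R decomposition, a few such steps drive the reconstruction error down; the damping term also regularizes the scaling indeterminacy of CP that makes the undamped JᵀJ singular.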

    Stability of CANDECOMP-PARAFAC tensor decomposition

    No full text