
    Distributed Non-Negative Tensor Train Decomposition

    The era of exascale computing opens new avenues for innovation and discovery in many scientific, engineering, and commercial fields. With the exaflops, however, come extremely large, high-dimensional data generated by high-performance computing. High-dimensional data are represented as multidimensional arrays, also known as tensors. The presence of latent (not directly observable) structures in a tensor allows a unique representation and compression of the data by classical tensor factorization techniques. However, the classical tensor methods are not always stable, or their memory requirements can grow exponentially, which makes them unsuitable for high-dimensional tensors. Tensor train (TT) is a state-of-the-art tensor network introduced for the factorization of high-dimensional tensors. TT transforms the initial high-dimensional tensor into a network of three-dimensional tensors that requires only linear storage. Many kinds of real-world data, such as density, temperature, population, and probability, are non-negative, and algorithms that preserve non-negativity are preferred for ease of interpretation. Here, we introduce a distributed non-negative tensor-train decomposition and demonstrate its scalability and compression on synthetic and real-world big datasets.
    Comment: Accepted to IEEE-HPEC 202
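    The linear-storage claim is concrete enough to sketch in code. Below is a minimal, illustrative TT-SVD in Python, i.e. the standard sequential-SVD construction of the TT format the abstract refers to; it is not the paper's distributed non-negative algorithm, and the name `tt_svd` and the `max_rank` cap are assumptions made for illustration.

    ```python
    import numpy as np

    def tt_svd(tensor, max_rank):
        """Factor a d-way array into d three-way TT cores via sequential SVDs.

        Storage falls from prod(n_k) entries to sum(r_{k-1} * n_k * r_k),
        i.e. linear in the number of modes when the ranks are bounded.
        """
        shape = tensor.shape
        cores = []
        rank_prev = 1
        unfolding = tensor
        for k in range(len(shape) - 1):
            # Unfold so rows carry (previous rank x current mode size).
            unfolding = unfolding.reshape(rank_prev * shape[k], -1)
            u, s, vt = np.linalg.svd(unfolding, full_matrices=False)
            rank = min(max_rank, s.size)            # truncate for compression
            cores.append(u[:, :rank].reshape(rank_prev, shape[k], rank))
            unfolding = s[:rank, None] * vt[:rank]  # pass the remainder forward
            rank_prev = rank
        cores.append(unfolding.reshape(rank_prev, shape[-1], 1))
        return cores

    # Usage: compress a random 4-way array and measure the reconstruction error.
    rng = np.random.default_rng(0)
    x = rng.random((6, 7, 8, 9))
    cores = tt_svd(x, max_rank=20)
    approx = cores[0]
    for core in cores[1:]:
        approx = np.tensordot(approx, core, axes=([-1], [0]))
    approx = approx.reshape(x.shape)  # squeeze the boundary rank-1 axes
    print(np.linalg.norm(x - approx) / np.linalg.norm(x))
    ```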

    Sparse Nonnegative Tensor Factorization and Completion with Noisy Observations

    In this paper, we study the sparse nonnegative tensor factorization and completion problem from partial and noisy observations of third-order tensors. Because of sparsity and nonnegativity, the underlying tensor is decomposed into the tensor-tensor product of one sparse nonnegative tensor and one nonnegative tensor. We propose to minimize the sum of the maximum likelihood estimate for the observations with nonnegativity constraints and the tensor $\ell_0$ norm for the sparse factor. We show that error bounds for the estimator of the proposed model can be established under general noise observations, and we derive detailed error bounds for specific noise distributions, including additive Gaussian noise, additive Laplace noise, and Poisson observations. Moreover, the minimax lower bounds are shown to match the established upper bounds up to a logarithmic factor of the sizes of the underlying tensor. These theoretical results for tensors are better than those obtained for matrices, which illustrates the advantage of nonnegative sparse tensor models for completion and denoising. Numerical experiments validate the superiority of the proposed tensor-based method over the matrix-based approach.
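    For concreteness, here is a minimal Python sketch of the two ingredients the abstract names, under the assumption that "tensor-tensor product" means the FFT-based t-product commonly used for third-order tensors; the paper's maximum-likelihood estimator itself is not reproduced, and both function names are hypothetical.

    ```python
    import numpy as np

    def t_product(a, b):
        """Tensor-tensor product of third-order tensors.

        a: (n1, r, n3), b: (r, n2, n3) -> (n1, n2, n3).
        Facewise matrix products in the Fourier domain along the third axis.
        """
        af = np.fft.fft(a, axis=2)
        bf = np.fft.fft(b, axis=2)
        cf = np.einsum('ikt,kjt->ijt', af, bf)
        return np.real(np.fft.ifft(cf, axis=2))

    def project_sparse_nonneg(a, s):
        """Project onto the constraint set {a >= 0, ||a||_0 <= s}.

        Clip negatives, then keep the s largest surviving entries
        (ties at the threshold may keep slightly more than s).
        """
        a = np.maximum(a, 0.0)
        if np.count_nonzero(a) > s:
            thresh = np.partition(a.ravel(), -s)[-s]
            a = np.where(a >= thresh, a, 0.0)
        return a

    # Usage: a sparse nonnegative factor times a nonnegative factor, as in the model.
    rng = np.random.default_rng(0)
    a = project_sparse_nonneg(rng.standard_normal((10, 5, 8)), s=60)  # sparse, >= 0
    b = rng.random((5, 12, 8))                                        # >= 0
    y = t_product(a, b)              # noiseless version of the observed tensor
    print(y.shape, np.count_nonzero(a))
    ```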