
    Overview of Constrained PARAFAC Models

    In this paper, we present an overview of constrained PARAFAC models in which the constraints model linear dependencies among the columns of the factor matrices of the tensor decomposition or, alternatively, the pattern of interactions between different modes of the tensor, as captured by the equivalent core tensor. Some tensor prerequisites, with a particular emphasis on mode combination using Kronecker products of canonical vectors, which simplifies matricization operations, are first introduced. This Kronecker-product-based approach is also formulated in terms of the index notation, which provides an original and concise formalism both for matricizing tensors and for writing tensor models. Then, after a brief reminder of the PARAFAC and Tucker models, two families of constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models, are described in a unified framework for Nth-order tensors. New tensor models, called nested Tucker models and block PARALIND/CONFAC models, are also introduced. A link between PARATUCK models and constrained PARAFAC models is then established. Finally, new uniqueness properties of PARATUCK models are deduced from sufficient conditions for the essential uniqueness of their associated constrained PARAFAC models.
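
    As a concrete illustration of this matricization formalism (a third-order special case written here for orientation, not taken from the paper, which treats Nth-order tensors in full generality), a rank-R PARAFAC model and its mode-1 unfolding read, in standard notation,

        x_{ijk} = \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr},
        \qquad
        \mathbf{X}_{(1)} = \mathbf{A}\,(\mathbf{C} \odot \mathbf{B})^{\mathsf{T}},

    where A, B, C are the factor matrices and \odot denotes the Khatri-Rao (column-wise Kronecker) product; it is this Kronecker/Khatri-Rao structure that makes the matricized forms of such models so compact.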

    Statistical efficiency of structured CPD estimation applied to Wiener-Hammerstein modeling

    Accepted for publication in the Proceedings of the European Signal Processing Conference (EUSIPCO) 2015. The computation of a structured canonical polyadic decomposition (CPD) is useful for addressing several important modeling problems in real-world applications. In this paper, we consider the identification of a nonlinear system by means of a Wiener-Hammerstein model, assuming that a high-order Volterra kernel of that system has been previously estimated. Such a kernel, viewed as a tensor, admits a CPD with banded circulant factors which comprise the model parameters. To estimate them, we formulate specialized estimators based on recently proposed algorithms for the computation of structured CPDs. Then, considering the presence of additive white Gaussian noise, we derive a closed-form expression for the Cramér-Rao bound (CRB) associated with this estimation problem. Finally, we assess the statistical performance of the proposed estimators via Monte Carlo simulations, comparing their mean-square error with the CRB.
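
    To make the structure concrete, here is a minimal numpy sketch (not the paper's code; the filter values, the cubic nonlinearity, and the zero-padded shifting convention are assumptions for illustration) of how a cubic Volterra kernel of a Wiener-Hammerstein system factors as a symmetric CPD with a banded, shifted-column factor:

        import numpy as np

        # Hypothetical Wiener-Hammerstein filters: input FIR h, output FIR g,
        # with a cubic static nonlinearity in between.
        h = np.array([1.0, 0.5, 0.25])
        g = np.array([0.8, -0.3, 0.1])
        M = len(h) + len(g) - 1            # memory of the combined kernel

        def shifted_factor(h, n_shifts, size):
            """Banded factor matrix whose column r is h delayed by r samples."""
            A = np.zeros((size, n_shifts))
            for r in range(n_shifts):
                stop = min(size, r + len(h))
                A[r:stop, r] = h[:stop - r]
            return A

        A = shifted_factor(h, len(g), M)   # the structured CPD factor

        # Cubic kernel as a weighted symmetric CPD:
        # H[i, j, k] = sum_r g[r] * A[i, r] * A[j, r] * A[k, r]
        H = np.einsum('r,ir,jr,kr->ijk', g, A, A, A)

    The free parameters of the tensor H are just the entries of h and g, which is why structure-exploiting CPD estimators and a structured CRB are the natural tools for this problem.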

    rTensor: An R Package for Multidimensional Array (Tensor) Unfolding, Multiplication, and Decomposition

    rTensor is an R package designed to provide a common set of operations and decompositions for multidimensional arrays (tensors). We provide an S4 class that wraps around the base 'array' class and overloads operations familiar to users of 'array', and we provide additional functionality for tensor operations that are becoming more relevant in the recent literature. We also provide a general unfolding operation, of which the k-mode unfolding and matrix vectorization are special cases. Finally, the rTensor package implements common tensor decompositions such as the canonical polyadic decomposition, the Tucker decomposition, multilinear principal component analysis, and the t-singular value decomposition, as well as related matrix-based algorithms such as the generalized low rank approximation of matrices and the population value decomposition.
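
    As a language-neutral illustration of the unfolding just described (a numpy analogue of the concept, not the rTensor R API), the k-mode unfolding is a single axis permutation followed by a column-major reshape, with vectorization as another special case:

        import numpy as np

        def k_unfold(tensor, k):
            """Mode-k unfolding: mode k becomes the rows; the remaining
            modes are flattened into columns in column-major order."""
            return np.moveaxis(tensor, k, 0).reshape(tensor.shape[k], -1, order='F')

        X = np.arange(24.0).reshape(3, 4, 2)
        X1 = k_unfold(X, 0)                 # 3 x 8 mode-1 unfolding
        vec_X = X.reshape(-1, order='F')    # vectorization special case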

    Computing Large-Scale Matrix and Tensor Decomposition with Structured Factors: A Unified Nonconvex Optimization Perspective

    This article aims to offer a comprehensive tutorial on the computational aspects of structured matrix and tensor factorization. Unlike existing tutorials that mainly focus on algorithmic procedures for a small set of problems, e.g., nonnegativity- or sparsity-constrained factorization, we take a top-down approach: we start with general optimization theory (e.g., inexact and accelerated block coordinate descent, stochastic optimization, and Gauss-Newton methods) that covers a wide range of factorization problems with diverse constraints and regularization terms of engineering interest. Then, we go 'under the hood' to showcase specific algorithm designs under these introduced principles. We pay particular attention to recent algorithmic developments in structured tensor and matrix factorization (e.g., random sketching, adaptive-step-size stochastic optimization, and structure-exploiting second-order algorithms), which represent the state of the art yet are much less covered in the literature than block coordinate descent (BCD) based methods. We expect the article to have educational value in the field of structured factorization and hope it stimulates more research in this important and exciting direction. To appear in IEEE Signal Processing Magazine.
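
    As a small worked instance of the BCD baseline the tutorial starts from (a textbook sketch under standard conventions, not code from the article), alternating least squares for a rank-R CPD of a third-order tensor updates one factor at a time, each update being an exact linear least-squares solve:

        import numpy as np

        def unfold(T, k):
            """Mode-k unfolding (column-major convention)."""
            return np.moveaxis(T, k, 0).reshape(T.shape[k], -1, order='F')

        def khatri_rao(A, B):
            """Column-wise Kronecker product: column r is kron(A[:, r], B[:, r])."""
            return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

        def cp_als(T, R, n_iter=100, seed=0):
            """Rank-R CPD of a 3rd-order tensor by alternating least squares,
            the classic block coordinate descent scheme for this problem."""
            rng = np.random.default_rng(seed)
            A, B, C = (rng.standard_normal((s, R)) for s in T.shape)
            for _ in range(n_iter):
                A = unfold(T, 0) @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
                B = unfold(T, 1) @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
                C = unfold(T, 2) @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
            return A, B, C

    Each update solves its block's least-squares subproblem exactly; the inexact, stochastic, and Gauss-Newton methods surveyed in the article can be read as principled replacements for these exact block solves.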

    Multi-dimensional data analytics and deep learning via tensor networks

    With the boom in big data and multi-sensor technology, multi-dimensional data, known as tensors, have demonstrated a promising capability for capturing multidimensional correlations by efficiently extracting latent structures, and have drawn considerable attention in disciplines such as image processing, recommender systems, and data analytics. In addition to the multi-dimensional nature of real data, artificially designed tensors, referred to as layers in deep neural networks, have also been intensively investigated and have achieved state-of-the-art performance in image processing, speech processing, and natural language understanding. However, algorithms operating on multi-dimensional data are unfortunately expensive in computation and storage, which limits their application when computational resources are scarce. Although tensor factorization has been proposed to reduce dimensionality and alleviate the computational cost, the trade-off among computation, storage, and performance has not been well studied.

    To this end, we first investigate an efficient dimensionality reduction method based on a novel Tensor Train (TT) factorization. In particular, we propose Tensor Train Principal Component Analysis (TT-PCA) and Tensor Train Neighborhood Preserving Embedding (TT-NPE), which project data onto a Tensor Train Subspace (TTS) and effectively extract discriminative features from the data. Mathematical analysis and simulations demonstrate that TT-PCA and TT-NPE achieve a better trade-off among computation, storage, and performance than benchmark tensor-based dimensionality reduction approaches. We then extend the TT factorization to the general Tensor Ring (TR) factorization and propose a tensor ring completion algorithm, which can use 10% randomly observed pixels to recover a gunshot video with an error rate of only 6.25%. Inspired by this trade-off between model complexity and data representation, we introduce Tensor Ring Nets (TRN) to compress deep neural networks significantly. Using the benchmark 28-layer WideResNet architecture, TRN compresses the network by 243× with only a 2.3% degradation in CIFAR-10 image classification accuracy.
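
    As a self-contained illustration of the TT idea underlying TT-PCA and TT-NPE (a generic TT-SVD sketch, not the dissertation's code), a dense tensor can be factored into a train of small 3-way cores by successive truncated SVDs, trading a controlled reconstruction error for a large reduction in storage:

        import numpy as np

        def tt_svd(T, max_rank):
            """Tensor-Train factorization via successive truncated SVDs.
            Returns cores G[k] of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1."""
            dims = T.shape
            cores, r_prev = [], 1
            M = np.asarray(T, dtype=float)
            for n in dims[:-1]:
                M = M.reshape(r_prev * n, -1)
                U, s, Vt = np.linalg.svd(M, full_matrices=False)
                r = min(max_rank, len(s))
                cores.append(U[:, :r].reshape(r_prev, n, r))
                M = s[:r, None] * Vt[:r]
                r_prev = r
            cores.append(M.reshape(r_prev, dims[-1], 1))
            return cores

        def tt_to_full(cores):
            """Contract the train back to a dense tensor (to check the error)."""
            out = cores[0]
            for G in cores[1:]:
                out = np.tensordot(out, G, axes=(-1, 0))
            return out[0, ..., 0]

        X = np.random.default_rng(0).standard_normal((8, 8, 8, 8))
        cores = tt_svd(X, max_rank=4)
        rel_err = np.linalg.norm(tt_to_full(cores) - X) / np.linalg.norm(X)

    Here the 8x8x8x8 tensor's 4096 entries are represented by 320 core entries (about 12.8x fewer); the Tensor Ring factorization studied in the dissertation generalizes this chain by closing it into a loop, i.e., allowing r_0 = r_d > 1.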