
    Dictionary-based Tensor Canonical Polyadic Decomposition

    To ensure interpretability of the sources extracted by tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
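
    A minimal NumPy sketch (not the paper's algorithm) of one way such a constraint can be handled: an alternating least-squares CPD in which, after each unconstrained update of the first factor, every column is replaced by the dictionary atom it correlates with best. The names (cpd_dict_als, D for the dictionary, R for the rank) are illustrative assumptions.

        # Sketch: ALS for a rank-R CPD of a 3-way tensor T where the columns
        # of the first factor are constrained to be atoms of a dictionary D.
        import numpy as np

        def khatri_rao(B, C):
            # Column-wise Kronecker product: (J*K) x R
            return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

        def cpd_dict_als(T, D, R, n_iter=100):
            I, J, K = T.shape
            rng = np.random.default_rng(0)
            A = D[:, rng.choice(D.shape[1], R, replace=False)]
            B = rng.standard_normal((J, R))
            C = rng.standard_normal((K, R))
            T1 = T.reshape(I, -1)                      # mode-1 unfolding
            T2 = T.transpose(1, 0, 2).reshape(J, -1)   # mode-2 unfolding
            T3 = T.transpose(2, 0, 1).reshape(K, -1)   # mode-3 unfolding
            for _ in range(n_iter):
                # Unconstrained least-squares updates for B and C
                B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
                C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
                # Constrained update for A: per component, pick the atom
                # most correlated with the unconstrained solution
                A_free = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
                Dn = D / np.linalg.norm(D, axis=0)
                idx = np.argmax(np.abs(Dn.T @ A_free), axis=0)
                scale = np.sum(Dn[:, idx] * A_free, axis=0)
                A = Dn[:, idx]
                B = B * scale                          # absorb scaling into B
            return A, B, C, idx                        # idx: selected atoms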

    Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
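
    As an illustration of the basic Tucker model mentioned in the abstract, here is a minimal NumPy sketch of the HOSVD: each factor matrix is taken from the SVD of the corresponding unfolding, and the core tensor is obtained by contracting the data with the factor transposes. This is the generic textbook construction, not code from the paper.

        # Sketch: HOSVD of a dense tensor with NumPy
        import numpy as np

        def unfold(T, mode):
            # Mode-n unfolding: rows indexed by the chosen mode
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def hosvd(T):
            # Factor matrices: left singular vectors of each unfolding
            U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
                 for n in range(T.ndim)]
            # Core tensor: contract T with each factor's transpose
            G = T
            for n, Un in enumerate(U):
                G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
            return G, U

        # Check: the tensor is exactly recovered from its core and factors
        T = np.random.default_rng(1).standard_normal((3, 4, 5))
        G, U = hosvd(T)
        R = G
        for n, Un in enumerate(U):
            R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
        print(np.allclose(T, R))  # True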

    On orthogonal tensors and best rank-one approximation ratio

    As is well known, the smallest possible ratio between the spectral norm and the Frobenius norm of an $m \times n$ matrix with $m \le n$ is $1/\sqrt{m}$ and is (up to scalar scaling) attained only by matrices having pairwise orthonormal rows. In the present paper, the smallest possible ratio between spectral and Frobenius norms of $n_1 \times \dots \times n_d$ tensors of order $d$, also called the best rank-one approximation ratio in the literature, is investigated. The exact value is not known for most configurations of $n_1 \le \dots \le n_d$. Using a natural definition of orthogonal tensors over the real field (resp., unitary tensors over the complex field), it is shown that the obvious lower bound $1/\sqrt{n_1 \cdots n_{d-1}}$ is attained if and only if a tensor is orthogonal (resp., unitary) up to scaling. Whether or not orthogonal or unitary tensors exist depends on the dimensions $n_1, \dots, n_d$ and the field. A connection between the (non)existence of real orthogonal tensors of order three and the classical Hurwitz problem on composition algebras can be established: existence of orthogonal tensors of size $\ell \times m \times n$ is equivalent to the admissibility of the triple $[\ell, m, n]$ to the Hurwitz problem. Some implications for higher-order tensors are then given. For instance, real orthogonal $n \times \dots \times n$ tensors of order $d \ge 3$ do exist, but only when $n = 1, 2, 4, 8$. In the complex case, the situation is more drastic: unitary tensors of size $\ell \times m \times n$ with $\ell \le m \le n$ exist only when $\ell m \le n$. Finally, some numerical illustrations for spectral norm computation are presented.
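
    A small numerical illustration in the spirit of the paper's closing remark (a sketch, not the authors' code): the $2 \times 2 \times 2$ multiplication tensor of the complex numbers should be an orthogonal tensor, with spectral-to-Frobenius ratio attaining the lower bound $1/\sqrt{n_1 n_2} = 1/2$. The spectral norm is estimated with the standard higher-order power method (alternating updates for the best rank-one approximation).

        # Sketch: spectral norm of the complex-multiplication tensor
        import numpy as np

        # T[i, j, k] = coefficient of e_k in e_i * e_j for C = span{1, i}
        T = np.zeros((2, 2, 2))
        T[:, :, 0] = [[1, 0], [0, -1]]   # real part of the product
        T[:, :, 1] = [[0, 1], [1, 0]]    # imaginary part of the product

        rng = np.random.default_rng(0)
        u, v, w = (rng.standard_normal(2) for _ in range(3))
        for _ in range(200):
            u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
            v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
            w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)

        spectral = np.einsum('ijk,i,j,k->', T, u, v, w)
        frob = np.linalg.norm(T)
        print(abs(spectral) / frob)      # ~0.5 = 1/sqrt(2*2)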