Dictionary-based Tensor Canonical Polyadic Decomposition
To ensure interpretability of extracted sources in tensor decomposition, we
introduce in this paper a dictionary-based tensor canonical polyadic
decomposition which constrains one factor to belong exactly to a known
dictionary. A new formulation of sparse coding is proposed which enables
dictionary-based canonical polyadic decomposition of high-dimensional tensors.
The benefits of using a dictionary in tensor decomposition models are explored
both in terms of parameter identifiability and estimation accuracy. The
performance of the proposed algorithms is evaluated on the decomposition of
simulated data and the unmixing of hyperspectral images.
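The core idea can be illustrated numerically. The sketch below (not the paper's algorithm; all sizes, seeds, and names are invented for illustration) shows the dictionary-constrained factor update inside one CPD alternating-least-squares step: given the other two factors, the first factor is estimated by least squares and each of its columns is then snapped to its best-correlated dictionary atom.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a rank-2, 3-way tensor whose first-mode factor is
# built from two atoms of a known dictionary D.
D = rng.standard_normal((10, 6))          # dictionary: 6 atoms in R^10
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
A_true = D[:, [1, 4]]                     # ground-truth factor = exact atoms
B = rng.standard_normal((8, 2))           # other factors, assumed known here
C = rng.standard_normal((5, 2))
T = np.einsum('ir,jr,kr->ijk', A_true, B, C)   # rank-2 tensor

def khatri_rao(U, V):
    """Column-wise Kronecker product."""
    return np.einsum('jr,kr->jkr', U, V).reshape(-1, U.shape[1])

# Mode-0 unfolding: T0[i, j*K + k] = sum_r A[i,r] B[j,r] C[k,r]
T0 = T.reshape(T.shape[0], -1)

# Unconstrained least-squares update for the first factor (T0 ~ A @ KR.T) ...
KR = khatri_rao(B, C)
A_ls = T0 @ np.linalg.pinv(KR).T

# ... followed by snapping each column to the most correlated atom,
# keeping a least-squares scale for that atom
idx = np.argmax(np.abs(D.T @ A_ls) / np.linalg.norm(A_ls, axis=0), axis=0)
A_dict = D[:, idx] * np.sum(D[:, idx] * A_ls, axis=0)

print(idx)                                # selected atoms: [1 4]
print(np.linalg.norm(A_dict - A_true))    # distance to the true factor: ~0
```

Because the tensor is exactly low-rank here, the least-squares step already recovers the true factor and the atom-matching step identifies the correct dictionary columns; on noisy data this projection acts as the interpretability constraint.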
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker
decomposition, HOSVD, tensor networks, Tensor Train
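As a concrete instance of the Tucker-type models this survey covers, the following minimal numpy sketch computes the higher-order SVD (HOSVD) of a small third-order tensor: the factor matrices are the left singular vectors of each mode unfolding, and the core tensor is the original tensor multiplied by their transposes. All sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))        # a generic third-order tensor

def unfold(X, mode):
    """Mode-n unfolding: move the given mode to the front and flatten."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

# Factor matrices: left singular vectors of the three mode unfoldings
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]

# Core tensor: G = T x_1 U1^T x_2 U2^T x_3 U3^T
G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# Untruncated HOSVD reconstructs the tensor exactly
T_hat = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
print(np.linalg.norm(T_hat - T))          # ~0
```

Truncating the columns of each factor matrix turns this into the low-multilinear-rank approximation used in practice for compression.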
On orthogonal tensors and best rank-one approximation ratio
As is well known, the smallest possible ratio between the spectral norm and
the Frobenius norm of an $m \times n$ matrix with $m \le n$ is $1/\sqrt{m}$ and
is (up to scalar scaling) attained only by matrices having pairwise orthonormal
rows. In the present paper, the smallest possible ratio between spectral and
Frobenius norms of $n_1 \times \dots \times n_d$ tensors of order $d$, also
called the best rank-one approximation ratio in the literature, is
investigated. The exact value is not known for most configurations of
$n_1 \le \dots \le n_d$. Using a natural definition of orthogonal tensors over the real
field (resp., unitary tensors over the complex field), it is shown that the
obvious lower bound $1/\sqrt{n_1 \cdots n_{d-1}}$ is attained if and only if a
tensor is orthogonal (resp., unitary) up to scaling. Whether or not orthogonal
or unitary tensors exist depends on the dimensions $n_1, \dots, n_d$ and the
field. A connection between the (non)existence of real orthogonal tensors of
order three and the classical Hurwitz problem on composition algebras can be
established: existence of orthogonal tensors of size $\ell \times m \times n$
is equivalent to the admissibility of the triple $(\ell, m, n)$ to the Hurwitz
problem. Some implications for higher-order tensors are then given. For
instance, real orthogonal $n \times \dots \times n$ tensors of order $d \ge 3$
do exist, but only when $n \in \{1, 2, 4, 8\}$. In the complex case, the situation is
more drastic: unitary tensors of size $\ell \times m \times n$ with $\ell \le m \le n$
exist only when $\ell m \le n$. Finally, some numerical illustrations
for spectral norm computation are presented.
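The matrix fact quoted at the start of this abstract is easy to verify numerically: a matrix with pairwise orthonormal rows has all singular values equal, so its spectral/Frobenius ratio is exactly $1/\sqrt{m}$, while a generic matrix has a strictly larger ratio. A small check (sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 7

# Build an m x n matrix with orthonormal rows via QR of a random n x m matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
A = Q.T                                   # rows are pairwise orthonormal

# Spectral norm (largest singular value) over Frobenius norm
ratio = np.linalg.norm(A, 2) / np.linalg.norm(A, 'fro')
print(ratio, 1 / np.sqrt(m))              # the two agree: 1/sqrt(3)

# A generic random matrix has unequal singular values, hence a larger ratio
B = rng.standard_normal((m, n))
print(np.linalg.norm(B, 2) / np.linalg.norm(B, 'fro') > ratio)   # True
```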
- …