Overview of Constrained PARAFAC Models
In this paper, we present an overview of constrained PARAFAC models in which
the constraints model linear dependencies among the columns of the factor
matrices of the tensor decomposition or, alternatively, the pattern of
interactions between different modes of the tensor, which is captured by the
equivalent core tensor.
Some tensor prerequisites are first introduced, with a particular emphasis on
mode combination using Kronecker products of canonical vectors, which
simplifies matricization operations. This Kronecker-product-based approach is
also formulated in terms of index notation, which provides an original and
concise formalism both for matricizing tensors and for writing tensor models.
Then,
after a brief reminder of PARAFAC and Tucker models, two families of
constrained tensor models, the so-called PARALIND/CONFAC and PARATUCK models,
are described in a unified framework, for Nth-order tensors. New tensor
models, called nested Tucker models and block PARALIND/CONFAC models, are also
introduced. A link between PARATUCK models and constrained PARAFAC models is
then established. Finally, new uniqueness properties of PARATUCK models are
deduced from sufficient conditions for essential uniqueness of their associated
constrained PARAFAC models.
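As a concrete illustration of the mode-combination idea emphasized above, the sketch below (hypothetical NumPy code, not taken from the paper) builds a 3rd-order PARAFAC tensor from factor matrices A, B, C and verifies the standard matricization identity X_(1) = A(C ⊙ B)^T, where ⊙ is the Khatri-Rao (column-wise Kronecker) product; sizes and names are illustrative.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: column r is kron(U[:, r], V[:, r])."""
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))

# PARAFAC tensor: X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Mode-1 unfolding, columns indexed by (k, j) with j varying fastest
X1 = X.transpose(0, 2, 1).reshape(I, K * J)

# Standard identity: X_(1) = A @ (C ⊙ B)^T
assert np.allclose(X1, A @ khatri_rao(C, B).T)
```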
Blind Multilinear Identification
We discuss a technique that allows blind recovery of signals or blind
identification of mixtures in instances where such recovery or identification
were previously thought to be impossible: (i) closely located or highly
correlated sources in antenna array processing, (ii) highly correlated
spreading codes in CDMA radio communication, (iii) nearly dependent spectra in
fluorescent spectroscopy. This has important implications: in the case of
antenna array processing, it allows for joint localization and extraction of
multiple sources from the measurement of a noisy mixture recorded on multiple
sensors in an entirely deterministic manner. In the case of CDMA, it allows the
possibility of having a number of users larger than the spreading gain. In the
case of fluorescent spectroscopy, it allows for detection of nearly identical
chemical constituents. The proposed technique involves the solution of a
bounded coherence low-rank multilinear approximation problem. We show that
bounded coherence allows us to establish existence and uniqueness of the
recovered solution. We will provide some statistical motivation for the
approximation problem and discuss greedy approximation bounds. To provide the
theoretical underpinnings for this technique, we develop a corresponding theory
of sparse separable decompositions of functions, including notions of rank and
nuclear norm that specialize to the usual ones for matrices and operators but
apply also to hypermatrices and tensors.
Comment: 20 pages, to appear in IEEE Transactions on Information Theory
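The coherence invoked here is the usual mutual coherence: the largest absolute inner product between distinct unit-normalized columns of a factor matrix. A minimal sketch (illustrative code, not the paper's) for computing it:

```python
import numpy as np

def coherence(A, tol=1e-12):
    """Mutual coherence: max |<a_i, a_j>| over distinct unit-normalized
    columns; 0 for orthogonal columns, 1 for parallel ones."""
    norms = np.linalg.norm(A, axis=0)
    Q = A / np.maximum(norms, tol)   # normalize columns
    G = np.abs(Q.T @ Q)              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)         # ignore self-inner-products
    return G.max()

# Highly correlated sources (the regime targeted above) give coherence
# close to 1; uniqueness can still hold when the factor coherences are
# jointly small enough (see the paper's bounds for the exact condition).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
print(coherence(A))
```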
Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
In the framework of multidimensional Compressed Sensing (CS), we introduce an
analytical reconstruction formula that allows one to recover an Nth-order
data tensor from a reduced set of multi-way compressive measurements by
exploiting its low multilinear-rank structure. Moreover, we show that an
interesting property of
multi-way measurements allows us to build the reconstruction based on
compressive linear measurements taken only in two selected modes, independently
of the tensor order N. In addition, it is proved that, in the matrix case and
in a particular case of 3rd-order tensors where the same 2D sensing operator
is applied to all mode-3 slices, the proposed reconstruction X̂_τ
is stable in the sense that the approximation
error is comparable to the one provided by the best low-multilinear-rank
approximation, where τ is a threshold parameter that controls the
approximation error. Through the analysis of the upper bound of the
approximation error, we show that, in the 2D case, an optimal value for the
threshold parameter τ exists, which is confirmed by our
simulation results. On the other hand, our experiments on 3D datasets show that
very good reconstructions are obtained using τ = 0, which means that this
parameter does not need to be tuned. Our extensive simulation results
demonstrate the stability and robustness of the method when it is applied to
real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based
CS methods specialized for multidimensional signals is also included. A
very attractive characteristic of the proposed method is that it provides a
direct computation, i.e., it is non-iterative, in contrast to all existing
sparsity-based CS algorithms, thus providing super fast computations, even for
large datasets.
Comment: Submitted to IEEE Transactions on Signal Processing
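To make the two-mode idea concrete, here is a hypothetical NumPy sketch of the matrix (2D) special case mentioned above: a rank-R matrix is recovered exactly, and non-iteratively, from sketches of its rows and columns plus a doubly compressed core. All names and sizes are illustrative assumptions; the paper's general Nth-order formula and τ-thresholded variant are given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 60, 5, 12               # ambient size, rank, measurements per mode

# Rank-r ground truth (a 2nd-order tensor)
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Random sensing matrices, one per mode
P1 = rng.standard_normal((m, n))
P2 = rng.standard_normal((m, n))

# Multi-way compressive measurements
Y1 = P1 @ X                       # mode-1 sketch (rows compressed)
Y2 = X @ P2.T                     # mode-2 sketch (columns compressed)
W = P1 @ X @ P2.T                 # core: both modes compressed

# Direct, non-iterative reconstruction: X_hat = Y2 W^+ Y1
X_hat = Y2 @ np.linalg.pinv(W) @ Y1
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))   # ~0 up to roundoff
```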
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
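For readers who want a hands-on feel for the TT format emphasized above, here is a minimal, self-contained sketch of the standard TT-SVD algorithm (variable names and the truncation tolerance are illustrative, not taken from the monograph).

```python
import numpy as np

def tt_svd(X, eps=1e-10):
    """Decompose a dense tensor into tensor-train (TT) cores by
    sequential truncated SVDs of its unfoldings."""
    dims = X.shape
    cores, r_prev = [], 1
    M = X.reshape(dims[0], -1)
    for n in dims[:-1]:
        M = M.reshape(r_prev * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # truncation rank
        cores.append(U[:, :r].reshape(r_prev, n, r))
        M = s[:r, None] * Vt[:r]                  # carry the remainder
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a dense tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(T.ndim - 1, 0))
    return T[0, ..., 0]   # drop the boundary ranks of size 1

# Round-trip check on a random rank-1 (hence low-TT-rank) 4th-order tensor
rng = np.random.default_rng(0)
X = np.einsum('i,j,k,l->ijkl', *(rng.standard_normal(5) for _ in range(4)))
cores = tt_svd(X)
print([G.shape for G in cores])                   # all TT ranks are 1 here
print(np.linalg.norm(tt_to_full(cores) - X))      # ~0
```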