Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
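The Canonical Polyadic model mentioned above expresses a tensor through factor matrices, and fitting it is a sequence of linear least-squares problems because the model is multilinear: fixing all factors but one makes the remaining one enter linearly, which is the "essentially polynomial" structure the abstract alludes to. As a hedged illustration, the following minimal NumPy sketch (our own naming, not the authors' code) fits a CPD to a 3-way array by alternating least squares, the workhorse algorithm in this literature.

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest (C order).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker product; row (i*V.shape[0] + j) holds U[i, :] * V[j, :].
    r = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    # Alternating least squares for a rank-`rank` CPD of a 3-way array.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# Sanity check on a synthetic rank-3 tensor: the relative error is typically tiny.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((s, 3)) for s in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))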
Tensor network states in time-bin quantum optics
The current shift in the quantum optics community towards large-size
experiments -- with many modes and photons -- necessitates new classical
simulation techniques that go beyond the usual phase space formulation of
quantum mechanics. To address this pressing demand we formulate linear quantum
optics in the language of tensor network states. As a toy model, we extensively
analyze the quantum and classical correlations of time-bin interference in a
single fiber loop. We then generalize our results to more complex time-bin
quantum setups and identify different classes of architectures for
high-complexity and low-overhead boson sampling experiments.
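As a hedged illustration of the simulation primitive such tensor-network treatments rest on, the NumPy sketch below applies a two-mode gate to a matrix product state and re-splits it with a truncated SVD. It is a toy: two levels per time bin and a 50/50 coupler acting only on the single-excitation subspace (identity elsewhere), whereas the paper works with full bosonic Fock spaces. All names are illustrative.

import numpy as np

def product_mps(states):
    # Bond-dimension-1 MPS for a product state; each entry is a length-d vector.
    return [s.reshape(1, -1, 1) for s in states]

def apply_two_site_gate(mps, gate, i, max_bond=16):
    # Apply a (d*d2, d*d2) unitary to modes i, i+1; combined index is p = a*d2 + b.
    A, B = mps[i], mps[i + 1]
    rl, d, _ = A.shape
    _, d2, rr = B.shape
    theta = np.einsum('aib,bjc->aijc', A, B).reshape(rl, d * d2, rr)
    theta = np.einsum('pq,aqc->apc', gate, theta).reshape(rl * d, d2 * rr)
    u, s, vt = np.linalg.svd(theta, full_matrices=False)
    k = max(1, min(max_bond, int(np.sum(s > 1e-12))))  # truncate the new bond
    mps[i] = u[:, :k].reshape(rl, d, k)
    mps[i + 1] = (s[:k, None] * vt[:k]).reshape(k, d2, rr)

def to_statevector(mps):
    # Contract the chain into a dense state vector (exponential cost; checks only).
    psi = mps[0]
    for core in mps[1:]:
        a, b = psi.shape[0], psi.shape[-1]
        psi = np.einsum('amb,bjc->amjc', psi.reshape(a, -1, b), core)
    return psi.reshape(-1)

# One photon in the first of two time bins, then a toy 50/50 coupler on the
# single-excitation subspace (a genuine beamsplitter needs a higher Fock cutoff).
t = 1 / np.sqrt(2)
bs = np.eye(4)
bs[1:3, 1:3] = [[t, t], [-t, t]]
mps = product_mps([np.array([0.0, 1.0]), np.array([1.0, 0.0])])  # |1>|0>
apply_two_site_gate(mps, bs, 0)
print(to_statevector(mps))  # amplitudes ~ [0, 0.707, 0.707, 0]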
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
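For reference, the tensor train format emphasized above can be computed for a dense array by sequential SVDs (the TT-SVD algorithm). The NumPy sketch below, with names of our choosing, is a minimal illustration of how an N-way array is compressed into N three-way cores whose storage grows linearly rather than exponentially in N, which is the sense in which the curse of dimensionality is alleviated.

import numpy as np

def tt_svd(T, eps=1e-10):
    # Sequential-SVD compression into TT cores of shape (r_{n-1}, d_n, r_n);
    # ranks are set by a relative singular-value cutoff.
    shape = T.shape
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for n in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(M, full_matrices=False)
        k = max(1, int(np.sum(s > eps * s[0])))
        cores.append(u[:, :k].reshape(r, shape[n], k))
        M = (s[:k, None] * vt[:k]).reshape(k * shape[n + 1], -1)
        r = k
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_to_tensor(cores):
    # Contract the cores back into a dense array (for verification only).
    psi = cores[0]
    for core in cores[1:]:
        a, b = psi.shape[0], psi.shape[-1]
        psi = np.einsum('amb,bjc->amjc', psi.reshape(a, -1, b), core)
    return psi.reshape([c.shape[1] for c in cores])

# A 6-way array with TT rank 2 is recovered losslessly; storage drops from
# 4**6 = 4096 entries to the sum of the small core sizes.
rng = np.random.default_rng(0)
G = [rng.standard_normal((1, 4, 2))] \
    + [rng.standard_normal((2, 4, 2)) for _ in range(4)] \
    + [rng.standard_normal((2, 4, 1))]
T = tt_to_tensor(G)
print(np.allclose(tt_to_tensor(tt_svd(T)), T))  # True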
Scalable Tensor Factorizations for Incomplete Data
The problem of incomplete data - i.e., data with missing or unknown values -
in multi-way arrays is ubiquitous in biomedical signal processing, network
traffic analysis, bibliometrics, social network analysis, chemometrics,
computer vision, communication networks, etc. We consider the problem of how to
factorize data sets with missing values with the goal of capturing the
underlying latent structure of the data and possibly reconstructing missing
values (i.e., tensor completion). We focus on one of the most well-known tensor
factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In
the presence of missing data, CP can be formulated as a weighted least squares
problem that models only the known entries. We develop an algorithm called
CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization
approach to solve the weighted least squares problem. Based on extensive
numerical experiments, our algorithm is shown to successfully factorize tensors
with noise and up to 99% missing data. A unique aspect of our approach is that
it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five
million known entries (0.5% dense). We further demonstrate the usefulness of
CP-WOPT on two real-world applications: a novel EEG (electroencephalogram)
application where missing data is frequently encountered due to disconnections
of electrodes and the problem of modeling computer network traffic where data
may be absent due to the expense of the data collection process.
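The weighted least-squares formulation is simple to state: with a binary mask W over the observed entries, minimize f(A, B, C) = 0.5 * ||W * (X - [[A, B, C]])||_F^2. The sketch below is a hypothetical dense re-implementation of that objective, not the authors' CP-WOPT code (which additionally exploits sparsity to reach the scales quoted above); it evaluates the loss and its exact gradients and hands them to an off-the-shelf first-order solver.

import numpy as np
from scipy.optimize import minimize

def cp_wopt_objective(z, X, W, shape, rank):
    # f = 0.5 * ||W * (X - [[A, B, C]])||_F^2 and its gradient, with the three
    # factor matrices packed into one vector as a first-order solver expects.
    I, J, K = shape
    A = z[:I * rank].reshape(I, rank)
    B = z[I * rank:(I + J) * rank].reshape(J, rank)
    C = z[(I + J) * rank:].reshape(K, rank)
    R = W * (np.einsum('ir,jr,kr->ijk', A, B, C) - X)  # residual on observed entries
    gA = np.einsum('ijk,jr,kr->ir', R, B, C)
    gB = np.einsum('ijk,ir,kr->jr', R, A, C)
    gC = np.einsum('ijk,ir,jr->kr', R, A, B)
    return 0.5 * np.sum(R * R), np.concatenate([gA.ravel(), gB.ravel(), gC.ravel()])

# Synthetic test: a rank-3 tensor with 70% of its entries missing.
rng = np.random.default_rng(0)
shape, rank = (15, 15, 15), 3
A0, B0, C0 = (rng.standard_normal((s, rank)) for s in shape)
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
W = (rng.random(shape) < 0.3).astype(float)  # 1 = observed, 0 = missing
X = X * W                                    # unobserved entries play no role
z0 = 0.1 * rng.standard_normal(sum(shape) * rank)
res = minimize(cp_wopt_objective, z0, args=(X, W, shape, rank),
               jac=True, method='L-BFGS-B')
print(res.fun)  # misfit on the observed entries; small at a good solution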
Measuring stochastic gravitational-wave energy beyond general relativity
Gravity theories beyond general relativity (GR) can change the properties of
gravitational waves: their polarizations, dispersion, speed, and, importantly,
energy content are all heavily theory-dependent. All these corrections can
potentially be probed by measuring the stochastic gravitational-wave
background. However, most existing treatments of this background beyond GR
overlook modifications to the energy carried by gravitational waves, or rely on
GR assumptions that are invalid in other theories. This may lead to
mistranslation between the observable cross-correlation of detector outputs and
gravitational-wave energy density, and thus to errors when deriving
observational constraints on theories. In this article, we lay out a generic
formalism for stochastic gravitational-wave searches, applicable to a large
family of theories beyond GR. We explicitly state the (often tacit) assumptions
that go into these searches, evaluating their generic applicability, or lack
thereof. Examples of problematic assumptions are: statistical independence of
linear polarization amplitudes; which polarizations satisfy equipartition; and
which polarizations have well-defined phase velocities. We also show how to
correctly infer the value of the stochastic energy density in the context of
any given theory. We demonstrate with specific theories in which some of the
traditional assumptions break down: Chern-Simons gravity, scalar-tensor theory,
and Fierz-Pauli massive gravity. In each theory, we show how to properly
include the beyond-GR corrections, and how to interpret observational results.
Comment: 18 pages (plus appendices), 1 figure
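For orientation, the GR baseline being generalized is (in units with c = 1) the Isaacson effective energy density of gravitational waves together with the fractional spectral density that stochastic searches report:

\[
  \rho_{\rm GW} \;=\; \frac{1}{32\pi G}\,\big\langle \dot h_{ij}\,\dot h^{ij} \big\rangle,
  \qquad
  \Omega_{\rm GW}(f) \;=\; \frac{1}{\rho_c}\,\frac{\mathrm{d}\rho_{\rm GW}}{\mathrm{d}\ln f},
  \qquad
  \rho_c \;=\; \frac{3 H_0^2}{8\pi G}.
\]

The first relation is precisely what fails to hold in general beyond GR, which is why translating a measured cross-correlation of detector outputs into an energy density becomes theory-dependent.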
Exploring multimodal data fusion through joint decompositions with flexible couplings
A Bayesian framework is proposed to define flexible coupling models for joint
tensor decompositions of multiple data sets. Under this framework, a natural
formulation of the data fusion problem is to cast it in terms of a joint
maximum a posteriori (MAP) estimator. Data driven scenarios of joint posterior
distributions are provided, including general Gaussian priors and non-Gaussian
coupling priors. We present and discuss implementation issues of algorithms
used to obtain the joint MAP estimator. We also show how this framework can be
adapted to tackle the problem of joint decompositions of large datasets. In the
case of a conditional Gaussian coupling with a linear transformation, we give
theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao
bound. Simulations are reported for hybrid coupling models ranging from simple
additive Gaussian models, to Gamma-type models with positive variables and to
the coupling of data sets which are inherently of different size due to
different resolution of the measurement devices.
Comment: 15 pages, 7 figures, revised version
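To make the setup concrete, a generic instance of such a joint MAP estimator (our illustrative notation, not the paper's) couples two CP models through a Gaussian prior on a linear relation between shared factors:

\[
  \min_{\{A_i, B_i, C_i\}}\;
  \sum_{i=1}^{2} \frac{1}{2\sigma_i^2}\,
     \big\| \mathcal{X}_i - [\![ A_i, B_i, C_i ]\!] \big\|_F^2
  \;+\; \frac{1}{2\sigma_c^2}\,\big\| C_1 - H\, C_2 \big\|_F^2,
\]

where H is the known linear transformation (for instance, a resampling operator when the two data sets have different resolutions). The quadratic coupling term is the "conditional Gaussian coupling with a linear transformation" for which the Bayesian Cramér-Rao bound mentioned above is derived.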