Scalable Tensor Factorizations for Incomplete Data
The problem of incomplete data - i.e., data with missing or unknown values -
in multi-way arrays is ubiquitous in biomedical signal processing, network
traffic analysis, bibliometrics, social network analysis, chemometrics,
computer vision, communication networks, etc. We consider the problem of how to
factorize data sets with missing values with the goal of capturing the
underlying latent structure of the data and possibly reconstructing missing
values (i.e., tensor completion). We focus on one of the most well-known tensor
factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In
the presence of missing data, CP can be formulated as a weighted least squares
problem that models only the known entries. We develop an algorithm called
CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization
approach to solve the weighted least squares problem. Based on extensive
numerical experiments, our algorithm is shown to successfully factorize tensors
with noise and up to 99% missing data. A unique aspect of our approach is that
it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five
million known entries (0.5% dense). We further demonstrate the usefulness of
CP-WOPT on two real-world applications: a novel EEG (electroencephalogram)
application where missing data is frequently encountered due to disconnections
of electrodes and the problem of modeling computer network traffic where data
may be absent due to the expense of the data collection process.
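The weighted formulation above can be sketched in a few lines: the objective is f(A, B, C) = ½‖W ∗ (X − [[A, B, C]])‖², where W is a binary mask over known entries, minimized with a first-order method. This is a minimal plain-gradient-descent sketch of that idea (the paper itself uses a more sophisticated first-order optimizer); the function names and step size are illustrative, not from the paper.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # Rank-R CP model: X_hat[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_wopt_sketch(X, W, rank, iters=500, lr=1e-2, seed=0):
    """Gradient descent on the weighted least-squares objective
    f = 0.5 * ||W * (X - [[A,B,C]])||_F^2, modeling only the known
    entries (W[i,j,k] = 1 if X[i,j,k] is observed, else 0).
    An illustrative sketch, not the paper's actual optimizer."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        # Residual restricted to known entries only
        R = W * (cp_reconstruct(A, B, C) - X)
        # Gradients of f with respect to each factor matrix
        gA = np.einsum('ijk,jr,kr->ir', R, B, C)
        gB = np.einsum('ijk,ir,kr->jr', R, A, C)
        gC = np.einsum('ijk,ir,jr->kr', R, A, B)
        A -= lr * gA
        B -= lr * gB
        C -= lr * gC
    return A, B, C
```

Once factors are fitted from the known entries alone, evaluating the CP model at the masked positions gives the tensor-completion estimate.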
A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms
We propose a constructive algorithm that decomposes an arbitrary real tensor
into a finite sum of orthonormal rank-1 outer products. The algorithm, named
TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1)
series via the singular value decomposition (SVD). TTr1SVD naturally
generalizes the SVD to the tensor regime with properties such as uniqueness for
a fixed order of indices, orthogonal rank-1 outer product terms, and easy
truncation error quantification. Using an outer product column table it also
allows, for the first time, a complete characterization of all tensors
orthogonal with the original tensor. Incidentally, this leads to a strikingly
simple constructive proof showing that the maximum rank of a real 2x2x2 tensor over the real field is 3. We also derive a conversion of the
TTr1 decomposition into a Tucker decomposition with a sparse core tensor.
Numerical examples illustrate each of the favorable properties of the TTr1
decomposition.

Comment: Added subsection on orthogonal complement tensors. Added constructive proof of the maximal CP-rank of a 2x2x2 tensor. Added perturbation of singular values result. Added conversion of the TTr1 decomposition to the Tucker decomposition. Added example that demonstrates how the rank behaves when subtracting rank-1 terms. Added example with exponentially decaying singular values.
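The TTr1 construction described above can be sketched for a 3-way tensor: unfold the tensor to a matrix, take an SVD, then reshape each right singular vector into a matrix and take an SVD again; every leaf of this tree is one rank-1 outer-product term weighted by a product of singular values. This is a minimal sketch of that recursion under one fixed index order; function names are illustrative, not from the paper.

```python
import numpy as np

def ttr1_terms(X):
    """TTr1 series of a 3-way tensor via repeated SVDs (sketch).
    Unfold X to an I x (J*K) matrix, SVD it, then SVD each right
    singular vector reshaped to J x K; each leaf yields one rank-1
    outer-product term with weight = product of singular values."""
    I, J, K = X.shape
    U, s, Vt = np.linalg.svd(X.reshape(I, J * K), full_matrices=False)
    terms = []
    for i in range(len(s)):
        Vi = Vt[i].reshape(J, K)  # reshape v_i back into a J x K slice
        Ui, si, Vit = np.linalg.svd(Vi, full_matrices=False)
        for j in range(len(si)):
            weight = s[i] * si[j]  # product of singular values down the tree
            terms.append((weight, U[:, i], Ui[:, j], Vit[j]))
    return terms

def ttr1_reconstruct(terms, shape):
    # Summing all weighted rank-1 terms recovers the original tensor exactly
    X = np.zeros(shape)
    for w, a, b, c in terms:
        X += w * np.einsum('i,j,k->ijk', a, b, c)
    return X
```

Truncating the series by dropping small weights gives the easy error quantification the abstract mentions: the discarded weights bound the truncation error, as with the matrix SVD.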
Tensor and Matrix Inversions with Applications
Higher-order tensor inversion is possible for even order. We show that a tensor group endowed with the Einstein (contracted) product is isomorphic to a general linear group of the corresponding degree. With this isomorphic group structure, we derive new tensor decompositions and show that they are related to the well-known canonical polyadic decomposition and the multilinear SVD. Moreover, within this group-structure framework, we derive multilinear systems, specifically for solving high-dimensional PDEs and large discrete quantum
models. We also address, in the least-squares sense, multilinear systems that do not fit the framework, that is, when the tensor has an odd number of modes or distinct dimensions in its modes. With the notion of tensor inversion, such multilinear systems become solvable. Numerically, we solve multilinear systems using iterative techniques, namely the biconjugate gradient and Jacobi methods in tensor format.
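The isomorphism underlying even-order tensor inversion can be sketched concretely: for a 4-way tensor, the Einstein product contracts the trailing pair of modes of one tensor against the leading pair of the other, which corresponds exactly to matrix multiplication after flattening each mode pair into a single index. This is a minimal sketch of that correspondence; the function names are illustrative, not from the paper.

```python
import numpy as np

def einstein_product(A, B):
    # Einstein (contracted) product of two 4-way tensors:
    # C[i,j,k,l] = sum_{p,q} A[i,j,p,q] * B[p,q,k,l]
    return np.einsum('ijpq,pqkl->ijkl', A, B)

def tensor_inverse(A):
    """Invert an even-order tensor of shape (I, J, I, J) via the
    group isomorphism: flatten A to an (I*J) x (I*J) matrix, invert
    that matrix, and fold the result back into a 4-way tensor.
    einstein_product(A, tensor_inverse(A)) is then the identity
    tensor, i.e. the folded identity matrix."""
    I, J = A.shape[:2]
    M = A.reshape(I * J, I * J)      # isomorphic matrix image of A
    return np.linalg.inv(M).reshape(I, J, I, J)
```

Because the flattening is a group isomorphism, any matrix technique (inversion, factorization, iterative solvers) transfers directly to even-order tensors under the Einstein product, which is what makes the multilinear systems in the abstract solvable.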