Tensor and Matrix Inversions with Applications
Higher-order tensor inversion is possible for even-order tensors. We have shown that
a tensor group endowed with the Einstein (contracted) product is isomorphic to
a general linear group. With the isomorphic group structures, we derived new
tensor decompositions, which we have shown to be related to the well-known
canonical polyadic decomposition and the multilinear SVD. Moreover, within this
group-structure framework, multilinear systems are derived, specifically for
solving high-dimensional PDEs and large discrete quantum models. We also
address, in the least-squares sense, multilinear systems that do not fit the
framework, that is, when the tensor has an odd number of modes or distinct
dimensions in each mode. With the notion of tensor inversion, multilinear
systems become solvable. Numerically, we solve multilinear systems using
iterative techniques, namely the biconjugate gradient and Jacobi methods in
tensor format.
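As a concrete illustration of the framework above, here is a minimal NumPy sketch (sizes and variable names are ours, not the authors'): it forms the Einstein product of a fourth-order tensor with a matrix, and obtains the tensor inverse through the flattening that realizes the group isomorphism mentioned in the abstract.

```python
import numpy as np

# Einstein (contracted) product of an even-order tensor A with a matrix X:
# (A *_2 X)_{ij} = sum_{kl} A_{ijkl} X_{kl}.
n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n, n, n))
X = rng.standard_normal((n, n))
B = np.einsum('ijkl,kl->ij', A, X)

# The isomorphism: flattening A to an (n*n) x (n*n) matrix turns the Einstein
# product into ordinary matrix-vector multiplication, so the tensor inverse is
# the reshaped matrix inverse (a random A is invertible almost surely).
A_inv = np.linalg.inv(A.reshape(n * n, n * n)).reshape(n, n, n, n)

# Solve the multilinear system A *_2 X = B and verify the solution.
X_rec = np.einsum('ijkl,kl->ij', A_inv, B)
print(np.allclose(X_rec, X))  # True, up to round-off
```

For large systems one would of course avoid the explicit inverse and use iterative solvers such as the biconjugate gradient method in tensor format, as the abstract indicates; the reshape here only demonstrates the isomorphism.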
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike that of matrix methods, is guaranteed under very mild and
natural conditions. Benefiting from the power of multilinear algebra as their
mathematical backbone, data analysis techniques using tensor decompositions are
shown to have great flexibility in the choice of constraints that match data
properties, and to find more general latent components in the data than
matrix-based methods. A comprehensive introduction to tensor decompositions is
provided from a signal processing perspective, starting from the algebraic
foundations, via basic Canonical Polyadic and Tucker models, through to
advanced cause-effect and multi-view data analysis schemes. We show that tensor
decompositions enable natural generalizations of some commonly used signal
processing paradigms, such as canonical correlation and subspace techniques,
signal separation, linear regression, feature extraction, and classification.
We also cover computational aspects and point out how ideas from compressed
sensing and scientific computing may be used to address the otherwise
unmanageable storage and manipulation problems associated with big datasets.
The concepts are supported by illustrative real-world case studies illuminating
the benefits of the tensor framework, as efficient and promising tools for
modern signal processing, data analysis and machine learning applications;
these benefits also extend to vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
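To make the two basic models above concrete, the following NumPy sketch (illustrative dimensions and rank, not taken from the paper) builds a third-order tensor with exact CP structure and computes its multilinear SVD (HOSVD) from the mode-n unfoldings.

```python
import numpy as np

# Build a third-order tensor with an exact rank-R canonical polyadic (CP)
# structure: T_{ijk} = sum_r A_{ir} B_{jr} C_{kr}.
I, J, K, R = 6, 5, 4, 3
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

def unfold(T, mode):
    """Mode-n unfolding: the given mode indexes rows, the rest the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# HOSVD: factor matrices are the left singular vectors of each unfolding;
# the core tensor is T multiplied by U_n^T on every mode.
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]
G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# With untruncated orthonormal factors, the Tucker reconstruction is exact.
T_hat = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
print(np.allclose(T, T_hat))  # True
```

Truncating the columns of each U_n yields the low-multilinear-rank approximations that the computational-aspects discussion above alludes to.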
Overview of Constrained PARAFAC Models
In this paper, we present an overview of constrained PARAFAC models in which
the constraints model linear dependencies among columns of the factor matrices
of the tensor decomposition or, alternatively, the pattern of interactions
between different modes of the tensor, as captured by the equivalent core
tensor. Some tensor prerequisites, with a particular emphasis on mode
combination using Kronecker products of canonical vectors, which eases
matricization operations, are first introduced. This Kronecker-product-based
approach is also formulated in terms of the index notation, which provides an
original and concise formalism both for matricizing tensors and for writing
tensor models. Then, after a brief reminder of the PARAFAC and Tucker models,
two families of constrained tensor models, the so-called PARALIND/CONFAC and
PARATUCK models, are described in a unified framework for tensors of arbitrary
order. New tensor models, called nested Tucker models and block
PARALIND/CONFAC models, are also introduced. A link between PARATUCK models
and constrained PARAFAC models is then established. Finally, new uniqueness
properties of PARATUCK models are deduced from sufficient conditions for
essential uniqueness of their associated constrained PARAFAC models.
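The matricization machinery highlighted in this abstract can be made concrete for the plain PARAFAC case: each unfolding of the tensor factors through a Khatri-Rao (column-wise Kronecker) product of the remaining factor matrices. A minimal NumPy sketch, with sizes and the unfolding convention chosen for illustration:

```python
import numpy as np

# Third-order PARAFAC model: X_{ijk} = sum_r A_{ir} B_{jr} C_{kr}.
I, J, K, R = 4, 3, 5, 2
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A, B, C)

def khatri_rao(U, V):
    """Column-wise Kronecker product: (m x R) and (n x R) -> (m*n x R)."""
    m, r = U.shape
    n, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(m * n, r)

# Mode-1 unfolding (columns ordered with the second mode varying fastest)
# satisfies X_(1) = A (C ⊙ B)^T, where ⊙ is the Khatri-Rao product.
X1 = X.transpose(0, 2, 1).reshape(I, K * J)
print(np.allclose(X1, A @ khatri_rao(C, B).T))  # True
```

The constrained models surveyed in the paper replace the factor matrices by, roughly, products of a factor matrix with a constraint matrix; the unfolding identity then holds with these effective factors in place of A, B, C.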
On Degeneration of Tensors and Algebras
An important building block in all current asymptotically fast algorithms for matrix multiplication is tensors with low border rank, that is, tensors whose border rank is equal to or very close to their size. To find new asymptotically fast algorithms for matrix multiplication, it seems important to understand those tensors whose border rank is as small as possible, the so-called tensors of minimal border rank.
We investigate the connection between degenerations of associative algebras and degenerations of their structure tensors in the sense of Strassen. This connection allows us to describe an open subset of the n × n × n tensors of minimal border rank in terms of the smoothability of commutative algebras. We describe the smoothable algebra associated with the Coppersmith-Winograd tensor and prove a lower bound for the border rank of the tensor used in the "easy construction" of Coppersmith and Winograd.
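For reference, the notions used above can be stated precisely; these are the standard definitions and the classic example of a rank/border-rank gap, not material from the paper itself.

```latex
% Rank and border rank of a tensor T:
\[
  R(T) = \min\Bigl\{\, r : T = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i \,\Bigr\},
  \qquad
  \underline{R}(T) = \min\bigl\{\, r : T \in \overline{\{\, S : R(S) \le r \,\}} \,\bigr\},
\]
% with the closure taken in the Zariski (equivalently, Euclidean) topology.
% The gap can be strict: the tensor
\[
  W = e_1 \otimes e_1 \otimes e_2 + e_1 \otimes e_2 \otimes e_1
    + e_2 \otimes e_1 \otimes e_1
\]
% has rank 3 but border rank 2, since
\[
  W = \lim_{t \to 0} \tfrac{1}{t}\Bigl[ (e_1 + t\, e_2)^{\otimes 3} - e_1^{\otimes 3} \Bigr],
\]
% a limit of rank-2 tensors; such limits are exactly Strassen's degenerations.
```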
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e., the
rapidly scaling computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g., 3-D field-solver discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, a large number of design or
optimization parameters (e.g., full-chip routing/placement and circuit sizing),
or extensive process variations (e.g., variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high-dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms, which are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for efficiently storing and solving high-dimensional EDA problems. This
paper gives a basic tutorial on tensors, demonstrates some recent examples of
EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be advantageous.
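A back-of-the-envelope sketch of the curse of dimensionality the paper targets (parameter counts chosen purely for illustration): a full tensor over d parameters is hopeless to store, while a low-rank CP format keeps only a linear number of values and still exposes individual entries without ever forming the full array.

```python
import numpy as np

# d parameters, n grid points per axis, CP rank R (illustrative values).
d, n, R = 20, 10, 5
print(f"full tensor: {n**d:.1e} entries")    # 1.0e+20 -- intractable
print(f"rank-{R} CP:  {d * n * R} entries")  # 1000    -- trivial

rng = np.random.default_rng(0)
factors = [rng.standard_normal((n, R)) for _ in range(d)]  # CP factor matrices

def cp_entry(factors, index):
    """One entry of the implicit tensor: sum_r prod_k U_k[i_k, r]."""
    prod = np.ones(factors[0].shape[1])
    for U, i in zip(factors, index):
        prod *= U[i]
    return prod.sum()

print(cp_entry(factors, (0,) * d))  # an entry of a 10**20-entry tensor
```

Tensor-train and Tucker formats offer the same kind of compression with different trade-offs; which format suits a given EDA task is the kind of question the tutorial above is concerned with.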