Report on "Geometry and representation theory of tensors for computer science, statistics and other areas."
This is a technical report on the proceedings of the workshop held July 21 to
July 25, 2008 at the American Institute of Mathematics, Palo Alto, California,
organized by Joseph Landsberg, Lek-Heng Lim, Jason Morton, and Jerzy Weyman. We
include a list of open problems coming from applications in four areas:
signal processing, the Mulmuley-Sohoni approach to P vs. NP, matchgates and
holographic algorithms, and entanglement and quantum information theory. We
emphasize the interactions of geometry and representation theory with these
applied areas.
The condition number of join decompositions
The join set of a finite collection of smooth embedded submanifolds of a
common vector space is defined as their Minkowski sum. Join decompositions
generalize several ubiquitous decompositions in multilinear algebra, namely the
tensor rank, Waring, partially symmetric rank, and block term decompositions. This
paper examines the numerical sensitivity of join decompositions to
perturbations; specifically, we consider the condition number for general join
decompositions. It is characterized as a distance to a set of ill-posed points
in a supplementary product of Grassmannians. We prove that this condition
number can be computed efficiently as the smallest singular value of an
auxiliary matrix. For some special join sets, we characterize the behavior of
sequences in the join set that converge to points on its boundary. Finally,
we specialize our discussion to the tensor rank and Waring decompositions, and
we provide several numerical experiments confirming the key results.
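To make this characterization concrete, here is a minimal numerical sketch for
the tensor rank (CP) case. It assumes the convention that the condition number
of a decomposition is the reciprocal of the smallest singular value of a
Terracini matrix whose blocks are orthonormal bases of the tangent spaces at
the rank-1 terms; the function names are illustrative, not the paper's.

    import numpy as np

    def tangent_block(a, b, c):
        # Orthonormal basis of the tangent space to the cone of rank-1
        # tensors (the Segre variety) at a (x) b (x) c, as columns of an
        # (n1*n2*n3) x (n1 + n2 + n3 - 2) matrix; here
        # vec(a (x) b (x) c) = kron(a, kron(b, c)).
        U1 = np.kron(np.eye(len(a)), np.kron(b, c)[:, None])           # vary a
        U2 = np.kron(a[:, None], np.kron(np.eye(len(b)), c[:, None]))  # vary b
        U3 = np.kron(a[:, None], np.kron(b[:, None], np.eye(len(c))))  # vary c
        T = np.hstack([U1, U2, U3])
        # The three blocks overlap in the two scaling directions, so keep an
        # orthonormal basis of the (n1 + n2 + n3 - 2)-dim column space.
        Q = np.linalg.svd(T, full_matrices=False)[0]
        return Q[:, :len(a) + len(b) + len(c) - 2]

    def cp_condition_number(A, B, C):
        # Stack the tangent blocks of all rank-1 terms into a Terracini
        # matrix; the condition number is 1 / (its smallest singular value).
        blocks = [tangent_block(A[:, i], B[:, i], C[:, i])
                  for i in range(A.shape[1])]
        s = np.linalg.svd(np.hstack(blocks), compute_uv=False)
        return np.inf if s[-1] == 0.0 else 1.0 / s[-1]

    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((3, 2)) for _ in range(3))
    print(cp_condition_number(A, B, C))  # finite: a well-posed decomposition

A large value signals a decomposition close to the ill-posed locus, where small
perturbations of the tensor can move the decomposition arbitrarily far.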
Tensor decomposition and homotopy continuation
A classical but computationally challenging problem in elimination theory is to
compute polynomials that vanish on the set of tensors of a given rank. By
moving away from computing polynomials via elimination theory to computing
pseudowitness sets via numerical elimination theory, we develop computational
methods for computing ranks and border ranks of tensors along with
decompositions. More generally, we present our approach using joins of any
collection of irreducible and nondegenerate projective varieties
defined over the complex numbers. After computing
ranks over the complex numbers, we also explore computing real ranks. Various examples
are included to demonstrate this numerical algebraic geometric approach, among
them a Coppersmith-Winograd tensor and matrix multiplication with zeros.
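For reference, the standard definitions behind this setting (textbook material,
not quoted from the paper): for irreducible, nondegenerate projective varieties
$X_1,\dots,X_k \subseteq \mathbb{P}^N$, their join is

    \[
      J(X_1,\dots,X_k) \;=\; \overline{\bigcup_{x_i \in X_i} \langle x_1,\dots,x_k \rangle} .
    \]

Taking all $X_i$ equal to a single variety $X$, the $X$-rank of a point $p$ is
the smallest $r$ such that $p$ lies in the span of some $r$ points of $X$, and
the border rank of $p$ is the smallest $r$ with $p \in \sigma_r(X) = J(X,\dots,X)$
($r$ copies). Tensor rank and border rank are the case where $X$ is the Segre
variety of rank-one tensors; Waring rank is the case where $X$ is the Veronese
variety of powers of linear forms.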
A literature survey of low-rank tensor approximation techniques
In recent years, low-rank tensor approximation has become established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
Symmetric tensor decomposition
We present an algorithm for decomposing a symmetric tensor of dimension n
and order d as a sum of rank-1 symmetric tensors, extending the algorithm
devised by Sylvester in 1886 for binary forms. We recall the correspondence
between the decomposition of a homogeneous polynomial in n variables of total
degree d as a sum of powers of linear forms (Waring's problem), incidence
properties on secant varieties of the Veronese variety and the representation
of linear forms as a linear combination of evaluations at distinct points. Then
we reformulate Sylvester's approach from the dual point of view. Exploiting
this duality, we propose necessary and sufficient conditions for the existence
of such a decomposition of a given rank, using the properties of Hankel (and
quasi-Hankel) matrices, derived from multivariate polynomials and normal form
computations. This leads to the resolution of polynomial equations of small
degree in non-generic cases. We propose a new algorithm for symmetric tensor
decomposition, based on this characterization and on linear algebra
computations with these Hankel matrices. The impact of this contribution is
two-fold. First, it permits an efficient computation of the decomposition of any
tensor of sub-generic rank, in contrast to widely used iterative algorithms with
unproved global convergence (e.g., Alternating Least Squares or gradient
descent). Second, it gives tools for understanding uniqueness conditions and
for detecting the rank.
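As an illustration of the binary-form case that the paper extends, here is a
small numerical sketch of Sylvester's method; the function name and tolerances
are ours, and the non-generic branches (repeated roots, roots at infinity)
that call for the paper's quasi-Hankel and normal-form machinery are omitted.

    import numpy as np

    def sylvester_decompose(c, tol=1e-9):
        # Sketch of Sylvester's method for a binary form
        #   f(x, y) = sum_j binom(d, j) * c[j] * x**(d - j) * y**j.
        # If f = sum_i lam[i] * (x + t[i]*y)**d, then c[j] = sum_i lam[i]*t[i]**j,
        # so the Hankel matrix H[j, k] = c[j + k] annihilates the polynomial
        # prod_i (T - t[i]). Search for the smallest r whose Hankel kernel
        # yields a square-free kernel polynomial (the generic case).
        c = np.asarray(c, dtype=float)
        d = len(c) - 1
        for r in range(1, d // 2 + 2):
            H = np.array([[c[j + k] for k in range(r + 1)]
                          for j in range(d - r + 1)])
            U, s, Vt = np.linalg.svd(H)       # Vt is (r+1) x (r+1)
            if int((s > tol * s[0]).sum()) == r + 1:
                continue                       # no kernel: rank of f exceeds r
            q = Vt[-1]                         # kernel vector of H
            t = np.roots(q[::-1])              # roots of q[0] + q[1]*T + ... + q[r]*T^r
            if len(t) < r or len(set(np.round(t, 8))) < r:
                continue                       # non-generic kernel polynomial
            V = np.vander(t, d + 1, increasing=True).T   # V[j, i] = t[i]**j
            lam = np.linalg.lstsq(V, c.astype(complex), rcond=None)[0]
            return lam, t
        raise ValueError("no generic decomposition up to rank d//2 + 1")

    # f = x^3 + (x + y)^3 has c = [2, 1, 1, 1]:
    lam, t = sylvester_decompose([2, 1, 1, 1])
    print(lam, t)  # weights ~ {1, 1}, nodes ~ {0, 1}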
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
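To fix ideas, here is a minimal sketch of the tensor-train construction via a
sweep of truncated SVDs (the standard TT-SVD procedure, cf. Oseledets 2011; an
illustration under our own naming, not the monograph's implementation).

    import numpy as np

    def tt_svd(T, tol=1e-10):
        # Factor an order-N tensor into TT cores G_k of shape
        # (r_{k-1}, n_k, r_k) by a left-to-right sweep of truncated SVDs.
        dims = T.shape
        cores, r_prev, M = [], 1, T
        for n in dims[:-1]:
            M = M.reshape(r_prev * n, -1)
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            r = max(1, int((s > tol * s[0]).sum()))  # numerical truncation rank
            cores.append(U[:, :r].reshape(r_prev, n, r))
            M = s[:r, None] * Vt[:r]                 # push the remainder right
            r_prev = r
        cores.append(M.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_to_full(cores):
        # Contract the train back into a dense tensor (for verification).
        out = cores[0]
        for G in cores[1:]:
            out = np.tensordot(out, G, axes=([-1], [0]))
        return out.squeeze(axis=(0, -1))

    rng = np.random.default_rng(1)
    T = rng.standard_normal((4, 5, 6, 7))
    print(np.allclose(tt_to_full(tt_svd(T)), T))  # True: lossless round-trip

A train of cores stores sum_k r_{k-1}*n_k*r_k numbers instead of prod_k n_k,
which is the super-compression the abstract refers to whenever the TT ranks
r_k stay small; generic dense data, as in the check above, does not compress.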
Eigenvectors of tensors and algorithms for Waring decomposition
A Waring decomposition of a (homogeneous) polynomial f is a minimal sum of
powers of linear forms expressing f. Under certain conditions, such a
decomposition is unique. We discuss some algorithms to compute the Waring
decomposition, which are linked to the equations of certain secant varieties and
to eigenvectors of tensors. In particular, we explicitly decompose a general
cubic polynomial in three variables as the sum of five cubes (Sylvester's
pentahedral theorem).
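For orientation, a small closed-form Waring decomposition (a classical
identity, not taken from the paper): the monomial $xyz$ has Waring rank four,
witnessed by

    \[
      24\,xyz \;=\; (x+y+z)^3 + (x-y-z)^3 + (-x+y-z)^3 + (-x-y+z)^3 .
    \]

The pentahedral case is more delicate: there the general cubic admits a
decomposition into five cubes of linear forms that is essentially unique, which
is what makes its explicit computation meaningful.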