Multilinear Time Invariant System Theory
In biological and engineering systems, structure, function and dynamics are
highly coupled. Such interactions can be naturally and compactly captured via
tensor-based state-space dynamic representations. However, such representations
are not amenable to the standard systems and controls framework, which requires
the state to be a vector. To address this limitation, a new class of multiway
dynamical systems has recently been introduced in which
the states, inputs and outputs are tensors. We propose a new form of
multilinear time invariant (MLTI) systems based on the Einstein product and
even-order paired tensors. We extend classical linear time invariant (LTI)
system notions including stability, reachability and observability for the new
MLTI system representation by leveraging recent advances in tensor algebra.
Comment: 8 pages, SIAM Conference on Control and its Applications 2019, accepted to appear.
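For orientation, the sketch below shows how a tensor-valued state update of the form X_{t+1} = A *_2 X_t can be realized with the Einstein product as a plain tensor contraction in NumPy. The shapes, the helper name einstein_product, and the random data are illustrative assumptions for this listing, not code from the paper.

```python
# Minimal sketch (assumed shapes and names): one step of an autonomous MLTI
# system X_{t+1} = A *_2 X_t, where *_2 is the Einstein product contracting
# the last two modes of A with the two modes of the state tensor X.
import numpy as np

def einstein_product(A, X, k):
    """Contract the last k modes of A with the first k modes of X."""
    return np.tensordot(A, X, axes=k)

n1, n2 = 3, 4
A = 0.1 * np.random.randn(n1, n2, n1, n2)   # even-order paired system tensor
X = np.random.randn(n1, n2)                 # tensor-valued state
X_next = einstein_product(A, X, k=2)        # result has the same shape as X
print(X_next.shape)                         # (3, 4)
```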
A literature survey of low-rank tensor approximation techniques
In recent years, low-rank tensor approximation has been established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
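As a concrete illustration of the kind of technique such surveys cover, the following sketch computes a truncated higher-order SVD (a Tucker-format approximation) of a small dense tensor with plain NumPy. The tensor sizes and ranks are arbitrary assumptions; genuinely large-scale problems would instead rely on the structured formats (tensor train, hierarchical Tucker) treated in the literature.

```python
# Minimal truncated HOSVD sketch: mode-wise SVDs give Tucker factors, which
# are then applied to the original tensor to form the compressed core.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Return (core, factors) with factors[k] of shape (n_k, ranks[k])."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # contract mode `mode` of the core with U^T, then restore axis order
        core = np.tensordot(core, U.T, axes=([mode], [1]))
        core = np.moveaxis(core, -1, mode)
    return core, factors

T = np.random.rand(20, 30, 40)
core, factors = truncated_hosvd(T, ranks=(5, 5, 5))
print(core.shape)   # (5, 5, 5)
```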
Higher Spin Fields in Siegel Space, Currents and Theta Functions
Dynamics of four-dimensional massless fields of all spins is formulated in
the Siegel space of complex symmetric matrices. It is shown that
the unfolded equations of free massless fields, that have a form of
multidimensional Schrödinger equations, naturally distinguish between positive-
and negative-frequency solutions of relativistic field equations, i.e.
particles and antiparticles. Multidimensional Riemann theta functions are shown
to solve massless field equations in the Siegel space. We establish the
correspondence between conserved higher-spin currents in four-dimensional
Minkowski space and those in the ten-dimensional matrix space. It is shown that
global symmetry parameters of the current in the matrix space should be
singular to reproduce a nonzero current in Minkowski space. The D-function
integral evolution formulae for 4d massless fields in the Fock-Siegel space are
obtained. The generalization of the proposed scheme to higher dimensions and
systems of higher ranks is considered.Comment: LaTeX, 38 pages, v.3: clarifications, acknowledgements and references
added, typos corrected, v.4: more comments and references added, typos
corrected, the version to appear in JHE
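For reference, the multidimensional Riemann theta function mentioned above is the standard object defined on the Siegel upper half-space of complex symmetric matrices with positive-definite imaginary part; the precise role it plays in the paper's unfolded equations is specific to that construction.

\[
\Theta(z \mid \Omega) \;=\; \sum_{n \in \mathbb{Z}^{g}} \exp\!\bigl(i\pi\, n^{\mathsf T} \Omega\, n + 2\pi i\, n^{\mathsf T} z\bigr),
\qquad z \in \mathbb{C}^{g},\quad \Omega = \Omega^{\mathsf T},\quad \operatorname{Im}\Omega > 0 .
\]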
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages.
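To make the tensor-train (TT) format concrete, here is a minimal TT-SVD sketch in NumPy: a dense tensor is split into a chain of third-order cores by sequential truncated SVDs. The fixed rank cap and the small test tensor are assumptions for illustration; practical implementations add adaptive rank and error control along the lines discussed in the monograph.

```python
# Minimal TT-SVD sketch: sequentially unfold, truncate the SVD, and carry the
# remainder forward; each core G_k has shape (r_{k-1}, n_k, r_k).
import numpy as np

def tt_svd(T, max_rank):
    """Decompose a dense tensor into TT cores with ranks capped at max_rank."""
    shape = T.shape
    cores, r_prev = [], 1
    M = np.asarray(T)
    for n in shape[:-1]:
        M = M.reshape(r_prev * n, -1)            # unfold: current modes vs. rest
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, n, r))
        M = S[:r, None] * Vt[:r]                 # remainder carried to next step
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

T = np.random.rand(4, 5, 6, 7)
cores = tt_svd(T, max_rank=3)
print([c.shape for c in cores])   # [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]
```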
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
rapidly growing computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high-dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of matrices and vectors, and is a natural
choice for efficiently storing and solving high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be advantageous.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
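A back-of-the-envelope comparison (illustrative numbers, not taken from the paper) shows why a compressed tensor format can matter for, e.g., high-dimensional uncertainty quantification: storing a response on a full grid over d process parameters grows exponentially with d, whereas a tensor-train representation with modest ranks grows only linearly.

```python
# Illustrative storage comparison (assumed numbers): dense grid over d process
# parameters vs. a tensor-train representation with rank r.
d, s, r = 20, 10, 5                              # parameters, samples per parameter, TT rank
dense_entries = s ** d                           # 10**20 values: intractable to store
tt_entries = 2 * s * r + (d - 2) * s * r * r     # two boundary cores plus interior cores
print(f"dense grid: {dense_entries:.2e} values, tensor train: {tt_entries} values")
```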
On a Biparameter Maximal Multilinear Operator
It is well-known that estimates for maximal operators and questions of
pointwise convergence are strongly connected. In recent years, convergence
properties of so-called 'non-conventional ergodic averages' have been studied
by a number of authors, including Assani, Austin, Host, Kra, and Tao. In
particular, much is known regarding norm convergence of these averages, but
little is known about pointwise convergence. In this spirit, we consider the
pointwise convergence of a particular ergodic average and study the
corresponding maximal trilinear operator (over the real line, thanks to a
transference principle). Lacey and Demeter, Tao, and Thiele have studied
maximal multilinear operators previously; however, the maximal operator we
develop has a novel bi-parameter structure which has not been previously
encountered and cannot be estimated using their techniques. We will carve this
bi-parameter maximal multilinear operator using a certain Taylor series and
produce non-trivial Hölder-type estimates for one of the two "main" terms
by treating it as a singular integral whose symbol's singular set is similar
to that of the Biest operator studied by Muscalu, Tao, and Thiele.
Comment: 32 pages, 1 figure.
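As background for the connection invoked above, a typical non-conventional (Furstenberg-type) ergodic average and its associated maximal operator look as follows; this is a generic example for orientation, not necessarily the particular average or trilinear operator analyzed in the paper.

\[
A_N(f_1,f_2)(x) \;=\; \frac{1}{N}\sum_{n=1}^{N} f_1\!\left(T^{n}x\right) f_2\!\left(T^{2n}x\right),
\qquad
\mathcal{M}(f_1,f_2)(x) \;=\; \sup_{N \ge 1}\, \bigl|A_N(f_1,f_2)(x)\bigr| ,
\]

and pointwise convergence of the averages \(A_N\) follows, by standard arguments, from suitable bounds on the maximal operator \(\mathcal{M}\).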