Numerical Optimization for Symmetric Tensor Decomposition
We consider the problem of decomposing a real-valued symmetric tensor as the
sum of outer products of real-valued vectors. Algebraic methods exist for
computing complex-valued decompositions of symmetric tensors, but here we focus
on real-valued decompositions, both unconstrained and nonnegative, for problems
with low-rank structure. We discuss when solutions exist and how to formulate
the mathematical program. Numerical results show the properties of the proposed
formulations (including one that ignores symmetry) on a set of test problems
and illustrate that these straightforward formulations can be effective even
though the problem is nonconvex.
Very Large-Scale Singular Value Decomposition Using Tensor Train Networks
We propose new algorithms for singular value decomposition (SVD) of very
large-scale matrices based on a low-rank tensor approximation technique called
the tensor train (TT) format. The proposed algorithms can compute several
dominant singular values and corresponding singular vectors for large-scale
structured matrices given in a TT format. The computational complexity of the
proposed methods scales logarithmically with the matrix size under the
assumption that both the matrix and the singular vectors admit low-rank TT
decompositions. The proposed methods, which are called the alternating least
squares for SVD (ALS-SVD) and modified alternating least squares for SVD
(MALS-SVD), compute the left and right singular vectors approximately through
block TT decompositions. The very large-scale optimization problem is reduced
to sequential small-scale optimization problems, and each core tensor of the
block TT decompositions can be updated by applying any standard optimization
methods. The optimal ranks of the block TT decompositions are determined
adaptively during the iteration process, so that we can achieve high approximation
accuracy. Extensive numerical simulations are conducted for several types of
TT-structured matrices, such as the Hilbert matrix, Toeplitz matrices, random
matrices with prescribed singular values, and tridiagonal matrices. The
simulation results
demonstrate the effectiveness of the proposed methods compared with standard
SVD algorithms and TT-based algorithms developed for symmetric eigenvalue
decomposition.
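The abstract above assumes the matrix and singular vectors admit low-rank TT decompositions. The sketch below shows how a small dense tensor can be brought into TT format by a sweep of sequential truncated SVDs (the classical TT-SVD construction, not the ALS-SVD/MALS-SVD algorithms proposed in the paper); all names and the tolerance are illustrative assumptions.

```python
import numpy as np

def tt_decompose(X, eps=1e-12):
    """Left-to-right sweep of truncated SVDs; eps is a relative singular-value
    cutoff. Each returned core has shape (r_prev, dim_k, r_next)."""
    dims = X.shape
    cores, r = [], 1
    C = X.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))       # adaptive rank truncation
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        C = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense tensor."""
    X = cores[0]
    for G in cores[1:]:
        X = np.tensordot(X, G, axes=([X.ndim - 1], [0]))
    return X.reshape(X.shape[1:-1])

# round-trip check on a small dense tensor
X = np.random.default_rng(0).standard_normal((2, 3, 4))
Y = tt_reconstruct(tt_decompose(X))
```

When the TT ranks stay bounded, the cores store O(d · n · r²) numbers instead of n^d, which is the source of the logarithmic scaling in the matrix size claimed above.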
The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems
We present a compendium of numerical simulation techniques, based on tensor
network methods, aiming to address problems of many-body quantum mechanics on a
classical computer. The core setting of this anthology is lattice problems in
low spatial dimension at finite size, a physical scenario where tensor network
methods, both Density Matrix Renormalization Group and beyond, have long proven
to be winning strategies. Here we explore in detail the numerical frameworks
and methods employed to deal with low-dimensional physical setups, from a
computational physics perspective. We focus on symmetries and closed-system
simulations with arbitrary boundary conditions, while discussing the numerical
data structures and linear algebra manipulation routines involved, which form
the core libraries of any tensor network code. At a higher level, we put the
spotlight on loop-free network geometries, discussing their advantages, and
presenting in detail algorithms to simulate low-energy equilibrium states.
Accompanied by discussions of data structures, numerical techniques and
performance, this anthology serves as a programmer's companion, as well as a
self-contained introduction and review of the basic and selected advanced
concepts in tensor networks, including examples of their applications.
Comment: 115 pages, 56 figures
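The linear algebra manipulation routines the anthology describes as the core of any tensor network code boil down to contractions of three-index tensors. As a hedged illustration (not code from the anthology), the sketch below computes the norm of a small matrix product state both by the standard transfer-matrix contraction and by brute-force expansion into a full state vector; the shapes and names are assumptions for the demo.

```python
import numpy as np

def mps_norm_sq(cores):
    """<psi|psi> via left-to-right transfer-matrix contraction.
    Each core G has shape (r_left, phys, r_right)."""
    E = np.ones((1, 1))
    for G in cores:
        # absorb one site: contract E with the core and its conjugate
        E = np.einsum('ab,aic,bid->cd', E, G, G.conj())
    return float(E.real.squeeze())

def mps_to_vector(cores):
    """Brute-force expansion into the full 2^L-dimensional state vector."""
    psi = cores[0]                       # shape (1, phys, r)
    for G in cores[1:]:
        psi = np.tensordot(psi, G, axes=([psi.ndim - 1], [0]))
    return psi.reshape(-1)

# a random 3-site MPS of spin-1/2 sites with bond dimension 3
rng = np.random.default_rng(0)
shapes = [(1, 2, 3), (3, 2, 3), (3, 2, 1)]
cores = [rng.standard_normal(s) for s in shapes]
psi = mps_to_vector(cores)
```

The transfer-matrix route costs polynomially in the bond dimension per site, whereas the full expansion grows exponentially in the number of sites, which is why loop-free network contractions of this kind underpin the algorithms discussed above.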
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e., the
rapidly scaling computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high-dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for efficiently storing and solving high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and
Systems
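To make concrete why a tensor is "a natural choice for storing high-dimensional problems," here is a toy illustration (an assumption for exposition, not an example from the paper): a separable function sampled on a 3-D grid is exactly a rank-1 tensor, so its three 1-D factors reproduce any entry of the full cube at a tiny fraction of the storage.

```python
import numpy as np

n = 64
g = np.linspace(0.0, 1.0, n)

# f(x, y, z) = sin(x) * cos(y) * exp(z) is separable, hence rank-1 on the grid
factors = [np.sin(g), np.cos(g), np.exp(g)]
full = np.einsum('i,j,k->ijk', *factors)       # dense cube: n**3 numbers
compressed = sum(f.size for f in factors)       # factored form: 3*n numbers

# any entry is recovered from the factors alone, without forming `full`
i, j, k = 5, 17, 40
entry = factors[0][i] * factors[1][j] * factors[2][k]
```

Here the dense cube holds 64³ = 262,144 values while the factors hold 192; realistic EDA quantities are not exactly rank-1, but low-rank tensor formats exploit the same structure approximately.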