Tensors: a Brief Introduction
Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor decomposition plays a central role in the identification of underdetermined mixtures. Despite some similarities, the CP decomposition and the Singular Value Decomposition (SVD) are quite different. More generally, tensors and matrices enjoy different properties, as pointed out in this brief survey.
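The contrast with the matrix case can be made concrete: an SVD writes a matrix as a sum of mutually orthogonal rank-1 terms, a property that CP factors of a higher-order tensor do not share in general. A minimal NumPy sketch on synthetic data, illustrative only:

```python
import numpy as np

# Minimal illustration of the matrix side of the contrast: the SVD
# expresses a matrix as a sum of mutually orthogonal rank-1 terms,
# A = sum_i s_i * u_i v_i^T. (CP decomposes a higher-order tensor
# analogously, but its rank-1 factors are in general NOT orthogonal.)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A from its rank-1 terms.
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
print(np.allclose(A, A_rebuilt))  # True
```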
Factorization without Factorization: Complete Recipe
The Tomasi-Kanade factorization for reconstructing the 3-D shape of feature points tracked through a video stream is widely regarded as being based on the singular value decomposition (SVD) of a matrix. This paper points out that the core principle is the affine camera approximation to the imaging geometry, and that the SVD is merely one means of numerical computation. We first describe the geometric structure of the problem and then give a complete programming scheme for 3-D reconstruction.
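A hedged sketch of the SVD route mentioned above, on synthetic data: under the affine camera model the centered measurement matrix has rank at most 3 and factors into a motion and a shape part. Variable names below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of affine factorization via SVD: the centered 2F x P
# measurement matrix W has rank <= 3 under the affine camera model
# and factors as W = M @ S (motion x shape), up to an invertible
# 3x3 ambiguity. All data here are synthetic.
rng = np.random.default_rng(1)
F, P = 5, 20                          # frames, tracked points
S = rng.standard_normal((3, P))       # 3-D shape (points as columns)
M = rng.standard_normal((2 * F, 3))   # stacked affine projection rows
W = M @ S                             # centered measurement matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])          # recovered motion
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]   # recovered shape

print(np.allclose(W, M_hat @ S_hat))  # True
```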
The atomic orbitals of the topological atom
The effective atomic orbitals have been realized in the framework of Bader’s atoms in molecules theory for a general wavefunction. This formalism can be used to retrieve from any type of calculation a
proper set of orthonormalized numerical atomic orbitals, with occupation numbers that sum up to the
respective Quantum Theory of Atoms in Molecules (QTAIM) atomic populations. Experience shows
that only a limited number of effective atomic orbitals exhibit significant occupation numbers. These
correspond to atomic hybrids that closely resemble the core and valence shells of the atom. The
occupation numbers of the remaining effective orbitals are almost negligible, except for atoms with
hypervalent character. In addition, the molecular orbitals of a calculation can be exactly expressed
as a linear combination of this orthonormalized set of numerical atomic orbitals, and the Mulliken
population analysis carried out on this basis set exactly reproduces the original QTAIM atomic populations of the atoms. Approximate expansion of the molecular orbitals over a much reduced set of
orthogonal atomic basis functions can also be accomplished to very good accuracy with a singular value decomposition procedure.
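The final step lends itself to a generic illustration: the best least-squares expansion of a set of vectors over a reduced orthogonal basis is obtained from the dominant left singular vectors. The sketch below uses synthetic coefficients standing in for molecular orbitals; all names and sizes are illustrative, not taken from the QTAIM formalism.

```python
import numpy as np

# Generic sketch: expanding a set of vectors (standing in for
# molecular-orbital coefficients) over a much-reduced orthogonal
# basis obtained by SVD. Synthetic, illustrative data only.
rng = np.random.default_rng(2)
# Coefficients that are nearly rank-4, plus small noise.
C = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 10))
C += 0.01 * rng.standard_normal((50, 10))

U, s, Vt = np.linalg.svd(C, full_matrices=False)
B = U[:, :4]                 # reduced orthogonal basis (4 vectors)
C_approx = B @ (B.T @ C)     # best least-squares expansion over B

rel_err = np.linalg.norm(C - C_approx) / np.linalg.norm(C)
print(rel_err)               # small: the reduced basis suffices
```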
Very Large-Scale Singular Value Decomposition Using Tensor Train Networks
We propose new algorithms for singular value decomposition (SVD) of very
large-scale matrices based on a low-rank tensor approximation technique called
the tensor train (TT) format. The proposed algorithms can compute several
dominant singular values and corresponding singular vectors for large-scale
structured matrices given in a TT format. The computational complexity of the
proposed methods scales logarithmically with the matrix size under the
assumption that both the matrix and the singular vectors admit low-rank TT
decompositions. The proposed methods, which are called the alternating least
squares for SVD (ALS-SVD) and modified alternating least squares for SVD
(MALS-SVD), compute the left and right singular vectors approximately through
block TT decompositions. The very large-scale optimization problem is reduced
to sequential small-scale optimization problems, and each core tensor of the
block TT decompositions can be updated by applying any standard optimization
methods. The optimal ranks of the block TT decompositions are determined
adaptively during the iteration process, so that high approximation
accuracy can be achieved. Extensive numerical simulations are conducted for
several types of TT-structured matrices, such as Hilbert, Toeplitz, and
tridiagonal matrices and random matrices with prescribed singular values.
The simulation results demonstrate the effectiveness of the proposed
methods compared with standard SVD algorithms and TT-based algorithms
developed for symmetric eigenvalue decomposition.
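For scale, the dense baseline these TT-based methods are compared against can be sketched directly: the dominant singular values of a small Hilbert matrix via a standard SVD. (The TT algorithms target structured matrices far too large to be formed densely like this.)

```python
import numpy as np

# Dense baseline: dominant singular values of a small Hilbert
# matrix, H[i, j] = 1 / (i + j + 1), via a standard SVD. The TT
# algorithms above address sizes where forming H is impossible.
n = 8
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

s = np.linalg.svd(H, compute_uv=False)
print(s[:3])                 # a few dominant singular values
print(s[0] / s[-1] > 1e8)    # Hilbert matrices are badly conditioned
```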
A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms
We propose a constructive algorithm that decomposes an arbitrary real tensor
into a finite sum of orthonormal rank-1 outer products. The algorithm, named
TTr1SVD, works by converting the tensor into a tensor-train rank-1 (TTr1)
series via the singular value decomposition (SVD). TTr1SVD naturally
generalizes the SVD to the tensor regime with properties such as uniqueness for
a fixed order of indices, orthogonal rank-1 outer product terms, and easy
truncation error quantification. Using an outer product column table it also
allows, for the first time, a complete characterization of all tensors
orthogonal to the original tensor. Incidentally, this leads to a strikingly
simple constructive proof that the maximal rank of a real 2 x 2 x 2 tensor over the real field is 3. We also derive a conversion of the
TTr1 decomposition into a Tucker decomposition with a sparse core tensor.
Numerical examples illustrate each of the favorable properties of the TTr1
decomposition.
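The core TTr1 recursion can be sketched for a 2 x 2 x 2 tensor: SVD the mode-1 unfolding, then SVD each reshaped right singular vector, yielding a finite sum of rank-1 outer products that reproduces the tensor. This is an illustrative reading of the scheme, not the paper's reference implementation.

```python
import numpy as np

# Minimal sketch of the TTr1 idea on a 2x2x2 tensor: peel off one
# index, take an SVD, then SVD each right singular vector reshaped
# as a matrix. The result is a finite sum of rank-1 outer products
# that reproduces the tensor exactly.
rng = np.random.default_rng(3)
T = rng.standard_normal((2, 2, 2))

# First SVD on the mode-1 unfolding (2 x 4).
U, s, Vt = np.linalg.svd(T.reshape(2, 4), full_matrices=False)

terms = []
for i in range(2):
    # Reshape the i-th right singular vector into 2x2 and SVD it.
    u2, s2, v2t = np.linalg.svd(Vt[i].reshape(2, 2))
    for j in range(2):
        sigma = s[i] * s2[j]
        term = sigma * np.einsum("a,b,c->abc", U[:, i], u2[:, j], v2t[j])
        terms.append(term)

T_rebuilt = sum(terms)
print(np.allclose(T, T_rebuilt))  # True
```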
Tensor-based dynamic mode decomposition
Dynamic mode decomposition (DMD) is a recently developed tool for the
analysis of the behavior of complex dynamical systems. In this paper, we will
propose an extension of DMD that exploits low-rank tensor decompositions of
potentially high-dimensional data sets to compute the corresponding DMD modes
and eigenvalues. The goal is to reduce the computational complexity and also
the amount of memory required to store the data in order to mitigate the curse
of dimensionality. The efficiency of these tensor-based methods will be
illustrated with the aid of several different fluid dynamics problems such as
the von Kármán vortex street and the simulation of two merging vortices.
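A hedged sketch of the standard (matrix) DMD that the paper extends: given snapshots of a linear system, a reduced operator built from the SVD of the snapshot matrix recovers the system's eigenvalues from data alone. The toy diagonal system below is illustrative only.

```python
import numpy as np

# Standard DMD sketch: for snapshots of x_{k+1} = A x_k, the
# reduced operator A_tilde = U^T Y V S^{-1} shares A's dominant
# eigenvalues. Toy system with known eigenvalues 0.9, 0.5, -0.3.
rng = np.random.default_rng(4)
A = np.diag([0.9, 0.5, -0.3])          # known dynamics
x = rng.standard_normal(3)
snaps = [x := A @ x for _ in range(10)]
X = np.column_stack(snaps[:-1])        # snapshots x_1 .. x_9
Y = np.column_stack(snaps[1:])         # shifted snapshots x_2 .. x_10

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 3                                  # truncation rank
A_tilde = U[:, :r].T @ Y @ Vt[:r].T / s[:r]
eigvals = np.linalg.eigvals(A_tilde)
print(np.sort(eigvals.real))           # approx [-0.3, 0.5, 0.9]
```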