11,306 research outputs found
Very Large-Scale Singular Value Decomposition Using Tensor Train Networks
We propose new algorithms for singular value decomposition (SVD) of very
large-scale matrices based on a low-rank tensor approximation technique called
the tensor train (TT) format. The proposed algorithms can compute several
dominant singular values and corresponding singular vectors for large-scale
structured matrices given in a TT format. The computational complexity of the
proposed methods scales logarithmically with the matrix size under the
assumption that both the matrix and the singular vectors admit low-rank TT
decompositions. The proposed methods, which are called the alternating least
squares for SVD (ALS-SVD) and modified alternating least squares for SVD
(MALS-SVD), compute the left and right singular vectors approximately through
block TT decompositions. The very large-scale optimization problem is reduced
to sequential small-scale optimization problems, and each core tensor of the
block TT decompositions can be updated by applying any standard optimization
methods. The optimal ranks of the block TT decompositions are determined
adaptively during the iteration process, so that we can achieve high approximation
accuracy. Extensive numerical simulations are conducted for several types of
TT-structured matrices, such as the Hilbert matrix, Toeplitz matrices, random
matrices with prescribed singular values, and tridiagonal matrices. The simulation results
demonstrate the effectiveness of the proposed methods compared with standard
SVD algorithms and TT-based algorithms developed for symmetric eigenvalue
decomposition.
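As background for the abstract above: the classical way to obtain a TT representation is by sequential truncated SVDs of matricizations (the TT-SVD construction, which the paper's ALS-SVD and MALS-SVD iterations refine). A minimal NumPy sketch, with illustrative function names (`tt_decompose`, `tt_to_full` are not from the paper):

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose a d-way tensor into TT cores of shape (r_prev, dim, r)
    via sequential truncated SVDs (the classical TT-SVD scheme)."""
    dims = tensor.shape
    cores, r_prev, mat = [], 1, tensor
    for k in range(len(dims) - 1):
        mat = mat.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = s[:r, None] * Vt[:r]  # remainder carried into the next core
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor (for checking)."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out[0, ..., 0]  # drop the boundary ranks of size 1
```

With `max_rank` large enough the reconstruction is exact; truncating `max_rank` yields the low-rank TT approximation that the paper's logarithmic complexity estimates rely on.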
Singular Value Decomposition of Operators on Reproducing Kernel Hilbert Spaces
Reproducing kernel Hilbert spaces (RKHSs) play an important role in many
statistics and machine learning applications ranging from support vector
machines to Gaussian processes and kernel embeddings of distributions.
Operators acting on such spaces are, for instance, required to embed
conditional probability distributions in order to implement the kernel Bayes'
rule and build sequential data models. It was recently shown that transfer
operators such as the Perron-Frobenius or Koopman operator can also be
approximated in a similar fashion using covariance and cross-covariance
operators and that eigenfunctions of these operators can be obtained by solving
associated matrix eigenvalue problems. The goal of this paper is to provide a
solid functional analytic foundation for the eigenvalue decomposition of RKHS
operators and to extend the approach to the singular value decomposition. The
results are illustrated with simple guiding examples.
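As a finite-sample illustration (a sketch of the matrix problem such operator decompositions reduce to, not the paper's functional-analytic construction): for the empirical cross-covariance operator built from n paired samples, the nonzero singular values are the square roots of the eigenvalues of G_y G_x / n^2, where G_x and G_y are the Gram matrices. A NumPy sketch with an illustrative helper name:

```python
import numpy as np

def operator_svals(Gx, Gy):
    """Singular values of the empirical cross-covariance operator
    C_yx = (1/n) * sum_i phi(y_i) (x) phi(x_i), computed from the
    Gram matrices: sqrt of the eigenvalues of Gy @ Gx / n^2."""
    n = Gx.shape[0]
    evals = np.linalg.eigvals(Gy @ Gx).real / n**2
    # A product of PSD matrices has real, nonnegative eigenvalues;
    # clip away tiny negative rounding noise before the square root.
    return np.sort(np.sqrt(np.clip(evals, 0.0, None)))[::-1]
```

For a linear kernel the operator is just the matrix (1/n) Y^T X, so the kernel-based singular values can be checked against a direct SVD.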
An extension of Chebfun to two dimensions
An object-oriented MATLAB system is described that extends the capabilities of Chebfun to smooth functions of two variables defined on rectangles. Functions are approximated to essentially machine precision by using iterative Gaussian elimination with complete pivoting to form "chebfun2" objects representing low-rank approximations. Operations such as integration, differentiation, function evaluation, and transforms are particularly efficient. Global optimization, the singular value decomposition, and rootfinding are also extended to chebfun2 objects. Numerical applications are presented.
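The pivoting idea can be sketched on a fixed sample grid (an assumption for illustration; chebfun2 itself works with adaptively resolved Chebyshev interpolants, not a fixed grid): repeatedly locate the largest residual entry and subtract the rank-1 cross through it, until the residual is negligible.

```python
import numpy as np

def low_rank_by_pivoting(f, xs, ys, tol=1e-13, max_rank=50):
    """Low-rank approximation of f(x, y) on a tensor grid by iterative
    Gaussian elimination with complete pivoting: each step removes the
    rank-1 cross through the largest remaining residual entry."""
    R = f(xs[None, :], ys[:, None])      # residual matrix, rows indexed by y
    cols, rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        piv = R[i, j]
        if abs(piv) < tol:               # residual negligible: done
            break
        cols.append(R[:, j] / piv)       # column slice: factor in y
        rows.append(R[i, :].copy())      # row slice: factor in x
        R = R - np.outer(cols[-1], rows[-1])
    return np.array(cols).T, np.array(rows)
```

A function like cos(x + y) = cos x cos y - sin x sin y has exact rank 2, so the elimination terminates after two steps (up to rounding).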