Approximation of functions of large matrices with Kronecker structure
We consider the numerical approximation of $f(\mathcal{A})b$, where
$b\in\mathbb{R}^N$ and $\mathcal{A}$ is the sum of Kronecker products, that is
$\mathcal{A} = M_2 \otimes I + I \otimes M_1$. Here $f$ is a regular
function such that $f(\mathcal{A})$ is well defined. We derive a computational
strategy that significantly lowers the memory requirements and computational
efforts of the standard approximations, with special emphasis on the
exponential function, for which the new procedure becomes particularly
advantageous. Our findings are illustrated by numerical experiments with
typical functions used in applications.
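A minimal NumPy sketch (ours, not the paper's algorithm) of the structural
identity that makes the exponential case so favourable: because
$M_2 \otimes I$ and $I \otimes M_1$ commute,
$\exp(\mathcal{A}) = \exp(M_2) \otimes \exp(M_1)$, so $\exp(\mathcal{A})b$
can be formed from the two small factors without ever building $\mathcal{A}$:

```python
import numpy as np
from scipy.linalg import expm

# Kronecker-sum structure: A = M2 (x) I + I (x) M1, with b = vec(B) column-stacked.
rng = np.random.default_rng(0)
m, n = 3, 4
M1, M2 = rng.standard_normal((m, m)), rng.standard_normal((n, n))
B = rng.standard_normal((m, n))

A = np.kron(M2, np.eye(m)) + np.kron(np.eye(n), M1)  # N x N with N = m*n
b = B.flatten(order="F")                             # column-stacked vec(B)

direct = expm(A) @ b                                 # O(N^3) dense reference
# (P (x) Q) vec(X) = vec(Q X P^T), hence exp(A) b = vec(exp(M1) B exp(M2)^T).
structured = (expm(M1) @ B @ expm(M2).T).flatten(order="F")

assert np.allclose(direct, structured)
```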
Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods
In this paper, we develop a new tensor-product based preconditioner for
discontinuous Galerkin methods with polynomial degrees higher than those
typically employed. This preconditioner uses an automatic, purely algebraic
method to approximate the exact block Jacobi preconditioner by Kronecker
products of several small, one-dimensional matrices. Traditional matrix-based
preconditioners require $\mathcal{O}(p^{2d})$ storage and $\mathcal{O}(p^{3d})$
computational work, where $p$ is the degree of basis polynomials used, and $d$
is the spatial dimension. Our SVD-based tensor-product preconditioner requires
$\mathcal{O}(p^{d+1})$ storage, $\mathcal{O}(p^{d+1})$ work in two spatial
dimensions, and $\mathcal{O}(p^{d+2})$ work in three spatial dimensions.
Combined with a matrix-free Newton-Krylov solver, these preconditioners allow
for the solution of DG systems in linear time in $p$ per degree of freedom in
2D, and reduce the computational complexity from $\mathcal{O}(p^9)$ to
$\mathcal{O}(p^5)$ in 3D. Numerical results are shown in 2D and 3D for the
advection and Euler equations, using polynomials of very high degree. For
many test cases, the preconditioner results in similar iteration counts when
compared with the exact block Jacobi preconditioner, and performance is
significantly improved for high polynomial degrees $p$.
Comment: 40 pages, 15 figures
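The SVD step the abstract refers to can be illustrated by the classical Van
Loan-Pitsianis nearest-Kronecker-product problem; in the sketch below (the
function name `nearest_kronecker` and the rank-1 truncation are our
assumptions, not the paper's exact algorithm) the matrix is rearranged so
that the best $A \otimes B$ fit becomes a rank-1 SVD:

```python
import numpy as np

def nearest_kronecker(M, m, n):
    """Best Frobenius-norm fit M ~ A (x) B with A m-by-m and B n-by-n,
    for M of size (m*n)-by-(m*n), via the rank-1 SVD of a rearrangement."""
    R = (M.reshape(m, n, m, n)
          .transpose(0, 2, 1, 3)   # reorder so R[(i,j),(k,l)] = M[i*n+k, j*n+l]
          .reshape(m * m, n * n))
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m, m)
    B = np.sqrt(s[0]) * Vt[0].reshape(n, n)
    return A, B

# Sanity check: an exact Kronecker product is recovered exactly.
rng = np.random.default_rng(0)
A0, B0 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
A, B = nearest_kronecker(np.kron(A0, B0), 3, 4)
assert np.allclose(np.kron(A, B), np.kron(A0, B0))
```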
Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)
We introduce a new structured kernel interpolation (SKI) framework, which
generalises and unifies inducing point methods for scalable Gaussian processes
(GPs). SKI methods produce kernel approximations for fast computations through
kernel interpolation. The SKI framework clarifies how the quality of an
inducing point approach depends on the number of inducing (aka interpolation)
points, interpolation strategy, and GP covariance kernel. SKI also provides a
mechanism to create new scalable kernel methods, through choosing different
kernel interpolation strategies. Using SKI, with local cubic kernel
interpolation, we introduce KISS-GP, which is 1) more scalable than inducing
point alternatives, 2) naturally enables Kronecker and Toeplitz algebra for
substantial additional gains in scalability, without requiring any grid data,
and 3) can be used for fast and expressive kernel learning. KISS-GP costs O(n)
time and storage for GP inference. We evaluate KISS-GP for kernel matrix
approximation, kernel learning, and natural sound modelling.
Comment: 19 pages, 4 figures
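A hedged 1D illustration of the SKI approximation
$K \approx W K_{UU} W^\top$: $W$ holds sparse interpolation weights mapping
each input to a regular grid of inducing points. The paper uses local cubic
interpolation; linear weights are used below for brevity, and the kernel and
sizes are arbitrary choices of ours:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))   # n scattered inputs
u = np.linspace(0.0, 1.0, 50)             # m inducing points on a regular grid
h = u[1] - u[0]

# Each x lands between two grid points; the weights are the linear
# interpolation coefficients, so W has two nonzeros per row.
idx = np.clip(((x - u[0]) / h).astype(int), 0, len(u) - 2)
frac = (x - u[idx]) / h
W = np.zeros((len(x), len(u)))
W[np.arange(len(x)), idx] = 1.0 - frac
W[np.arange(len(x)), idx + 1] = frac

K_ski = W @ rbf(u, u) @ W.T               # sparse-times-structured in practice
print(f"max abs error: {np.abs(K_ski - rbf(x, x)).max():.4f}")
```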
Fast matrix computations for functional additive models
It is common in functional data analysis to look at a set of related
functions: a set of learning curves, a set of brain signals, a set of spatial
maps, etc. One way to express relatedness is through an additive model, whereby
each individual function $g_i(x)$ is assumed to be a variation
around some shared mean $f(x)$. Gaussian processes provide an elegant way of
constructing such additive models, but suffer from computational difficulties
arising from the matrix operations that need to be performed. Recently Heersink
& Furrer have shown that functional additive models give rise to covariance
matrices that have a specific form they called quasi-Kronecker (QK), whose
inverses are relatively tractable. We show that under additional assumptions
the two-level additive model leads to a class of matrices we call restricted
quasi-Kronecker (rQK), which enjoy many interesting properties. In particular,
we formulate matrix factorisations whose complexity scales only linearly in the
number of functions in the latent field, an enormous improvement over the cubic
scaling of naïve approaches. We describe how to leverage the properties of
rQK matrices for inference in Latent Gaussian Models.
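As an illustration of why such structure helps, here is a hedged sketch of the
simplest two-level case we can infer from the abstract: with $n$ functions
sharing one grid of size $m$, the joint covariance takes the form
$\Sigma = \mathbf{1}\mathbf{1}^\top \otimes K_f + I \otimes K_g$, and a
Woodbury identity solves $\Sigma x = b$ using only $m \times m$
factorisations, i.e. linearly in $n$. The function `qk_solve` and this
particular form are our assumptions, not the paper's definitions:

```python
import numpy as np

def qk_solve(Kf, Kg, b, n):
    """Solve (11^T (x) Kf + I (x) Kg) x = b via Woodbury, linear in n."""
    m = Kg.shape[0]
    Bmat = b.reshape(n, m)                       # one row per function
    L = np.linalg.cholesky(Kf)                   # Sigma = I(x)Kg + U U^T, U = 1 (x) L
    Kg_inv_rows = np.linalg.solve(Kg, Bmat.T).T  # (I (x) Kg)^{-1} b, row-wise
    S = np.eye(m) + n * (L.T @ np.linalg.solve(Kg, L))   # small capacitance matrix
    corr = np.linalg.solve(Kg, L @ np.linalg.solve(S, L.T @ Kg_inv_rows.sum(axis=0)))
    return (Kg_inv_rows - corr[None, :]).ravel()

# Quick check against a dense solve at small sizes.
rng = np.random.default_rng(0)
n, m = 30, 5
Z1, Z2 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
Kf, Kg = Z1 @ Z1.T + np.eye(m), Z2 @ Z2.T + np.eye(m)
b = rng.standard_normal(n * m)
Sigma = np.kron(np.ones((n, n)), Kf) + np.kron(np.eye(n), Kg)
assert np.allclose(qk_solve(Kf, Kg, b, n), np.linalg.solve(Sigma, b))
```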
The Lyapunov matrix equation. Matrix analysis from a computational perspective
Decay properties of the solution $X$ to the Lyapunov matrix equation
$AX + XA^\top = D$ are investigated. Their exploitation in the understanding of
the equation's matrix properties, and in the development of new numerical
solution strategies when $D$ is not low rank but possibly sparse, is also
briefly discussed.
Comment: This work is a contribution to the Seminar series "Topics in
Mathematics" of the PhD Program of the Mathematics Department, Università
di Bologna, Italy
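A small hedged illustration of the decay phenomenon the note studies (the
tridiagonal $A$ and the one-entry right-hand side are our choices): when $A$
is banded and negative definite, the entries of the solution $X$ of
$AX + XA^\top = D$ decay rapidly away from the support of $D$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 60
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # banded, negative definite
D = np.zeros((n, n))
D[n // 2, n // 2] = 1.0                                  # sparse right-hand side

X = solve_continuous_lyapunov(A, D)                      # solves A X + X A^T = D
# Entries shrink quickly with distance from the support of D.
print(abs(X[n // 2, n // 2]), abs(X[n // 2, 0]))
```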