
    Approximation of functions of large matrices with Kronecker structure

    We consider the numerical approximation of $f(\mathcal{A})b$, where $b\in\mathbb{R}^{N}$ and $\mathcal{A}$ is the sum of Kronecker products, that is, $\mathcal{A}=M_2 \otimes I + I \otimes M_1\in\mathbb{R}^{N\times N}$. Here $f$ is a regular function such that $f(\mathcal{A})$ is well defined. We derive a computational strategy that significantly lowers the memory requirements and computational effort of the standard approximations, with special emphasis on the exponential function, for which the new procedure becomes particularly advantageous. Our findings are illustrated by numerical experiments with typical functions used in applications.
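
    For $f=\exp$ the Kronecker structure is especially favourable: $M_2\otimes I$ and $I\otimes M_1$ commute, so $e^{\mathcal{A}} = e^{M_2}\otimes e^{M_1}$ and $e^{\mathcal{A}}b$ never requires forming the $N\times N$ matrix. A minimal sketch of this identity with illustrative sizes (the paper's procedure targets far larger problems and more general $f$):

```python
import numpy as np
from scipy.linalg import expm

# A = M2 (x) I + I (x) M1 with N = n1 * n2; the two terms commute, so
# expm(A) = expm(M2) (x) expm(M1): only the small factors are needed.
rng = np.random.default_rng(0)
n1, n2 = 4, 5
M1 = rng.standard_normal((n1, n1))
M2 = rng.standard_normal((n2, n2))
b = rng.standard_normal(n1 * n2)

X = b.reshape((n1, n2), order="F")     # b = vec(X), columns stacked
y = (expm(M1) @ X @ expm(M2).T).reshape(-1, order="F")

# Dense reference, feasible only at toy sizes.
A = np.kron(M2, np.eye(n1)) + np.kron(np.eye(n2), M1)
assert np.allclose(y, expm(A) @ b)
```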

    Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require $\mathcal{O}(p^{2d})$ storage and $\mathcal{O}(p^{3d})$ computational work, where $p$ is the degree of basis polynomials used, and $d$ is the spatial dimension. Our SVD-based tensor-product preconditioner requires $\mathcal{O}(p^{d+1})$ storage, $\mathcal{O}(p^{d+1})$ work in two spatial dimensions, and $\mathcal{O}(p^{d+2})$ work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in $p$ per degree of freedom in 2D, and reduce the computational complexity from $\mathcal{O}(p^9)$ to $\mathcal{O}(p^5)$ in 3D. Numerical results are shown in 2D and 3D for the advection and Euler equations, using polynomials of degree up to $p=15$. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees $p$.
    Comment: 40 pages, 15 figures
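
    A standard algebraic building block for this kind of approximation is the nearest-Kronecker-product problem of Van Loan and Pitsianis: the best Frobenius-norm approximation $J\approx A\otimes B$ is read off from the leading singular pair of a rearrangement of $J$. A sketch of that idea (function name and sizes are illustrative, not the paper's implementation):

```python
import numpy as np

def nearest_kronecker(J, m, n):
    """Best Frobenius approximation J ~= A (x) B, with A m-by-m and
    B n-by-n, via the SVD of a rearrangement of J (Van Loan-Pitsianis)."""
    # Rearrange so Kronecker structure in J becomes rank-1 structure in R.
    R = J.reshape(m, n, m, n).transpose(0, 2, 1, 3).reshape(m * m, n * n)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m, m)
    B = np.sqrt(s[0]) * Vt[0].reshape(n, n)
    return A, B

# Exact Kronecker products are recovered exactly.
rng = np.random.default_rng(0)
A0, B0 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
A, B = nearest_kronecker(np.kron(A0, B0), 3, 4)
assert np.allclose(np.kron(A, B), np.kron(A0, B0))
```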

    Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)

    We introduce a new structured kernel interpolation (SKI) framework, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs). SKI methods produce kernel approximations for fast computations through kernel interpolation. The SKI framework clarifies how the quality of an inducing point approach depends on the number of inducing (aka interpolation) points, the interpolation strategy, and the GP covariance kernel. SKI also provides a mechanism to create new scalable kernel methods through choosing different kernel interpolation strategies. Using SKI with local cubic kernel interpolation, we introduce KISS-GP, which 1) is more scalable than inducing point alternatives, 2) naturally enables Kronecker and Toeplitz algebra for substantial additional gains in scalability, without requiring any grid data, and 3) can be used for fast and expressive kernel learning. KISS-GP costs $O(n)$ time and storage for GP inference. We evaluate KISS-GP for kernel matrix approximation, kernel learning, and natural sound modelling.
    Comment: 19 pages, 4 figures
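
    The core approximation is easy to sketch: $K_{XX}\approx W K_{UU} W^{\top}$, where $U$ is a regular grid of $m$ inducing points and $W$ holds sparse interpolation weights. The toy below uses linear rather than the paper's local cubic interpolation, with an illustrative RBF kernel and sizes:

```python
import numpy as np
from scipy.sparse import csr_matrix

def rbf(a, b, ell=0.3):
    # Illustrative RBF kernel between 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(0)
n, m = 2000, 100
x = np.sort(rng.uniform(0, 1, n))      # training inputs
u = np.linspace(0, 1, m)               # regular inducing grid
h = u[1] - u[0]

# Sparse W: each input interpolates between its two neighbouring grid points.
j = np.clip(((x - u[0]) / h).astype(int), 0, m - 2)
t = (x - u[j]) / h
rows = np.r_[np.arange(n), np.arange(n)]
cols = np.r_[j, j + 1]
W = csr_matrix((np.r_[1 - t, t], (rows, cols)), shape=(n, m))

v = rng.standard_normal(n)
mv_ski = W @ (rbf(u, u) @ (W.T @ v))   # O(n + m^2) matrix-vector product
mv_exact = rbf(x, x) @ v               # O(n^2) dense reference
print(np.max(np.abs(mv_ski - mv_exact)))  # small interpolation error
```

    On a regular grid $K_{UU}$ is Toeplitz (and Kronecker-structured in higher dimensions), which is what drives the further gains in scalability claimed above.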

    Fast matrix computations for functional additive models

    It is common in functional data analysis to look at a set of related functions: a set of learning curves, a set of brain signals, a set of spatial maps, etc. One way to express relatedness is through an additive model, whereby each individual function $g_{i}(x)$ is assumed to be a variation around some shared mean $f(x)$. Gaussian processes provide an elegant way of constructing such additive models, but suffer from computational difficulties arising from the matrix operations that need to be performed. Recently, Heersink & Furrer showed that functional additive models give rise to covariance matrices with a specific form they called quasi-Kronecker (QK), whose inverses are relatively tractable. We show that under additional assumptions the two-level additive model leads to a class of matrices we call restricted quasi-Kronecker (rQK), which enjoy many interesting properties. In particular, we formulate matrix factorisations whose complexity scales only linearly in the number of functions in the latent field, an enormous improvement over the cubic scaling of naïve approaches. We describe how to leverage the properties of rQK matrices for inference in Latent Gaussian Models.
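
    The flavour of the speed-up can be seen on one natural special case (the paper's definition of rQK is more general; the form below is an assumption made for illustration): stacking $m$ functions with shared-mean covariance $B$ and individual covariance $A$ gives $\Sigma = I_m\otimes A + \mathbf{1}\mathbf{1}^{\top}\otimes B$, and an orthogonal basis diagonalising $\mathbf{1}\mathbf{1}^{\top}$ block-diagonalises $\Sigma$, so a solve needs two factorisations instead of $m$:

```python
import numpy as np

def rqk_solve(A, B, Y):
    """Solve (I_m (x) A + 11^T (x) B) vec(X) = vec(Y); Y is p-by-m,
    one column per function. Cost: two dense solves, linear in m."""
    p, m = Y.shape
    # Orthogonal V, first column spanning 1: V^T (11^T) V = diag(m, 0, ..., 0).
    V, _ = np.linalg.qr(np.c_[np.ones((m, 1)), np.eye(m, m - 1)])
    Yt = Y @ V
    Xt = np.empty_like(Yt)
    Xt[:, 0] = np.linalg.solve(A + m * B, Yt[:, 0])   # shared-mean block
    Xt[:, 1:] = np.linalg.solve(A, Yt[:, 1:])         # m-1 identical blocks
    return Xt @ V.T

rng = np.random.default_rng(0)
p, m = 5, 7
A = p * np.eye(p) + 0.1 * rng.standard_normal((p, p))
B0 = rng.standard_normal((p, p))
B = B0 @ B0.T
Y = rng.standard_normal((p, m))
X = rqk_solve(A, B, Y)
Sigma = np.kron(np.eye(m), A) + np.kron(np.ones((m, m)), B)
assert np.allclose(Sigma @ X.reshape(-1, order="F"), Y.reshape(-1, order="F"))
```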

    The Lyapunov matrix equation. Matrix analysis from a computational perspective

    Decay properties of the solution $X$ to the Lyapunov matrix equation $AX + XA^{T} = D$ are investigated. Their exploitation in understanding properties of the equation, and in developing new numerical solution strategies when $D$ is not low rank but possibly sparse, is also briefly discussed.
    Comment: This work is a contribution to the seminar series "Topics in Mathematics" of the PhD Program of the Mathematics Department, Università di Bologna, Italy
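
    The decay is easy to observe numerically. A small demo (matrices chosen for illustration, not taken from the paper), using SciPy's solver for $AX + XA^{T} = D$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 200
# Stable symmetric tridiagonal A: negative of the 1-D discrete Laplacian.
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
D = np.eye(n)                          # diagonal RHS: sparse but full rank
X = solve_continuous_lyapunov(A, D)    # solves A X + X A^T = D
# Entries fall off rapidly away from the sparsity pattern of D.
print(abs(X[0, 0]), abs(X[0, 50]), abs(X[0, 150]))
```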