
    Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)

    We introduce a new structured kernel interpolation (SKI) framework, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs). SKI methods produce kernel approximations for fast computations through kernel interpolation. The SKI framework clarifies how the quality of an inducing point approach depends on the number of inducing (aka interpolation) points, the interpolation strategy, and the GP covariance kernel. SKI also provides a mechanism to create new scalable kernel methods by choosing different kernel interpolation strategies. Using SKI with local cubic kernel interpolation, we introduce KISS-GP, which 1) is more scalable than inducing point alternatives, 2) naturally enables Kronecker and Toeplitz algebra for substantial additional gains in scalability, without requiring any grid data, and 3) can be used for fast and expressive kernel learning. KISS-GP costs O(n) time and storage for GP inference. We evaluate KISS-GP for kernel matrix approximation, kernel learning, and natural sound modelling.
    Comment: 19 pages, 4 figures
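The core of SKI can be sketched in a few lines: the exact kernel matrix is replaced by W K_UU W^T, where K_UU is the kernel evaluated on a regular grid of inducing points and W holds sparse interpolation weights. The sketch below is illustrative only: it uses an RBF kernel and linear (rather than the paper's local cubic) interpolation in 1-D, and all function names are ours.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between 1-D point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def ski_approx(x, grid, ls=1.0):
    """SKI approximation K ≈ W K_UU W^T on a regular 1-D grid.

    W holds sparse interpolation weights (linear here, for brevity;
    the paper uses local cubic interpolation). K_UU is the kernel on
    the grid, which is Toeplitz for a regular grid.
    """
    n, m = len(x), len(grid)
    h = grid[1] - grid[0]
    W = np.zeros((n, m))
    for i, xi in enumerate(x):
        j = int(np.clip((xi - grid[0]) // h, 0, m - 2))
        t = (xi - grid[j]) / h            # position within the grid cell
        W[i, j], W[i, j + 1] = 1 - t, t   # linear interpolation weights
    K_uu = rbf(grid, grid, ls)
    return W @ K_uu @ W.T

x = np.sort(np.random.default_rng(0).uniform(0, 10, 50))
grid = np.linspace(0, 10, 200)
K_exact = rbf(x, x)
K_ski = ski_approx(x, grid)
err = np.abs(K_exact - K_ski).max()       # approximation error of the SKI kernel
```

In practice W is stored as a sparse matrix (only two, or four for cubic, nonzeros per row), which is what makes matrix-vector products, and hence iterative GP inference, fast.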

    Multilevel Approach For Signal Restoration Problems With Toeplitz Matrices

    We present a multilevel method for discrete ill-posed problems arising from the discretization of Fredholm integral equations of the first kind. The method uses the Haar wavelet transform to define restriction and prolongation operators within a multigrid-type iteration. The Haar wavelet operator has the advantage of preserving matrix structure, such as Toeplitz structure, between grids, which can be exploited to obtain faster solvers on each level, where edge-preserving Tikhonov regularization is applied. Finally, we present results that indicate the promise of this approach for the restoration of signals and images with edges.
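The structure-preservation property can be checked directly: applying the Haar low-pass restriction R on both sides of a Toeplitz matrix T yields a coarse matrix R T R^T that is again Toeplitz. A minimal numerical sketch, in our own notation and for a 1-D signal of length 16 (the specific Toeplitz entries below are an arbitrary example):

```python
import numpy as np

def haar_restriction(n):
    # Haar low-pass restriction: averages adjacent pairs, scaled by 1/sqrt(2).
    R = np.zeros((n // 2, n))
    for i in range(n // 2):
        R[i, 2 * i] = R[i, 2 * i + 1] = 1 / np.sqrt(2)
    return R

n = 16
idx = np.arange(n)
T = 1.0 / (1 + np.abs(idx[:, None] - idx[None, :]))  # a symmetric Toeplitz matrix
R = haar_restriction(n)
T_coarse = R @ T @ R.T                               # coarse-grid operator

# Structure preservation: the coarse matrix is still Toeplitz,
# i.e. constant along every diagonal.
is_toeplitz = all(
    np.allclose(np.diag(T_coarse, k), np.diag(T_coarse, k)[0])
    for k in range(-(T_coarse.shape[0] - 1), T_coarse.shape[0])
)
```

Each coarse entry is an average of four fine-grid entries whose value depends only on the diagonal offset, which is why the Toeplitz pattern survives and fast Toeplitz solvers remain applicable on every level.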

    Sharp analysis of low-rank kernel matrix approximations

    We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression, or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical difficulty is the necessity of computing the kernel matrix, which most frequently leads to algorithms with running time at least quadratic in the number of observations n, i.e., O(n^2). Low-rank approximations of the kernel matrix are often considered, as they reduce the running time complexity to O(p^2 n), where p is the rank of the approximation. The practicality of such methods thus depends on the required rank p. In this paper, we show that in the context of kernel ridge regression, for approximations based on a random subset of columns of the original kernel matrix, the rank p may be chosen to be linear in the degrees of freedom associated with the problem, a quantity which is classically used in the statistical analysis of such methods and is often seen as the implicit number of parameters of non-parametric estimators. This result enables simple algorithms with sub-quadratic running time that provably exhibit the same predictive performance as existing algorithms, for any given problem instance and not only in worst-case situations.
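The column-sampling approximation analysed here can be illustrated as follows. This is a hedged sketch, not the authors' code: it forms a rank-p Nystrom-style approximation K ≈ C A^{-1} C^T from a random subset S of p columns and solves the kernel ridge regression system in O(p^2 n) via the Woodbury identity; the function name and the small jitter added to A are our own choices.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between point sets a (n x d) and b (m x d).
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls ** 2)

def nystrom_krr(X, y, p, lam, rng):
    """Kernel ridge regression with a rank-p column-subset approximation.

    C = K[:, S] (n x p), A = K[S, S] (p x p); the approximate system
    (C A^{-1} C^T + lam I) alpha = y is solved with the Woodbury identity,
    costing O(p^2 n) instead of the O(n^3) of the exact solve.
    """
    n = len(X)
    S = rng.choice(n, size=p, replace=False)
    C = rbf(X, X[S])
    A = rbf(X[S], X[S]) + 1e-8 * np.eye(p)  # jitter for numerical stability
    # Woodbury: alpha = (y - C (lam A + C^T C)^{-1} C^T y) / lam
    alpha = (y - C @ np.linalg.solve(lam * A + C.T @ C, C.T @ y)) / lam
    return alpha, C, A

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
alpha, C, A = nystrom_krr(X, y, p=40, lam=0.1, rng=rng)

# Sanity check: alpha satisfies the rank-p approximate KRR system.
K_tilde = C @ np.linalg.solve(A, C.T)
residual = np.abs(K_tilde @ alpha + 0.1 * alpha - y).max()
```

The paper's result concerns how large p must be for this cheap estimator to match the exact one; the sketch only shows where the O(p^2 n) cost comes from.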

    Constant-time Bilateral Filter using Spectral Decomposition

    This paper presents an efficient constant-time bilateral filter, where constant-time means that the computational complexity is independent of the filter window size. Many state-of-the-art constant-time methods approximate the original bilateral filter by an appropriate combination of a series of convolutions. Within this framework, it is important to optimize the tradeoff between approximation accuracy and the number of convolutions. The proposed method achieves the optimal tradeoff in a least-squares sense by using spectral decomposition, under the assumption that images consist of discrete intensities, such as 8-bit images. This approach is applicable to arbitrary range kernels. Experiments show that the proposed method outperforms state-of-the-art methods in terms of both computational complexity and approximation accuracy.