Preconditioning Kernel Matrices
The computational and storage complexity of kernel machines presents the
primary barrier to their scaling to large, modern datasets. A common way to
tackle the scalability issue is to use the conjugate gradient algorithm, which
relieves the constraints on both storage (the kernel matrix need not be stored)
and computation (both stochastic gradients and parallelization can be used).
Even so, conjugate gradient is not without its own issues: kernel matrices are
often so ill-conditioned that conjugate gradients converge poorly in practice.
Preconditioning is a common approach to alleviating
this issue. Here we propose preconditioned conjugate gradients for kernel
machines, and develop a broad range of preconditioners particularly useful for
kernel matrices. We describe a scalable approach to both solving kernel
machines and learning their hyperparameters. We show this approach is exact in
the limit of iterations and outperforms state-of-the-art approximations for a
given computational budget.
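The kernel systems in question have the form (K + sigma^2 I) alpha = y. A minimal NumPy sketch of preconditioned conjugate gradients with a simple Jacobi (diagonal) preconditioner follows; the paper develops far more effective preconditioners, and all names here are illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix between row-sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    # Preconditioned conjugate gradients for A x = b. Only matrix-vector
    # products with A and applications of the preconditioner M^{-1} are
    # needed, so A never has to be factorised.
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        step = rz / (p @ Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy regularised kernel system (K + sigma^2 I) alpha = y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = rng.normal(size=200)
K = rbf_kernel(X, X) + 0.1 * np.eye(200)

# Jacobi (diagonal) preconditioner: the cheapest baseline choice.
alpha = pcg(K, y, lambda r: r / np.diag(K))
```

Because each iteration touches the matrix only through products A @ p, the same loop works with stochastic or distributed matrix-vector products, which is the scalability point the abstract makes.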
Large-scale Heteroscedastic Regression via Gaussian Process
Heteroscedastic regression, which accounts for varying noise levels across
observations, has many applications in fields such as machine learning and
statistics. Here
we focus on the heteroscedastic Gaussian process (HGP) regression which
integrates the latent function and the noise function together in a unified
non-parametric Bayesian framework. Despite its remarkable performance, HGP
suffers from cubic time complexity, which severely limits its application to
big data. To improve scalability, we first develop a variational sparse
inference algorithm, named VSHGP, to handle large-scale datasets. Furthermore,
two variants are developed to improve the scalability and capability of VSHGP.
The first is stochastic VSHGP (SVSHGP) which derives a factorized evidence
lower bound, thus enabling efficient stochastic variational inference. The
second is distributed VSHGP (DVSHGP) which (i) follows the Bayesian committee
machine formalism to distribute computations over multiple local VSHGP experts
with many inducing points; and (ii) adopts hybrid parameters for experts to
guard against over-fitting and capture local variety. The superiority of DVSHGP
and SVSHGP as compared to existing scalable heteroscedastic/homoscedastic GPs
is then extensively verified on various datasets.
Comment: 14 pages, 15 figures
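The heteroscedastic model above attaches a second latent function to the log noise variance. As a concrete anchor, the marginal likelihood evaluated at a fixed latent log-noise vector g can be sketched as follows (a didactic fragment, not the VSHGP inference scheme; all names are hypothetical):

```python
import numpy as np

def hgp_log_marginal(y, K_f, g):
    # Log marginal likelihood log N(y | 0, K_f + diag(exp(g))) for a GP
    # latent function with covariance K_f and input-dependent Gaussian
    # noise variance exp(g_i) at each observation; g is held fixed here.
    C = K_f + np.diag(np.exp(g))
    L = np.linalg.cholesky(C)     # C is SPD: PSD kernel + positive diagonal
    a = np.linalg.solve(L, y)     # a = L^{-1} y, so a @ a = y^T C^{-1} y
    n = len(y)
    return -0.5 * (a @ a) - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)
```

The cubic cost the abstract refers to is the Cholesky factorisation of the n x n matrix C; the variational and distributed variants exist precisely to avoid forming it in full.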
Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification
Gaussian processes are a natural way of defining prior distributions over
functions of one or more input variables. In a simple nonparametric regression
problem, where such a function gives the mean of a Gaussian distribution for an
observed response, a Gaussian process model can easily be implemented using
matrix computations that are feasible for datasets of up to about a thousand
cases. Hyperparameters that define the covariance function of the Gaussian
process can be sampled using Markov chain methods. Regression models where the
noise has a t distribution and logistic or probit models for classification
applications can be implemented by also sampling the latent values
underlying the observations. Software is now available that implements these
methods using covariance functions with hierarchical parameterizations. Models
defined in this way can discover high-level properties of the data, such as
which inputs are relevant to predicting the response.
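The hyperparameter sampling described above can be sketched with a random-walk Metropolis update on, say, a single log-lengthscale of an RBF covariance. This is a toy illustration under a flat prior, not the software the abstract refers to; all names are made up here:

```python
import numpy as np

def log_marginal(theta, X, y, noise=0.1):
    # GP log marginal likelihood with an RBF kernel whose log-lengthscale
    # is theta; under a flat prior this is the unnormalised log posterior.
    ell = np.exp(theta)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ell ** 2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L, y)
    return -0.5 * (a @ a) - np.log(np.diag(L)).sum()

def metropolis(logp, theta0, n_steps=500, step=0.3, seed=0):
    # Random-walk Metropolis sampler over a scalar hyperparameter.
    rng = np.random.default_rng(seed)
    theta, lp = theta0, logp(theta0)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)
```

Averaging predictions over such samples, rather than optimising a single hyperparameter value, is what makes the treatment fully Bayesian.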
Distributed Gaussian Processes
To scale Gaussian processes (GPs) to large data sets we introduce the robust
Bayesian Committee Machine (rBCM), a practical and scalable product-of-experts
model for large-scale distributed GP regression. Unlike state-of-the-art sparse
GP approximations, the rBCM is conceptually simple and does not rely on
inducing or variational parameters. The key idea is to recursively distribute
computations to independent computational units and, subsequently, recombine
them to form an overall result. Efficient closed-form inference allows for
straightforward parallelisation and distributed computations with a small
memory footprint. The rBCM is independent of the computational graph and can be
used on heterogeneous computing infrastructures, ranging from laptops to
clusters. With sufficient computing resources our distributed GP model can
handle arbitrarily large data sets.
Comment: 10 pages, 5 figures. Appears in Proceedings of ICML 201
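The recombination step mentioned above has a compact closed form. A sketch of the rBCM aggregation rule, taking each expert's Gaussian prediction as given (an illustration of the formula, not the paper's code; names are illustrative):

```python
import numpy as np

def rbcm_combine(means, variances, prior_var, betas=None):
    # Robust BCM aggregation of M experts' Gaussian predictions at a set
    # of test points. Expert k contributes precision beta_k / var_k; the
    # prior-precision correction (1 - sum_k beta_k) / prior_var discounts
    # experts whose predictions barely differ from the prior.
    means = np.asarray(means)
    variances = np.asarray(variances)
    if betas is None:
        # Default rBCM weight: half the difference in log variance
        # between the prior and each expert's posterior.
        betas = 0.5 * (np.log(prior_var) - np.log(variances))
    prec = (betas / variances).sum(0) + (1.0 - betas.sum(0)) / prior_var
    var = 1.0 / prec
    mean = var * (betas * means / variances).sum(0)
    return mean, var
```

Because the aggregation needs only each expert's mean and variance, the experts can run on separate machines and ship back two vectors each, which is what gives the model its small memory footprint.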