A Tutorial on Sparse Gaussian Processes and Variational Inference
Gaussian processes (GPs) provide a framework for Bayesian inference that can
offer principled uncertainty estimates for a large range of problems. For
example, if we consider regression problems with Gaussian likelihoods, a GP
model enjoys a posterior in closed form. However, identifying the posterior GP
scales cubically with the number of training examples and requires storing all
examples in memory. In order to overcome these obstacles, sparse GPs have been
proposed that approximate the true posterior GP with pseudo-training examples.
Importantly, the number of pseudo-training examples is user-defined and enables
control over computational and memory complexity. In the general case, sparse
GPs do not enjoy closed-form solutions and one has to resort to approximate
inference. In this context, a convenient choice for approximate inference is
variational inference (VI), where the problem of Bayesian inference is cast as
an optimization problem -- namely, to maximize a lower bound of the log
marginal likelihood. This paves the way for a powerful and versatile framework,
where pseudo-training examples are treated as optimization arguments of the
approximate posterior that are jointly identified together with hyperparameters
of the generative model (i.e. prior and likelihood). The framework can
naturally handle a wide scope of supervised learning problems, ranging from
regression with heteroscedastic and non-Gaussian likelihoods to classification
problems with discrete labels, but also multilabel problems. The purpose of
this tutorial is to provide access to the basic matter for readers without
prior knowledge in both GPs and VI. A proper exposition of the subject also
enables access to more recent advances (like importance-weighted VI as well as
interdomain, multioutput and deep GPs) that can serve as an inspiration for new
research ideas.
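For the Gaussian-likelihood regression case mentioned above, the collapsed variational lower bound can be sketched in a few lines of numpy. This is a minimal illustration under assumed choices (an RBF kernel, fixed hyperparameters, hypothetical helper names), not the tutorial's own code:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between two sets of inputs.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def collapsed_elbo(X, y, Z, noise=0.1):
    """Collapsed variational lower bound on the log marginal likelihood of
    GP regression with pseudo-inputs Z (a Titsias-style bound, sketched)."""
    n, m = len(X), len(Z)
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(m)   # jitter for numerical stability
    Kmn = rbf(Z, X)
    Knn_diag = np.full(n, 1.0)           # RBF prior variance on the diagonal
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)          # so that Qnn = A.T @ A
    Qnn = A.T @ A                        # Nystrom approximation of Knn
    cov = Qnn + noise * np.eye(n)
    sign, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, y)
    log_lik = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ alpha)
    # Trace penalty for the mass Qnn fails to capture.
    trace_term = -0.5 / noise * (Knn_diag.sum() - np.trace(Qnn))
    return log_lik + trace_term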
Orthogonally Decoupled Variational Gaussian Processes
Gaussian processes (GPs) provide a powerful non-parametric framework for
reasoning over functions. Despite appealing theory, their superlinear
computational and memory complexities have presented a long-standing challenge.
State-of-the-art sparse variational inference methods trade modeling accuracy
against complexity. However, the complexities of these methods still scale
superlinearly in the number of basis functions, implying that sparse GP
methods are able to learn from large datasets only when a small model is used.
Recently, a decoupled approach was proposed that removes the unnecessary
coupling between the complexities of modeling the mean and the covariance
functions of a GP. It achieves a linear complexity in the number of mean
parameters, so an expressive posterior mean function can be modeled. While
promising, this approach suffers from optimization difficulties due to
ill-conditioning and non-convexity. In this work, we propose an alternative
decoupled parametrization. It adopts an orthogonal basis in the mean function
to model the residuals that cannot be learned by the standard coupled approach.
Therefore, our method extends, rather than replaces, the coupled approach to
achieve strictly better performance. This construction admits a straightforward
natural gradient update rule, so the structure of the information manifold that
is lost during decoupling can be leveraged to speed up learning. Empirically,
our algorithm demonstrates significantly faster convergence in multiple
experiments.
Comment: Appearing NIPS 201
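The decoupled predictive mean described above can be sketched as follows. This is an illustration under assumed choices (RBF kernel, hypothetical names), not the paper's code: the larger gamma basis is projected orthogonally to the span of the shared beta basis, so it models only the residual the coupled part cannot capture.

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def decoupled_mean(Xs, beta, gamma, a_beta, a_gamma):
    """Posterior mean of an orthogonally decoupled variational GP (sketch).

    beta  : (m, d) inducing inputs shared with the covariance (coupled part)
    gamma : (p, d) extra mean-only basis, with p >> m allowed
    """
    Kbb = rbf(beta, beta) + 1e-8 * np.eye(len(beta))
    Ksb = rbf(Xs, beta)
    Ksg = rbf(Xs, gamma)
    Kbg = rbf(beta, gamma)
    # Residual basis: k_gamma(x) - k_beta(x) Kbb^{-1} K(beta, gamma),
    # i.e. the part of the gamma features orthogonal to the beta span.
    resid = Ksg - Ksb @ np.linalg.solve(Kbb, Kbg)
    return Ksb @ a_beta + resid @ a_gamma
```

Evaluating this mean costs O(p) per test point while the covariance still involves only the m coupled inducing points, which is the decoupling the abstract refers to.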
Sparse Linear Identifiable Multivariate Modeling
In this paper we consider sparse and identifiable linear latent variable
(factor) and linear Bayesian network models for parsimonious analysis of
multivariate data. We propose a computationally efficient method for joint
parameter and model inference, and model comparison. It consists of a fully
Bayesian hierarchy for sparse models using slab and spike priors (two-component
delta-function and continuous mixtures), non-Gaussian latent factors and a
stochastic search over the ordering of the variables. The framework, which we
call SLIM (Sparse Linear Identifiable Multivariate modeling), is validated and
benchmarked on artificial and real biological data sets. SLIM is closest in
spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in
inference, Bayesian network structure learning and model comparison.
Experimentally, SLIM performs equally well or better than LiNGAM with
comparable computational complexity. We attribute this mainly to the stochastic
search strategy used, and to parsimony (sparsity and identifiability), which is
an explicit part of the model. We propose two extensions to the basic i.i.d.
linear framework: non-linear dependence on observed variables, called SNIM
(Sparse Non-linear Identifiable Multivariate modeling) and allowing for
correlations between latent variables, called CSLIM (Correlated SLIM), for
temporal and/or spatial data. The source code and scripts are available from
http://cogsys.imm.dtu.dk/slim/.
Comment: 45 pages, 17 figures
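The two-component slab and spike prior mentioned above (a delta-function spike at zero mixed with a continuous slab) can be sketched as a sampler for a sparse loading matrix. The function name and parameters are illustrative, not SLIM's actual interface:

```python
import numpy as np

def sample_spike_slab(rng, d, k, pi=0.3, slab_scale=1.0):
    """Draw a d x k loading matrix from a spike-and-slab prior (sketch):
    each entry is exactly zero (the delta-function 'spike') with
    probability 1 - pi, and Gaussian with std slab_scale (the continuous
    'slab') with probability pi, giving explicit sparsity."""
    mask = rng.random((d, k)) < pi            # which entries are "on"
    slab = rng.normal(0.0, slab_scale, (d, k))
    return np.where(mask, slab, 0.0)
```

In a full model such as SLIM the inclusion probabilities and slab parameters are themselves given priors and inferred, rather than fixed as here.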
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets
Spatial process models for analyzing geostatistical data entail computations
that become prohibitive as the number of spatial locations becomes large. This
manuscript develops a class of highly scalable Nearest Neighbor Gaussian
Process (NNGP) models to provide fully model-based inference for large
geostatistical datasets. We establish that the NNGP is a well-defined spatial
process providing legitimate finite-dimensional Gaussian densities with sparse
precision matrices. We embed the NNGP as a sparsity-inducing prior within a
rich hierarchical modeling framework and outline how computationally efficient
Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or
decomposing large matrices. The number of floating point operations (flops) per
iteration of this algorithm is linear in the number of spatial locations,
thereby rendering substantial scalability. We illustrate the computational and
inferential benefits of the NNGP over competing methods using simulation
studies and also analyze forest biomass from a massive United States Forest
Inventory dataset at a scale that precludes alternative dimension-reducing
methods.
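The sparse precision matrix of an NNGP can be sketched via the usual sequential (Vecchia-type) construction: each ordered location is regressed on its m nearest preceding neighbors, yielding a sparse lower-triangular factor. This is an illustration with an assumed exponential covariance and hypothetical names, not the paper's implementation:

```python
import numpy as np

def exp_cov(D, sigma2=1.0, phi=1.0):
    # Exponential covariance evaluated on a matrix of distances.
    return sigma2 * np.exp(-phi * D)

def nngp_precision(coords, m=3, sigma2=1.0, phi=1.0):
    """Sparse NNGP precision matrix (sketch). Each location i is regressed
    on its m nearest earlier neighbors, giving regression weights B and
    conditional variances F so that Q = (I - B)^T F^{-1} (I - B)."""
    n = len(coords)
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    B = np.zeros((n, n))
    F = np.zeros(n)
    F[0] = sigma2
    for i in range(1, n):
        nb = np.argsort(D[i, :i])[:m]          # nearest preceding neighbors
        Knn = exp_cov(D[np.ix_(nb, nb)], sigma2, phi)
        kin = exp_cov(D[i, nb], sigma2, phi)
        w = np.linalg.solve(Knn + 1e-10 * np.eye(len(nb)), kin)
        B[i, nb] = w                           # kriging weights
        F[i] = sigma2 - w @ kin                # conditional variance
    I_B = np.eye(n) - B
    return I_B.T @ np.diag(1.0 / F) @ I_B
```

Only m entries per row of B are nonzero, which is what makes the resulting Gaussian density cheap to evaluate without storing or decomposing dense n x n matrices; when m covers all preceding locations the construction recovers the exact GP precision.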
High-Dimensional Bayesian Geostatistics
With the growing capabilities of Geographic Information Systems (GIS) and
user-friendly software, statisticians today routinely encounter geographically
referenced data containing observations from a large number of spatial
locations and time points. Over the last decade, hierarchical spatiotemporal
process models have become widely deployed statistical tools for researchers to
better understand the complex nature of spatial and temporal variability.
However, fitting hierarchical spatiotemporal models often involves expensive
matrix computations with complexity increasing in cubic order for the number of
spatial locations and temporal points. This renders such models infeasible for
large data sets. This article offers a focused review of two methods for
constructing well-defined highly scalable spatiotemporal stochastic processes.
Both these processes can be used as "priors" for spatiotemporal random fields.
The first approach constructs a low-rank process operating on a
lower-dimensional subspace. The second approach constructs a Nearest-Neighbor
Gaussian Process (NNGP) that ensures sparse precision matrices for its finite
realizations. Both processes can be exploited as a scalable prior embedded
within a rich hierarchical modeling framework to deliver full Bayesian
inference. These approaches can be described as model-based solutions for big
spatiotemporal datasets. The models ensure that the algorithmic complexity
has ~n floating point operations (flops), where n is the number of spatial
locations (per iteration). We compare these methods and provide some insight
into their methodological underpinnings.
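The first, low-rank approach above (a predictive-process-style construction) can be sketched as follows: the parent process is projected onto m knot locations, replacing the dense n x n covariance by a rank-m surrogate. The kernel choice and names are assumptions for illustration:

```python
import numpy as np

def exp_cov(D, sigma2=1.0, phi=1.0):
    # Exponential covariance evaluated on a matrix of distances.
    return sigma2 * np.exp(-phi * D)

def predictive_process_cov(coords, knots, sigma2=1.0, phi=1.0):
    """Low-rank covariance from a knot-based projection (sketch): the
    n x n covariance is replaced by K_nm Kmm^{-1} K_mn, so downstream
    linear algebra costs O(n m^2) instead of O(n^3)."""
    Dnm = np.linalg.norm(coords[:, None, :] - knots[None, :, :], axis=-1)
    Dmm = np.linalg.norm(knots[:, None, :] - knots[None, :, :], axis=-1)
    Knm = exp_cov(Dnm, sigma2, phi)
    Kmm = exp_cov(Dmm, sigma2, phi) + 1e-10 * np.eye(len(knots))
    return Knm @ np.linalg.solve(Kmm, Knm.T)
```

One known property of this projection is that it never overstates the marginal variance: the low-rank diagonal is bounded above by the parent process variance, which is one reason bias-corrected variants exist in the literature.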