Stochastic expectation propagation
Expectation propagation (EP) is a deterministic approximation algorithm that
is often used to perform approximate Bayesian parameter learning. EP
approximates the full intractable posterior distribution through a set of local
approximations that are iteratively refined for each datapoint. EP can offer
analytic and computational advantages over other approximations, such as
Variational Inference (VI), and is the method of choice for a number of models.
The local nature of EP appears to make it an ideal candidate for performing
Bayesian learning on large models in large-scale dataset settings. However, EP
has a crucial limitation in this context: the number of approximating factors
needs to increase with the number of data-points, N, which often entails a
prohibitively large memory overhead. This paper presents an extension to EP,
called stochastic expectation propagation (SEP), that maintains a global
posterior approximation (like VI) but updates it in a local way (like EP).
Experiments on a number of canonical learning problems using synthetic and
real-world datasets indicate that SEP performs almost as well as full EP, but
reduces the memory consumption by a factor of N. SEP is therefore ideally
suited to performing approximate Bayesian learning in the large model, large
dataset setting.
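The shared-factor idea behind SEP can be sketched on a toy conjugate model. The example below is a hypothetical illustration, not the paper's code: it infers the mean of a Gaussian with known noise variance, where moment matching on the tilted distribution is exact, so the update structure is visible without approximation machinery. All parameter names and values are illustrative.

```python
import numpy as np

# SEP sketch on a toy conjugate model (hypothetical illustration): infer the
# mean theta of a Gaussian with known noise variance. Gaussians are stored as
# natural parameters (precision-mean eta, precision lam), so products of
# factors become sums of parameter vectors.

rng = np.random.default_rng(0)
N, noise_var = 1000, 1.0
data = rng.normal(2.0, np.sqrt(noise_var), size=N)

prior = np.array([0.0, 1.0])      # N(0, 1) prior as (eta, lam)
f = np.array([0.0, 0.0])          # ONE shared "average" factor (vs N in EP)
lr = 0.01                          # damping for the stochastic update

for _ in range(20000):
    x = data[rng.integers(N)]      # visit one datapoint at a time
    q = prior + N * f              # global approximation: q ∝ prior * f^N
    cavity = q - f                 # remove a single copy of the shared factor
    # Tilted distribution = cavity * exact likelihood term. In this conjugate
    # model its natural parameters are closed-form, so moment matching is exact.
    tilted = cavity + np.array([x / noise_var, 1.0 / noise_var])
    f_new = tilted - cavity        # local factor implied by this datapoint
    f = (1 - lr) * f + lr * f_new  # damped refinement of the shared factor

q = prior + N * f
post_mean, post_var = q[0] / q[1], 1.0 / q[1]
print(post_mean, post_var)         # close to the exact conjugate posterior
```

The memory saving is visible directly: EP would store N local factors, while SEP keeps only `f`, updating it with the average effect of the likelihood terms.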
Differentially private stochastic expectation propagation (DP-SEP)
We are interested in privatizing an approximate posterior inference algorithm
called Expectation Propagation (EP). EP approximates the posterior by
iteratively refining approximations to the local likelihoods, and is known to
provide better posterior uncertainty estimates than variational inference
(VI). However, EP requires a large amount of memory to maintain all the local
approximations associated with each datapoint in the training data. To overcome this
challenge, stochastic expectation propagation (SEP) considers a single unique
local factor that captures the average effect of each likelihood term on the
posterior and refines it in a way analogous to EP. In terms of privacy, SEP is
more tractable than EP because at each refining step of a factor, the remaining
factors are fixed and do not depend on other datapoints as in EP, which makes
the sensitivity analysis straightforward. We provide a theoretical analysis of
the privacy-accuracy trade-off in the posterior estimates under our method,
called differentially private stochastic expectation propagation (DP-SEP).
Furthermore, we demonstrate the performance of our DP-SEP algorithm on both
synthetic and real-world datasets in terms of the quality of posterior
estimates at different levels of guaranteed privacy.
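The privatization idea described above, that each refinement touches only the single shared factor and so has a sensitivity that can be bounded directly, can be sketched schematically. This is a generic clip-and-noise step with hypothetical parameters, not the paper's calibrated mechanism, which tunes the noise scale to a formal (epsilon, delta) guarantee.

```python
import numpy as np

# Schematic DP-SEP-style step (hypothetical helper, not the paper's exact
# mechanism): clip the per-datapoint change to the shared factor to bound its
# sensitivity, then add Gaussian noise before applying it.

def dp_sep_step(f, f_new, lr=0.1, clip=1.0, noise_scale=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    delta = f_new - f
    # Clip to norm <= clip so one datapoint's influence is bounded.
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip / max(norm, 1e-12))
    # Gaussian mechanism: noise proportional to the sensitivity bound.
    delta = delta + rng.normal(0.0, noise_scale * clip, size=delta.shape)
    return f + lr * delta
```

Because the remaining copies of the shared factor stay fixed during a step, only this one clipped, noised update depends on the visited datapoint, which is what makes the sensitivity analysis straightforward.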
Stochastic Expectation Propagation for Large Scale Gaussian Process Classification
A method for large scale Gaussian process classification has been recently
proposed based on expectation propagation (EP). Such a method allows Gaussian
process classifiers to be trained on very large datasets that were out of the
reach of previous deployments of EP and has been shown to be competitive with
related techniques based on stochastic variational inference. Nevertheless, the
memory resources required scale linearly with the dataset size, unlike in
variational methods. This is a severe limitation when the number of instances
is very large. Here we show that this problem is avoided when stochastic EP is
used to train the model.
Training Deep Gaussian Processes using Stochastic Expectation Propagation and Probabilistic Backpropagation
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations
of Gaussian processes (GPs) and are formally equivalent to neural networks with
multiple, infinitely wide hidden layers. DGPs are probabilistic and
non-parametric and as such are arguably more flexible, have a greater capacity
to generalise, and provide better calibrated uncertainty estimates than
alternative deep models. The focus of this paper is scalable approximate
Bayesian learning of these networks. The paper develops a novel and efficient
extension of probabilistic backpropagation, a state-of-the-art method for
training Bayesian neural networks, that can be used to train DGPs. The new
method leverages a recently proposed method for scaling Expectation
Propagation, called stochastic Expectation Propagation. The method is able to
automatically discover useful input warping, expansion or compression, and it
is therefore a flexible form of Bayesian kernel design. We demonstrate the
success of the new method for supervised learning on several real-world
datasets, showing that it typically outperforms GP regression and is never much
worse.