Bayesian leave-one-out cross-validation for large data
Model inference, such as model comparison, model checking, and model
selection, is an important part of model development. Leave-one-out
cross-validation (LOO) is a general approach for assessing the generalizability
of a model, but unfortunately, LOO does not scale well to large datasets. We
propose a combination of using approximate inference techniques and
probability-proportional-to-size-sampling (PPS) for fast LOO model evaluation
for large datasets. We provide both theoretical and empirical results showing
good properties for large data. Comment: Accepted to ICML 2019. This version is the submitted paper.
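The core idea of combining PPS subsampling with LOO can be illustrated with a toy sketch. This is not the paper's method: the per-observation elpd values below are synthetic stand-ins for quantities that would normally each require an expensive leave-one-out computation, and the "size" proxy is a hypothetical cheap surrogate. Drawing a PPS subsample and applying the Hansen-Hurwitz estimator then recovers the total elpd from far fewer evaluations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n observations with synthetic per-observation elpd values,
# standing in for LOO log predictive densities that are expensive to compute.
n = 10_000
elpd = -np.abs(rng.normal(1.0, 0.5, size=n))

# Hypothetical cheap "size" proxy, assumed roughly proportional to |elpd|
# (in practice it might come from an approximate posterior).
size = np.abs(elpd) + 1e-12
p = size / size.sum()            # PPS selection probabilities

m = 200                          # subsample size, much smaller than n
idx = rng.choice(n, size=m, replace=True, p=p)

# Hansen-Hurwitz estimator of the total elpd from m evaluations only
elpd_hat = np.mean(elpd[idx] / p[idx])

print(f"true total elpd: {elpd.sum():.1f}, PPS estimate: {elpd_hat:.1f}")
```

Because the proxy here is exactly proportional to |elpd|, the estimator is essentially zero-variance; with an imperfect real-world proxy the estimate would carry sampling variance that shrinks as the proxy improves.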
Distilling importance sampling
The two main approaches to Bayesian inference are sampling and optimisation
methods. However, many complicated posteriors are difficult to approximate by
either. Therefore we propose a novel approach combining features of both. We
use a flexible parameterised family of densities, such as a normalising flow.
Given a density from this family approximating the posterior, we use importance
sampling to produce a weighted sample from a more accurate posterior
approximation. This sample is then used in optimisation to update the
parameters of the approximate density, which we view as distilling the
importance sampling results. We iterate these steps and gradually improve the
quality of the posterior approximation. We illustrate our method in two
challenging examples: a queueing model and a stochastic differential equation
model. Comment: This version adds a second application and fixes some minor errors.
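The iterate-sample-reweight-refit loop described above can be sketched in a deliberately simplified form. This is not the paper's implementation: a single Gaussian stands in for the normalising flow, the skewed target density is an illustrative choice, and the "distillation" step is a closed-form weighted maximum-likelihood fit rather than a gradient-based update. Each iteration samples from the current approximation, computes self-normalised importance weights against the (unnormalised) target, and refits the approximation to the weighted sample:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):
    """Unnormalised log target: a standard normal skewed by a sigmoid factor."""
    return -0.5 * x**2 - np.logaddexp(0.0, -3.0 * x)

def log_q(x, mu, log_sigma):
    """Log density of the Gaussian approximation (flow stand-in)."""
    sigma = np.exp(log_sigma)
    return -0.5 * ((x - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2 * np.pi)

mu, log_sigma = 0.0, 0.0
for step in range(20):
    # Sample from the current approximation q
    x = mu + np.exp(log_sigma) * rng.standard_normal(4000)
    # Self-normalised importance weights w ∝ p(x) / q(x)
    logw = log_p(x) - log_q(x, mu, log_sigma)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # "Distil" the weighted sample: weighted maximum-likelihood Gaussian fit
    mu = np.sum(w * x)
    log_sigma = 0.5 * np.log(np.sum(w * (x - mu) ** 2))

# Reference mean of the target by numerical integration on a fine grid
grid = np.linspace(-8.0, 8.0, 4001)
dens = np.exp(log_p(grid))
ref_mean = (grid * dens).sum() / dens.sum()
print(f"fitted mean {mu:.3f} vs reference mean {ref_mean:.3f}")
```

At the fixed point the Gaussian's moments match the importance-sampling estimates of the target's moments; the paper's use of a normalising flow lets the same loop capture far richer posterior shapes than this single-Gaussian stand-in.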