Practical bounds on the error of Bayesian posterior approximations: A nonasymptotic approach
Bayesian inference typically requires the computation of an approximation to
the posterior distribution. An important requirement for an approximate
Bayesian inference algorithm is to output high-accuracy posterior mean and
uncertainty estimates. Classical Monte Carlo methods, particularly Markov Chain
Monte Carlo, remain the gold standard for approximate Bayesian inference
because they have a robust finite-sample theory and reliable convergence
diagnostics. However, alternative methods, which are more scalable or apply to
problems where Markov Chain Monte Carlo cannot be used, lack the same
finite-data approximation theory and tools for evaluating their accuracy. In
this work, we develop a flexible new approach to bounding the error of mean and
uncertainty estimates of scalable inference algorithms. Our strategy is to
control the estimation errors in terms of Wasserstein distance, then bound the
Wasserstein distance via a generalized notion of Fisher distance. Unlike
computing the Wasserstein distance, which requires access to the normalized
posterior distribution, the Fisher distance is tractable to compute because it
requires access only to the gradient of the log posterior density. We
demonstrate the usefulness of our Fisher distance approach by deriving bounds
on the Wasserstein error of the Laplace approximation and Hilbert coresets. We
anticipate that our approach will be applicable to many other approximate
inference methods such as the integrated Laplace approximation, variational
inference, and approximate Bayesian computation.
Comment: 22 pages, 2 figures
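The tractability argument in the abstract can be made concrete with a toy sketch. The quantity below is the plain squared Fisher divergence E_q[(∇log q − ∇log p)²] between a Gaussian approximation q and an unnormalized target p; the target, the approximation, and the Monte Carlo estimator are hypothetical illustrations (the paper works with a generalized Fisher distance and much broader settings). The key point survives even in this sketch: only the gradient of the log posterior is needed, never its normalizing constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D target: unnormalized log posterior proportional to -theta^4 / 4.
# Only its gradient (the score) is needed -- no normalizing constant.
def grad_log_post(theta):
    return -theta**3

# Gaussian approximation q = N(mu, sigma^2) and its score.
mu, sigma = 0.0, 1.0
def grad_log_q(theta):
    return -(theta - mu) / sigma**2

# Monte Carlo estimate of the squared Fisher divergence
#   E_q[(d/dtheta log q(theta) - d/dtheta log p(theta))^2],
# using samples from q only.
samples = rng.normal(mu, sigma, size=100_000)
fisher_sq = np.mean((grad_log_q(samples) - grad_log_post(samples))**2)
print(fisher_sq)  # close to the analytic value 10 for this toy pair
```

For this particular pair the expectation can be computed in closed form (E[(θ³ − θ)²] = 15 − 6 + 1 = 10 under θ ~ N(0, 1)), which makes it a convenient sanity check for the estimator.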
Data Summarizations for Scalable, Robust and Privacy-Aware Learning in High Dimensions
The advent of large-scale datasets has offered unprecedented amounts of information for building statistically powerful machines, but it has also introduced a remarkable computational challenge: how can we efficiently process massive data? This thesis presents a suite of data reduction methods that make learning algorithms scale to large datasets by extracting a succinct, model-specific representation that summarizes the full data collection: a coreset. Our frameworks support datasets of arbitrary dimensionality by design, and can be used for general-purpose Bayesian inference under real-world constraints, including privacy preservation and robustness to outliers, encompassing diverse uncertainty-aware data analysis tasks such as density estimation, classification and regression.
We first motivate the necessity for novel data reduction techniques by developing a reidentification attack on coarsened representations of private behavioural data. Analysing longitudinal records of human mobility, we detect privacy-revealing structural patterns that remain preserved in reduced graph representations of individuals’ information of manageable size. These unique patterns enable mounting linkage attacks via structural similarity computations on longitudinal mobility traces, revealing an overlooked yet real privacy threat.
We then propose a scalable variational inference scheme for approximating posteriors on large datasets via learnable weighted pseudodata, termed pseudocoresets. We show that the use of pseudodata overcomes the constraints on minimum summary size for a given approximation quality that data dimensionality imposes on all existing Bayesian coreset constructions. Moreover, it allows us to develop a scheme for pseudocoreset-based summarization that satisfies the standard framework of differential privacy by construction; in this way, we can release reduced-size privacy-preserving representations of sensitive datasets that are amenable to arbitrary post-processing.
Subsequently, we consider summarizations for large-scale Bayesian inference in scenarios where observed datapoints depart from the statistical assumptions of our model. Using robust divergences, we develop a method for constructing coresets that are resilient to model misspecification. Crucially, this method automatically discards outliers from the generated data summaries. We thus deliver robustified scalable representations for inference that are suitable for applications involving contaminated and unreliable data sources.
We demonstrate the performance of the proposed summarization techniques on multiple parametric statistical models, and on diverse simulated and real-world datasets, from music genre features to hospital readmission records, considering a wide range of data dimensionalities.
Nokia Bell Labs,
Lundgren Fund,
Darwin College, University of Cambridge
Department of Computer Science & Technology, University of Cambridge
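The coreset idea running through the thesis can be illustrated with a minimal sketch: replace the full-data log-likelihood, a sum over N terms, with a small weighted sum over M ≪ N points. The uniform-subsampling construction and the conjugate Gaussian-mean model below are hypothetical illustrations of the interface only, not the thesis's (pseudo)coreset constructions, which optimize the summary far more carefully.

```python
import numpy as np

rng = np.random.default_rng(1)

# Full dataset: N observations from a Gaussian with unknown mean.
N = 10_000
data = rng.normal(2.0, 1.0, size=N)

# A coreset replaces sum_n log p(x_n | theta) with a small weighted sum
# sum_{m in S} w_m log p(x_m | theta). Here: the simplest construction,
# uniform subsampling with weights N / M.
M = 100
idx = rng.choice(N, size=M, replace=False)
weights = np.full(M, N / M)

# Conjugate posterior for a Gaussian mean (prior N(0, 1), unit noise
# variance): precision = 1 + sum(w), mean = sum(w * x) / precision.
def posterior_mean(xs, ws):
    prec = 1.0 + ws.sum()
    return (ws * xs).sum() / prec

full_mean = posterior_mean(data, np.ones(N))
core_mean = posterior_mean(data[idx], weights)
print(full_mean, core_mean)  # the coreset posterior mean tracks the full one
```

Because the weighted likelihood has the same functional form as the full one, any downstream inference routine can run on the M-point summary unchanged; that interchangeability is what makes coresets a general-purpose scaling tool.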
Bayes Hilbert Spaces for Posterior Approximation
Performing inference in Bayesian models requires sampling algorithms to draw
samples from the posterior. This becomes prohibitively expensive as the size of
the data set increases. Constructing approximations to the posterior which are
cheap to evaluate is a popular approach to circumvent this issue, which raises
the question of what space is appropriate for approximating Bayesian posterior
measures. This manuscript studies the application of Bayes Hilbert spaces to
the posterior approximation problem. Bayes Hilbert spaces are studied in
functional data analysis, in the context where the observed functions are
probability density functions, and their application to computational Bayesian
problems is in its infancy. This manuscript outlines Bayes Hilbert spaces and
their connection to Bayesian computation, in particular novel connections
between Bayes Hilbert spaces, Bayesian coreset algorithms and kernel-based
distances.
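As a brief illustration of why these spaces suit Bayesian computation (a standard fact from the Bayes Hilbert space literature, not a result specific to this manuscript), densities form a vector space under the perturbation and powering operations:

```latex
\[
(p \oplus q)(x) \;=\; \frac{p(x)\,q(x)}{\int p(y)\,q(y)\,\mathrm{d}y},
\qquad
(\alpha \odot p)(x) \;=\; \frac{p(x)^{\alpha}}{\int p(y)^{\alpha}\,\mathrm{d}y},
\]
```

so Bayes' theorem, \(\pi(\theta \mid x) \propto \pi(\theta)\, L(x \mid \theta)\), is exactly a perturbation of the prior by the (normalizable) likelihood: the posterior is \(\pi \oplus L\). Updating by data becomes vector addition, which is the structural reason these spaces are a natural setting for posterior approximation.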
Black-box Coreset Variational Inference
Recent advances in coreset methods have shown that a selection of
representative datapoints can replace massive volumes of data for Bayesian
inference, preserving the relevant statistical information and significantly
accelerating subsequent downstream tasks. Existing variational coreset
constructions rely either on selecting subsets of the observed datapoints, or
on jointly performing approximate inference and optimizing pseudodata in the
observed space, akin to inducing point methods in Gaussian processes. So far,
both approaches are limited by complexities in evaluating their objectives for
general purpose models, and require generating samples from a typically
intractable posterior over the coreset throughout inference and testing. In
this work, we present a black-box variational inference framework for coresets
that overcomes these constraints and enables principled application of
variational coresets to intractable models, such as Bayesian neural networks.
We apply our techniques to supervised learning problems, and compare them with
existing approaches in the literature for data summarization and inference.
Comment: NeurIPS 202
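The "black-box" ingredient in frameworks like this is a gradient estimator that queries the model only through pointwise evaluations. A standard such estimator, sketched below on a toy problem that is purely illustrative (it is not the paper's coreset objective), is the score-function (REINFORCE) identity ∇_λ E_{q_λ}[f(θ)] = E_{q_λ}[f(θ) ∇_λ log q_λ(θ)]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy check of the score-function gradient estimator.
# Take q = N(mu, 1) and f(theta) = theta^2, so that
#   E_q[f] = mu^2 + 1  and  d/dmu E_q[f] = 2 * mu.
mu = 1.5
theta = rng.normal(mu, 1.0, size=1_000_000)

# Score of q w.r.t. mu is (theta - mu); no gradient of f is ever needed,
# which is what makes the estimator applicable to black-box objectives.
grad_est = np.mean(theta**2 * (theta - mu))
print(grad_est)  # close to 2 * mu = 3.0
```

Because only evaluations of f and the variational score are required, the same template applies when f is an intractable model's log-density, which is the property that lets coreset weights and pseudodata be optimized without model-specific derivations.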