Bayesian Posterior Contraction Rates for Linear Severely Ill-posed Inverse Problems
We consider a class of linear ill-posed inverse problems arising from
inversion of a compact operator with singular values which decay exponentially
to zero. We adopt a Bayesian approach, assuming a Gaussian prior on the unknown
function. If the observational noise is assumed to be Gaussian then this prior
is conjugate to the likelihood so that the posterior distribution is also
Gaussian. We study Bayesian posterior consistency in the small observational
noise limit. We assume that the forward operator and the prior and noise
covariance operators commute with one another. We show how, for given
smoothness assumptions on the truth, the scale parameter of the prior can be
adjusted to optimize the rate of posterior contraction to the truth, and we
explicitly compute the logarithmic rate.
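
As a concrete illustration of the conjugate structure (our notation, not necessarily the paper's): in the commuting case the problem diagonalizes in the shared eigenbasis, and the posterior is obtained coordinate-wise from the standard scalar Gaussian update.

    % Per-coordinate model in the shared eigenbasis, with singular values a_k,
    % noise level \delta, prior scale \tau and prior eigenvalues \lambda_k:
    %   y_k = a_k u_k + \delta \xi_k,   \xi_k ~ N(0,1),   u_k ~ N(0, \tau^2 \lambda_k).
    \[
      u_k \mid y_k \;\sim\; N\!\left(
        \frac{\tau^2 \lambda_k a_k}{a_k^2 \tau^2 \lambda_k + \delta^2}\, y_k,\;
        \frac{\tau^2 \lambda_k \delta^2}{a_k^2 \tau^2 \lambda_k + \delta^2}
      \right).
    \]
    % With a_k decaying exponentially (the severely ill-posed case), high
    % coordinates carry essentially no information about the truth, which is
    % what caps the contraction rate at a logarithmic one.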
Analysis of the Gibbs sampler for hierarchical inverse problems
Many inverse problems arising in applications come from continuum models
where the unknown parameter is a field. In practice the unknown field is
discretized, resulting in a problem in R^N, with an understanding
that refining the discretization, that is, increasing N, will often be
desirable. In the context of Bayesian inversion this situation suggests the
importance of two issues: (i) defining hyper-parameters in such a way that they
are interpretable in the continuum limit N → ∞ and so that their
values may be compared between different discretization levels; (ii)
understanding the efficiency of algorithms for probing the posterior
distribution, as a function of large N. Here we address these two issues in
the context of linear inverse problems subject to additive Gaussian noise
within a hierarchical modelling framework based on a Gaussian prior for the
unknown field and an inverse-gamma prior for a hyper-parameter, namely the
amplitude of the prior variance. The structure of the model is such that the
Gibbs sampler can be easily implemented for probing the posterior distribution.
Subscribing to the dogma that one should think infinite-dimensionally before
implementing in finite dimensions, we present function space intuition and
provide rigorous theory showing that as N increases, the component of the
Gibbs sampler for sampling the amplitude of the prior variance becomes
increasingly slower. We discuss a reparametrization of the prior variance that
is robust with respect to the increase in dimension; we give numerical
experiments which exhibit that our reparametrization prevents the slowing down.
Our intuition on the behaviour of the prior hyper-parameter, with and without
reparametrization, is sufficiently general to include a broad class of
nonlinear inverse problems as well as other families of hyper-priors.
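
A minimal sketch of such a Gibbs sampler, assuming the conjugate hierarchical model y = A u + eta, eta ~ N(0, sigma2 I), u | delta ~ N(0, delta C0), delta ~ InvGamma(alpha0, beta0); all names below are our own illustrative choices, not the paper's code.

    import numpy as np

    def gibbs(y, A, C0, sigma2, alpha0, beta0, n_iter=5000, seed=None):
        """Gibbs sampler for y = A u + eta, eta ~ N(0, sigma2 I),
        u | delta ~ N(0, delta * C0), delta ~ InvGamma(alpha0, beta0).
        Both full conditionals are conjugate, so each step is an exact draw."""
        rng = np.random.default_rng(seed)
        N = A.shape[1]
        C0_inv = np.linalg.inv(C0)
        delta = beta0 / (alpha0 + 1.0)            # start at the prior mode
        us, deltas = [], []
        for _ in range(n_iter):
            # u | delta, y: Gaussian with precision A^T A / sigma2 + C0^{-1} / delta
            cov = np.linalg.inv(A.T @ A / sigma2 + C0_inv / delta)
            mean = cov @ (A.T @ y) / sigma2
            u = rng.multivariate_normal(mean, cov)
            # delta | u: inverse-gamma with updated shape and scale
            shape = alpha0 + N / 2.0
            scale = beta0 + 0.5 * u @ C0_inv @ u
            delta = scale / rng.gamma(shape)      # InvGamma draw via 1/Gamma
            us.append(u)
            deltas.append(delta)
        return np.array(us), np.array(deltas)

The reparametrization discussed above would, in spirit, sample a rescaled variable v with u = sqrt(delta) v in place of u itself, so that the delta-update sees the data directly and does not degenerate as N grows.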
Adaptive inference over Besov spaces in the white noise model using p-exponential priors
In many scientific applications the aim is to infer a function which is
smooth in some areas, but rough or even discontinuous in other areas of its
domain. Such spatially inhomogeneous functions can be modelled in Besov spaces
with suitable integrability parameters. In this work we study adaptive Bayesian
inference over Besov spaces in the white noise model, from the point of view
of rates of contraction, using p-exponential priors, which range between Laplace
and Gaussian and possess regularity and scaling hyper-parameters. To achieve
adaptation, we employ empirical and hierarchical Bayes approaches for tuning
these hyper-parameters. Our results show that, while it is known that Gaussian
priors can attain the minimax rate only in Besov spaces of spatially
homogeneous functions, Laplace priors attain the minimax or nearly the minimax
rate in both Besov spaces of spatially homogeneous functions and Besov spaces
permitting spatial inhomogeneities.
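
A small sketch of what draws from such priors look like, under one common series parametrization (sine basis on [0, 1], coefficient decay k^{-(1/2 + alpha)} in dimension one); these concrete choices are ours, not prescribed by the paper.

    import numpy as np

    def p_exp_draws(p, size, rng):
        """iid draws with density proportional to exp(-|x|^p / p), 1 <= p <= 2;
        uses |X|^p / p ~ Gamma(1/p, 1), hence X = sign * (p * G)**(1/p)."""
        g = rng.gamma(1.0 / p, size=size)
        s = rng.choice([-1.0, 1.0], size=size)
        return s * (p * g) ** (1.0 / p)

    def sample_series_prior(p, alpha, K=200, n_grid=512, seed=None):
        """One draw of u(x) = sum_k k^{-(1/2 + alpha)} xi_k e_k(x) on [0, 1],
        with sine basis e_k and xi_k iid p-exponential."""
        rng = np.random.default_rng(seed)
        x = np.linspace(0.0, 1.0, n_grid)
        k = np.arange(1, K + 1)
        coeffs = k ** (-(0.5 + alpha)) * p_exp_draws(p, K, rng)
        basis = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))  # e_k(x)
        return x, coeffs @ basis

Draws with p = 1 (Laplace) exhibit occasional large coefficients, which is the mechanism behind their better handling of spatially inhomogeneous truths compared with Gaussian (p = 2) draws.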
Designing truncated priors for direct and inverse Bayesian problems
The Bayesian approach to inverse problems with functional unknowns has
received significant attention in recent years. An important component of the
developing theory is the study of the asymptotic performance of the posterior
distribution in the frequentist setting. The present paper contributes to the
area of Bayesian inverse problems by formulating a posterior contraction theory
for linear inverse problems, with truncated Gaussian series priors, and under
general smoothness assumptions. Emphasis is on the intrinsic role of the
truncation point for both the direct and the inverse problem, which are
related through the modulus of continuity, as recently highlighted by
Knapik and Salomond (2018).
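
In our notation, a truncated Gaussian series prior takes the form below; the contraction analysis then balances the truncation point K against the noise level.

    % Truncated Gaussian series prior with truncation point K (our notation):
    \[
      u \;=\; \sum_{k=1}^{K} \sigma_k \, \xi_k \, \varphi_k,
      \qquad \xi_k \overset{iid}{\sim} N(0,1),
    \]
    % where (\varphi_k) is a fixed orthonormal basis and (\sigma_k) a decaying
    % sequence; the direct and inverse rates are then linked through the
    % modulus of continuity, as in the Knapik and Salomond reference above.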
Rates of contraction of posterior distributions based on p-exponential priors
We consider a family of infinite dimensional product measures with tails between Gaussian and exponential, which we call p-exponential measures. We study their measure-theoretic properties and in particular their concentration. Our findings are used to develop a general contraction theory of posterior distributions on nonparametric models with p-exponential priors in separable Banach parameter spaces. Our approach builds on the general contraction theory for Gaussian process priors in (Ann. Statist. 36 (2008) 1435–1463), namely we use prior concentration to verify prior mass and entropy conditions sufficient for posterior contraction. However, the specific concentration properties of p-exponential priors lead to a more complex entropy bound which can negatively influence the obtained rate of contraction, depending on the topology of the parameter space. Subject to this more complex entropy bound, we show that the rate of contraction depends on the position of the true parameter relative to a certain Banach space associated with p-exponential measures and on the small ball probabilities of these measures. For example, we apply our theory in the white noise model under Besov regularity of the truth and obtain minimax rates of contraction using (rescaled) α-regular p-exponential priors. In particular, our results suggest that when interested in spatially inhomogeneous unknown functions, in terms of posterior contraction, it is preferable to use Laplace rather than Gaussian priors.
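
Written out in our notation, such a measure is the law of a random series with iid p-exponential coordinates:

    \[
      \mu = \mathcal{L}\Big( \sum_{k \ge 1} \gamma_k \, \xi_k \, e_k \Big),
      \qquad \xi_k \overset{iid}{\sim} f_p(x)\,dx, \quad
      f_p(x) \propto \exp\!\big( -|x|^p / p \big), \quad 1 \le p \le 2,
    \]
    % so p = 2 recovers a Gaussian measure and p = 1 a product-Laplace measure;
    % the decay of (\gamma_k), e.g. \gamma_k = k^{-1/2-\alpha} for an
    % \alpha-regular prior, controls the regularity of the draws.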
Posterior Contraction Rates for the Bayesian Approach to Linear Ill-Posed Inverse Problems
We consider a Bayesian nonparametric approach to a family of linear inverse
problems in a separable Hilbert space setting with Gaussian noise. We assume
Gaussian priors, which are conjugate to the model, and present a method of
identifying the posterior using its precision operator. Working with the
unbounded precision operator enables us to use partial differential equations
(PDE) methodology to obtain rates of contraction of the posterior distribution
to a Dirac measure centered on the true solution. Our methods assume a
relatively weak relation between the prior covariance, noise covariance and
forward operator, allowing for a wide range of applications.
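
Formally, and only as a sketch (the point of the approach is precisely that the unbounded operators below need careful interpretation), the identification through the precision operator reads:

    % Model y = A u + \eta, \eta ~ N(0, \Sigma), prior u ~ N(0, C_0);
    % the Gaussian posterior N(m, C) is characterized via its precision:
    \[
      C^{-1} \;=\; A^* \Sigma^{-1} A \;+\; C_0^{-1},
      \qquad
      C^{-1} m \;=\; A^* \Sigma^{-1} y,
    \]
    % and reading C^{-1} as a densely defined (e.g. differential) operator is
    % what makes PDE techniques available for the rate analysis.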
Aspects of Bayesian inverse problems
The goal of this thesis is to contribute to the formulation and understanding
of the Bayesian approach to inverse problems in function space. To this end
we examine two important aspects of this approach: the frequentist asymptotic
properties of the posterior, and the extraction of information from the posterior
via sampling. We work in a separable Hilbert space setting and consider Gaussian
priors on the unknown in conjugate Gaussian models. In the first part of this
work we consider linear inverse problems with Gaussian additive noise and study
the contraction in the small noise limit of the Gaussian posterior distribution to
a Dirac measure centered on the true parameter underlying the data. In a wide
range of situations, which include both mildly and severely ill-posed problems, we
show how carefully calibrating the scaling of the prior as a function of the size of
the noise, based on a priori known information on the regularity of the truth, yields
optimal rates of contraction. In the second part we study the implementation in
R^N of hierarchical Bayesian linear inverse problems with Gaussian noise and priors,
and with hyper-parameters introduced through the scalings of the prior and noise
covariance operators. We use function space intuition to understand the large N
behaviour of algorithms designed to sample the posterior and show that the two
scaling hyper-parameters evolve under these algorithms in contrasting ways: as N
grows the prior scaling slows down while the noise scaling speeds up. We propose
a reparametrization of the prior scaling which is robust with respect to the increase
in dimension. Our theory on the slowing down of the evolution of the prior scaling
extends to hierarchical approaches in more general conjugate Gaussian settings,
while our intuition covers other parameters of the prior covariance operator as well.
Throughout the thesis we use a blend of results from measure theory and probability
theory with tools from the theory of linear partial differential equations and
numerical analysis.
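
In spirit (our reading, not a verbatim statement from the thesis), the robust reparametrization of the prior scaling is a non-centered one:

    % Centered form: u | \tau ~ N(0, \tau^2 C_0); given a draw of u in R^N, the
    % conditional for \tau concentrates as N grows, stalling its evolution.
    % Non-centered form: write u = \tau v with v ~ N(0, C_0) independent of \tau,
    \[
      y \;=\; \tau A v + \eta, \qquad v \sim N(0, C_0), \quad \tau \sim \pi(\tau),
    \]
    % and sample (v, \tau) instead of (u, \tau): the conditional for \tau now
    % depends on the data and stays non-degenerate as N \to \infty.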
