A Tutorial on Bayesian Nonparametric Models
A key problem in statistical modeling is model selection: how to choose a
model at an appropriate level of complexity. This problem appears in many
settings, most prominently in choosing the number of clusters in mixture models
or the number of factors in factor analysis. In this tutorial we describe
Bayesian nonparametric methods, a class of methods that side-steps this issue
by allowing the data to determine the complexity of the model. This tutorial is
a high-level introduction to Bayesian nonparametric methods and contains
several examples of their application.
Comment: 28 pages, 8 figures
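A canonical example of letting the data determine model complexity is the Dirichlet process mixture, whose clustering behavior can be simulated via the Chinese restaurant process. The sketch below (an illustration, not code from the tutorial; the function name and parameters are assumptions) shows how the number of clusters is not fixed in advance but grows with the data, governed by a concentration parameter alpha.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from the Chinese restaurant process.

    The CRP underlies Dirichlet process mixtures: item i joins an existing
    cluster with probability proportional to that cluster's size, or starts
    a new cluster with probability proportional to alpha. The number of
    clusters is therefore determined by the data size and alpha, not fixed
    in advance (it grows roughly like alpha * log n).
    """
    rng = random.Random(seed)
    counts = []        # counts[k] = current size of cluster k
    assignments = []   # assignments[i] = cluster label of item i
    for i in range(n):
        total = i + alpha          # sum of counts plus new-cluster weight
        r = rng.random() * total
        cum = 0.0
        for k, c in enumerate(counts):
            cum += c
            if r < cum:            # join existing cluster k
                counts[k] += 1
                assignments.append(k)
                break
        else:                      # open a new cluster
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments, len(counts)
```

In a full mixture model, each cluster would additionally carry parameters drawn from a base distribution; the CRP supplies only the partition structure.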
Iterative Updating of Model Error for Bayesian Inversion
In computational inverse problems, it is common that a detailed and accurate
forward model is approximated by a computationally less challenging substitute.
The model reduction may be necessary to meet constraints in computing time when
optimization algorithms are used to find a single estimate, or to speed up
Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use
of an approximate model introduces a discrepancy, or modeling error, that may
have a detrimental effect on the solution of the ill-posed inverse problem, or
it may severely distort the estimate of the posterior distribution. In the
Bayesian paradigm, the modeling error can be considered as a random variable,
and by using an estimate of the probability distribution of the unknown, one
may estimate the probability distribution of the modeling error and incorporate
it into the inversion. We introduce an algorithm which iterates this idea to
update the distribution of the model error, leading to a sequence of posterior
distributions that are demonstrated empirically to capture the underlying truth
with increasing accuracy. Since the algorithm is not based on rejections, it
requires only a limited number of full model evaluations.
We show analytically that, in the linear Gaussian case, the algorithm
converges geometrically fast with respect to the number of iterations. For more
general models, we introduce particle approximations of the iteratively
generated sequence of distributions; we also prove that each element of the
sequence converges in the large particle limit. We show numerically that, as in
the linear case, rapid convergence occurs with respect to the number of
iterations. Additionally, we show through computed examples that point
estimates obtained from this iterative algorithm are superior to those obtained
by neglecting the model error.
Comment: 39 pages, 9 figures
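In the linear Gaussian case, the iteration described in the abstract can be sketched directly: the current posterior for the unknown induces a Gaussian approximation of the model error, which is folded into the likelihood before the posterior is recomputed. The code below is a minimal illustration under assumed notation (matrix names, the specific update order, and the stopping rule are not taken from the paper).

```python
import numpy as np

def iterative_model_error_inversion(y, A_full, A_red, m0, C0, Gamma, n_iter=10):
    """Iteratively update model-error statistics in a linear Gaussian setting.

    Data model:       y = A_full x + noise,  noise ~ N(0, Gamma)
    Reduced model:    y = A_red x + e + noise, with model error
                      e = (A_full - A_red) x.
    Each iteration uses the current posterior N(m, C) for x to approximate
    e ~ N(D m, D C D^T) with D = A_full - A_red, then recomputes the
    posterior with this error folded into the effective noise covariance.
    """
    D = A_full - A_red
    m, C = m0.copy(), C0.copy()
    C0_inv = np.linalg.inv(C0)
    for _ in range(n_iter):
        e_mean = D @ m                # current estimate of model-error mean
        e_cov = D @ C @ D.T           # current estimate of model-error covariance
        S_inv = np.linalg.inv(Gamma + e_cov)   # effective noise precision
        C = np.linalg.inv(A_red.T @ S_inv @ A_red + C0_inv)
        m = C @ (A_red.T @ S_inv @ (y - e_mean) + C0_inv @ m0)
    return m, C
```

On small test problems this fixed-point iteration contracts quickly, consistent with the geometric convergence the paper establishes for the linear Gaussian case.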