Hyperparameter Estimation in Bayesian MAP Estimation: Parameterizations and Consistency
The Bayesian formulation of inverse problems is attractive for three primary
reasons: it provides a clear modelling framework, a means for uncertainty
quantification, and principled learning of hyperparameters. The
posterior distribution may be explored by sampling methods, but for many
problems it is computationally infeasible to do so. In this situation maximum a
posteriori (MAP) estimators are often sought. Whilst these are relatively cheap
to compute, and have an attractive variational formulation, a key drawback is
their lack of invariance under change of parameterization. This is a
particularly significant issue when hierarchical priors are employed to learn
hyperparameters. In this paper we study the effect of the choice of
parameterization on MAP estimators when a conditionally Gaussian hierarchical
prior distribution is employed. Specifically we consider the centred
parameterization, the natural parameterization in which the unknown state is
solved for directly, and the noncentred parameterization, which works with a
whitened Gaussian as the unknown state variable, and arises when considering
dimension-robust MCMC algorithms; MAP estimation is well-defined in the
nonparametric setting only for the noncentred parameterization. However, we
show that MAP estimates based on the noncentred parameterization are not
consistent as estimators of hyperparameters; conversely, we show that limits of
finite-dimensional centred MAP estimators are consistent as the dimension tends
to infinity. We also consider empirical Bayesian hyperparameter estimation,
show consistency of these estimates, and demonstrate that they are more robust
with respect to noise than centred MAP estimates. An underpinning concept
throughout is that hyperparameters may only be recovered up to measure
equivalence, a well-known phenomenon in the context of the Ornstein-Uhlenbeck
process.

Comment: 36 pages, 8 figures
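To make the centred/noncentred distinction concrete, the following minimal sketch (Python/NumPy; not code from the paper, and using an illustrative covariance family C(theta) = theta^2 I) writes down the two MAP objectives as negative log posterior densities in a finite-dimensional linear-Gaussian setting, together with the empirical Bayes marginal-likelihood objective. In the centred form the theta-dependent log-determinant of C(theta) enters the objective; in the noncentred form the state is whitened and theta enters only through the data misfit.

```python
import numpy as np

# Minimal finite-dimensional sketch (illustrative, not the paper's code):
# conditionally Gaussian hierarchical prior u | theta ~ N(0, C(theta)),
# linear forward model y = G u + N(0, sigma^2 I) noise, scalar theta > 0.

def cov(theta, n):
    """Illustrative covariance family: C(theta) = theta^2 * I_n."""
    return theta**2 * np.eye(n)

def centred_objective(u, theta, y, G, sigma):
    """Centred MAP functional in (u, theta): data misfit plus the Gaussian
    prior term on u, whose normalising log-determinant depends on theta.
    (A hyperprior term -log p(theta) is omitted for brevity.)"""
    C = cov(theta, len(u))
    misfit = 0.5 * np.sum((y - G @ u) ** 2) / sigma**2
    prior = 0.5 * u @ np.linalg.solve(C, u)
    logdet = 0.5 * np.linalg.slogdet(C)[1]
    return misfit + prior + logdet

def noncentred_objective(xi, theta, y, G, sigma):
    """Noncentred MAP functional: write u = L(theta) xi with C = L L^T, so
    the unknown state xi is whitened (xi ~ N(0, I)) and its prior term
    does not depend on theta."""
    L = np.linalg.cholesky(cov(theta, len(xi)))
    misfit = 0.5 * np.sum((y - G @ (L @ xi)) ** 2) / sigma**2
    return misfit + 0.5 * xi @ xi

def empirical_bayes_objective(theta, y, G, sigma):
    """Negative log marginal likelihood of theta in the linear-Gaussian
    case, with u integrated out:
    y | theta ~ N(0, G C(theta) G^T + sigma^2 I)."""
    S = G @ cov(theta, G.shape[1]) @ G.T + sigma**2 * np.eye(len(y))
    return 0.5 * y @ np.linalg.solve(S, y) + 0.5 * np.linalg.slogdet(S)[1]
```

Minimising the centred or noncentred objective jointly over the state and theta, or the empirical Bayes objective over theta alone, gives the three hyperparameter estimators whose consistency the abstract contrasts.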