2 research outputs found
Convergence rates in ℓ¹-regularization if the sparsity assumption fails
Variational sparsity regularization based on ℓ¹-norms and other
nonlinear functionals has gained enormous attention recently, both with respect
to its applications and its mathematical analysis. A focus in regularization
theory has been the derivation of error estimates in terms of the
regularization parameter and the noise strength. To this end, specific error
measures such as Bregman distances, and specific conditions on the solution
such as source conditions or variational inequalities, have been developed and used.
In this paper we provide, for a certain class of ill-posed linear operator
equations, a convergence analysis that works for solutions that are not
completely sparse but have a fast-decaying nonzero part. This case is not
covered by standard source conditions, but surprisingly can be treated with an
appropriate variational inequality. As a consequence, the paper also provides
the first examples where the variational inequality approach, which was often
believed to be equivalent to appropriate source conditions, can indeed go
farther than the latter.
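The variational problem underlying this analysis can be illustrated numerically. Below is a minimal sketch of ℓ¹ (lasso-type) regularization for a linear operator equation, solved by iterative soft-thresholding (ISTA); the operator, the sparse ground truth, the regularization parameter, and the solver choice are all illustrative assumptions of mine, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise soft shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, alpha, n_iter=2000):
    """Minimize 0.5 * ||A x - y||^2 + alpha * ||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, alpha / L)
    return x

# Illustrative setup: random Gaussian operator, 3-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:3] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y, alpha=0.1)
```

For a truly sparse solution as above, the reconstruction concentrates on the correct support; the abstract's point is that rates can still be obtained when exact sparsity fails but the nonzero part decays fast.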
Maximum-A-Posteriori Estimates in Linear Inverse Problems with Log-concave Priors are Proper Bayes Estimators
A frequent matter of debate in Bayesian inversion is the question of which of
the two principal point estimators, the maximum-a-posteriori (MAP) estimate or
the conditional mean (CM) estimate, is to be preferred. As the MAP estimate
corresponds to the solution given by variational regularization techniques,
this is also a constant matter of debate between the two research areas.
Following a theoretical argument (the Bayes cost formalism), the CM estimate
is classically preferred for being the Bayes estimator for the mean squared
error cost while the MAP estimate is classically discredited for being only
asymptotically the Bayes estimator for the uniform cost function. In this
article we present recent theoretical and computational observations that
challenge this point of view, in particular for high-dimensional
sparsity-promoting Bayesian inversion. Using Bregman distances, we present new,
proper convex Bayes cost functions for which the MAP estimator is the Bayes
estimator. We complement this finding by results that correct further common
misconceptions about MAP estimates. In total, we aim to rehabilitate MAP
estimates in linear inverse problems with log-concave priors as proper Bayes
estimators.
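The two point estimators in question can be made concrete in one dimension. The following toy sketch (my own illustrative setup, not from the article) uses a Gaussian likelihood and a Laplace prior, which is log-concave: the MAP estimate is the posterior mode and the CM estimate the posterior mean, both evaluated on a fine grid.

```python
import numpy as np

y, sigma, lam = 1.0, 0.5, 2.0                # assumed datum and parameters
x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]

# Log-posterior up to an additive constant: Gaussian likelihood + Laplace prior.
log_post = -(y - x) ** 2 / (2 * sigma**2) - lam * np.abs(x)
post = np.exp(log_post - log_post.max())
post /= post.sum() * dx                      # normalize numerically on the grid

x_map = x[np.argmax(post)]                   # MAP estimate: posterior mode
x_cm = np.sum(x * post) * dx                 # CM estimate: posterior mean
```

In this setup the MAP estimate equals the soft-thresholded datum max(y - lam * sigma**2, 0) = 0.5, i.e. the solution of the corresponding ℓ¹-type variational problem, while the CM estimate differs from it; the two estimators genuinely disagree even in one dimension.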