Iteratively regularized Newton-type methods for general data misfit functionals and applications to Poisson data
We study Newton type methods for inverse problems described by nonlinear
operator equations in Banach spaces where the Newton equations
are regularized variationally using a general
data misfit functional and a convex regularization term. This generalizes the
well-known iteratively regularized Gauss-Newton method (IRGNM). We prove
convergence and convergence rates as the noise level tends to 0 both for an a
priori stopping rule and for a Lepskiĭ-type a posteriori stopping rule.
Our analysis includes previous order optimal convergence rate results for the
IRGNM as special cases. The main focus of this paper is on inverse problems
with Poisson data where the natural data misfit functional is given by the
Kullback-Leibler divergence. Two examples of such problems are discussed in
detail: an inverse obstacle scattering problem with amplitude data of the
far-field pattern and a phase retrieval problem. The performence of the
proposed method for these problems is illustrated in numerical examples
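The classical IRGNM that the paper generalizes can be sketched in a few lines. This is a minimal illustration with a quadratic data misfit and quadratic penalty (the paper replaces these with a general misfit functional and a convex penalty); the function name and the geometric decay of the regularization parameter are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def irgnm(F, dF, g_obs, u0, alpha0=1.0, q=2/3, n_iter=10):
    """Sketch of the classical iteratively regularized Gauss-Newton
    method: at each step, linearize F and solve a Tikhonov-regularized
    least-squares problem with a decreasing regularization parameter."""
    u = u0.copy()
    alpha = alpha0
    for _ in range(n_iter):
        J = dF(u)                          # Jacobian F'(u_n)
        r = g_obs - F(u)                   # data residual
        # regularized Newton step:
        #   min_h ||J h - r||^2 + alpha ||u + h - u0||^2
        M = J.T @ J + alpha * np.eye(len(u))
        b = J.T @ r + alpha * (u0 - u)
        u = u + np.linalg.solve(M, b)
        alpha *= q                         # geometric decrease of alpha_n
    return u
```

In the generalized setting of the paper, the quadratic terms above become a general data misfit (e.g. Kullback-Leibler for Poisson data) and a convex penalty, and each Newton step is a convex variational problem rather than a linear solve.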
Convergence rates in expectation for Tikhonov-type regularization of Inverse Problems with Poisson data
In this paper we study a Tikhonov-type method for ill-posed nonlinear
operator equations $g^\dagger = F(u^\dagger)$ where $g^\dagger$ is an integrable,
non-negative function. We assume that data are drawn from a Poisson process
with density $t g^\dagger$ where $t$ may be interpreted as an exposure time. Such
problems occur in many photonic imaging applications including positron
emission tomography, confocal fluorescence microscopy, astronomic observations,
and phase retrieval problems in optics. Our approach uses a
Kullback-Leibler-type data fidelity functional and allows for general convex
penalty terms. We prove convergence rates of the expectation of the
reconstruction error under a variational source condition as $t \to \infty$,
both for an a priori and for a Lepskiĭ-type parameter choice rule.
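A minimal sketch of the estimator described above, assuming a linear forward operator and a quadratic penalty (the paper allows nonlinear operators and general convex penalties), and using plain projected gradient descent as a stand-in solver, since the paper does not prescribe one:

```python
import numpy as np

def tikhonov_kl(A, counts, alpha, n_iter=500, lr=0.5):
    """Tikhonov-type estimator with a Kullback-Leibler data fidelity
    for Poisson counts:
        min_{u >= 0}  sum(g - counts*log(g)) + (alpha/2)*||u||^2,
    with g = A u, solved here by projected gradient descent."""
    m, n = A.shape
    u = np.ones(n)
    eps = 1e-10
    for _ in range(n_iter):
        g = np.maximum(A @ u, eps)         # keep intensities positive
        # gradient of the KL fidelity plus the quadratic penalty
        grad = A.T @ (1.0 - counts / g) + alpha * u
        u = np.maximum(u - lr * grad, 0.0)  # project onto u >= 0
    return u
```

The KL fidelity is the Poisson negative log-likelihood up to constants, which is why it is the natural data misfit for photon-count data; as the exposure (and hence the counts) grows, the estimator concentrates around the true intensity.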
Variational Data Assimilation via Sparse Regularization
This paper studies the role of sparse regularization in a properly chosen
basis for variational data assimilation (VDA) problems. Specifically, it
focuses on data assimilation of noisy and down-sampled observations while the
state variable of interest exhibits sparsity in the real or transformed domain.
We show that in the presence of sparsity, the $\ell_1$-norm regularization
produces more accurate and stable solutions than the classic data assimilation
methods. To motivate further developments of the proposed methodology,
assimilation experiments are conducted in the wavelet and spectral domain using
the linear advection-diffusion equation.
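The $\ell_1$-regularized assimilation problem described above can be sketched with a simple iterative soft-thresholding (ISTA) loop. This is an illustrative assumption, not the paper's solver: `H` is a down-sampling observation operator, `Phi` a sparsifying basis (e.g. a wavelet synthesis matrix), and the state is `x = Phi c` with sparse coefficients `c`.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_vda(H, y, Phi, lam, n_iter=200):
    """ISTA sketch for l1-regularized variational data assimilation:
        min_c ||H Phi c - y||^2 + lam ||c||_1,
    recovering a state x = Phi c that is sparse in the basis Phi."""
    A = H @ Phi
    L = 2.0 * np.linalg.norm(A, 2) ** 2     # Lipschitz constant of grad
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ c - y)      # gradient of the data term
        c = soft_threshold(c - grad / L, lam / L)
    return Phi @ c
```

With `Phi` the identity this recovers sparsity in the real domain; swapping in a wavelet or Fourier synthesis matrix gives the transformed-domain experiments the abstract mentions.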