
    Convergence rates in expectation for Tikhonov-type regularization of Inverse Problems with Poisson data

    In this paper we study a Tikhonov-type method for ill-posed nonlinear operator equations $g^\dagger = F(u^\dagger)$ where $g^\dagger$ is an integrable, non-negative function. We assume that the data are drawn from a Poisson process with density $t g^\dagger$, where $t>0$ may be interpreted as an exposure time. Such problems occur in many photonic imaging applications including positron emission tomography, confocal fluorescence microscopy, astronomic observations, and phase retrieval problems in optics. Our approach uses a Kullback-Leibler-type data fidelity functional and allows for general convex penalty terms. We prove convergence rates of the expectation of the reconstruction error under a variational source condition as $t\to\infty$, both for an a priori and for a Lepskiĭ-type parameter choice rule.
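    A minimal sketch of the type of estimator considered here (the notation below is illustrative shorthand, not taken from the paper): the reconstruction is a penalized minimizer of a Kullback-Leibler-type fidelity term,

        \hat{u}_\alpha \in \operatorname{argmin}_u \; \mathrm{KL}\big(g_{\mathrm{obs}} \,\|\, F(u)\big) + \alpha\, \mathcal{R}(u),
        \qquad \mathrm{KL}(g_{\mathrm{obs}} \,\|\, g) = \int \Big( g_{\mathrm{obs}} \ln \tfrac{g_{\mathrm{obs}}}{g} - g_{\mathrm{obs}} + g \Big)\, \mathrm{d}x,

    where $\mathcal{R}$ is a general convex penalty and the regularization parameter $\alpha>0$ is chosen either a priori or by the Lepskiĭ principle as a function of the exposure time $t$.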

    Iteratively regularized Newton-type methods for general data misfit functionals and applications to Poisson data

    We study Newton-type methods for inverse problems described by nonlinear operator equations $F(u)=g$ in Banach spaces, where the Newton equations $F'(u_n; u_{n+1}-u_n) = g - F(u_n)$ are regularized variationally using a general data misfit functional and a convex regularization term. This generalizes the well-known iteratively regularized Gauss-Newton method (IRGNM). We prove convergence and convergence rates as the noise level tends to 0, both for an a priori stopping rule and for a Lepskiĭ-type a posteriori stopping rule. Our analysis includes previous order optimal convergence rate results for the IRGNM as special cases. The main focus of this paper is on inverse problems with Poisson data where the natural data misfit functional is given by the Kullback-Leibler divergence. Two examples of such problems are discussed in detail: an inverse obstacle scattering problem with amplitude data of the far-field pattern and a phase retrieval problem. The performance of the proposed method for these problems is illustrated in numerical examples.
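    For orientation, a generic form of one such variationally regularized Newton step (a sketch under the paper's general setup, with $\mathcal{S}$ a data misfit functional and $\mathcal{R}$ a convex penalty; the concrete parameter choices are assumptions for illustration):

        u_{n+1} \in \operatorname{argmin}_u \; \mathcal{S}\big(g^{\mathrm{obs}};\, F(u_n) + F'(u_n)(u - u_n)\big) + \alpha_n\, \mathcal{R}(u),

    where the regularization parameters $\alpha_n$ typically decrease geometrically, e.g. $\alpha_n = \alpha_0 q^n$ with $0<q<1$, and the iteration is stopped either a priori or by a Lepskiĭ-type rule. For Poisson data, $\mathcal{S}$ is the Kullback-Leibler divergence.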

    Necessary conditions for variational regularization schemes

    We study variational regularization methods in a general framework, more precisely those methods that use a discrepancy and a regularization functional. While several sets of sufficient conditions are known to obtain a regularization method, we start with an investigation of the converse question: what could necessary conditions for a variational method to provide a regularization method look like? To this end, we formalize the notion of a variational scheme and start with a comparison of three different instances of variational methods. Then we focus on the data space model and investigate the role and interplay of the topological structure, the convergence notion and the discrepancy functional. In particular, we deduce necessary conditions for the discrepancy functional to fulfill the usual continuity assumptions. The results are applied to discrepancy functionals given by Bregman distances, and in particular to the Kullback-Leibler divergence.
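    Since the discrepancies in question are Bregman distances, it may help to recall the standard definition (generic notation, not specific to this paper): for a convex functional $\mathcal{R}$ and a subgradient $v^* \in \partial\mathcal{R}(v)$,

        D_{\mathcal{R}}^{v^*}(u, v) = \mathcal{R}(u) - \mathcal{R}(v) - \langle v^*, u - v \rangle.

    The Kullback-Leibler divergence is the special case obtained from the Boltzmann-Shannon entropy $\mathcal{R}(u) = \int (u \ln u - u)\, \mathrm{d}x$.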

    Inverse problems with Poisson data: Statistical regularization theory, applications and algorithms.

    Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results, we establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As the most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years.
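    As a concrete illustration of the kind of algorithm such a review covers, below is a short Python sketch of the classical EM (Richardson-Lucy/MLEM) iteration for the unpenalized Poisson maximum likelihood problem with a linear forward operator. The matrix, data and parameters are hypothetical toy choices; the penalized estimators discussed in the review add a regularization term to this likelihood.

        import numpy as np

        def poisson_neg_log_likelihood(A, u, counts, eps=1e-12):
            """Negative Poisson log-likelihood (up to a constant) for the linear model g = A u."""
            g = A @ u
            return float(np.sum(g - counts * np.log(g + eps)))

        def mlem(A, counts, n_iter=50, eps=1e-12):
            """Richardson-Lucy / MLEM iteration: multiplicative EM updates that keep u non-negative."""
            u = np.ones(A.shape[1])
            sensitivity = A.T @ np.ones(A.shape[0]) + eps   # column sums A^T 1
            for _ in range(n_iter):
                ratio = counts / (A @ u + eps)              # observed counts / predicted intensities
                u = u * (A.T @ ratio) / sensitivity         # EM update, monotone in the likelihood
            return u

        # toy usage: random non-negative forward matrix and simulated Poisson counts (purely illustrative)
        rng = np.random.default_rng(0)
        A = np.abs(rng.normal(size=(40, 20)))
        u_true = np.abs(rng.normal(size=20))
        counts = rng.poisson(A @ u_true).astype(float)
        u_hat = mlem(A, counts)
        print(poisson_neg_log_likelihood(A, u_hat, counts))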