
    Discretization of variational regularization in Banach spaces

    Consider a nonlinear ill-posed operator equation $F(u)=y$, where $F$ is defined on a Banach space $X$. In general, solving this equation numerically requires a finite dimensional approximation of $X$ and an approximation of $F$. Moreover, the given data $y^\delta$ of $y$ are in general noisy. In this paper we analyze finite dimensional variational regularization, which takes into account operator approximations and noisy data: We show (semi-)convergence of the regularized solutions of the finite dimensional problems and establish convergence rates in terms of Bregman distances under an appropriate sourcewise representation of a solution of the equation. The more involved case of regularization in nonseparable Banach spaces is discussed in detail. In particular we consider the space of functions of finite total variation, the space of functions of finite bounded deformation, and the $L^\infty$-space.
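
    For orientation, the kind of variational (Tikhonov-type) functional and the Bregman distance used to state convergence rates can be sketched as follows; this is an illustrative form under generic assumptions, not the exact discretized functional or source condition analyzed in the paper.

        % Finite dimensional variational regularization (illustrative form):
        %   X_n: finite dimensional subspace of X,  F_n: approximation of F,
        %   y^\delta: noisy data,  \alpha > 0: regularization parameter,  R: convex penalty.
        \min_{u \in X_n} \; \| F_n(u) - y^\delta \|^p + \alpha\, R(u)

        % Bregman distance of R between u and a solution u^\dagger,
        % taken with respect to a subgradient \xi \in \partial R(u^\dagger):
        D_\xi(u, u^\dagger) = R(u) - R(u^\dagger) - \langle \xi, \, u - u^\dagger \rangle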

    An entropic Landweber method for linear ill-posed problems

    The aim of this paper is to investigate the use of a Landweber-type method involving the Shannon entropy for the regularization of linear ill-posed problems. We derive a closed-form solution for the iterates and analyze their convergence behaviour both in the case of reconstructing general nonnegative unknowns and in the case of recovering probability distributions. Moreover, we discuss several variants of the algorithm and relations to other methods in the literature. The effectiveness of the approach is studied numerically in several examples.
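
    As a rough illustration of the flavour of such methods, a multiplicative (exponentiated-gradient) Landweber-type update for $Au=y$ with nonnegative unknowns is sketched below. The operator A, the step size tau and the iteration count are placeholders, and the sketch is not claimed to reproduce the closed-form iterates derived in the paper.

        import numpy as np

        def entropic_landweber_sketch(A, y_delta, u0, tau=0.5, n_iters=100):
            """Illustrative entropy-based Landweber-type iteration for A u = y.

            Uses a multiplicative (exponentiated-gradient) update, which keeps the
            iterates strictly positive; a sketch under generic assumptions, not the
            closed-form iteration derived in the paper.
            """
            u = np.asarray(u0, dtype=float).copy()
            for _ in range(n_iters):
                residual = A @ u - y_delta       # current data misfit
                gradient = A.T @ residual        # gradient of 0.5 * ||A u - y||^2
                u = u * np.exp(-tau * gradient)  # entropic / multiplicative step
            return u

        # Small synthetic example (hypothetical data):
        A = np.array([[1.0, 0.5], [0.2, 1.0]])
        u_true = np.array([0.3, 0.7])
        print(entropic_landweber_sketch(A, A @ u_true, u0=np.ones(2)))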

    Necessary conditions for variational regularization schemes

    We study variational regularization methods in a general framework, more precisely those methods that use a discrepancy and a regularization functional. While several sets of sufficient conditions are known to obtain a regularization method, we start with an investigation of the converse question: What could necessary conditions for a variational method to provide a regularization method look like? To this end, we formalize the notion of a variational scheme and start with a comparison of three different instances of variational methods. Then we focus on the data space model and investigate the role and interplay of the topological structure, the convergence notion and the discrepancy functional. In particular, we deduce necessary conditions for the discrepancy functional to fulfill usual continuity assumptions. The results are applied to discrepancy functionals given by Bregman distances and especially to the Kullback-Leibler divergence. Comment: To appear in Inverse Problems.
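
    To fix notation, a variational scheme of this kind pairs a discrepancy functional with a regularization functional; one illustrative form, with the Kullback-Leibler divergence as an example discrepancy, is sketched below. The precise data space model, topologies and assumptions are those of the paper, not of this sketch.

        % Variational scheme (illustrative): discrepancy S on the data space, penalty R, parameter \alpha > 0
        u_\alpha^\delta \in \operatorname*{arg\,min}_{u} \; S\!\left( F(u), \, y^\delta \right) + \alpha\, R(u)

        % Example discrepancy (one common convention of the Kullback-Leibler divergence
        % between nonnegative densities y and v):
        \mathrm{KL}(y, v) = \int \left( y \log \frac{y}{v} - y + v \right) \mathrm{d}\mu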

    On regularization methods of EM-Kaczmarz type

    We consider regularization methods of Kaczmarz type in connection with the expectation-maximization (EM) algorithm for solving ill-posed equations. For noisy data, our methods are stabilized extensions of the well-established ordered-subsets expectation-maximization iteration (OS-EM). We show monotonicity properties of the methods and present a numerical experiment which indicates that the extended OS-EM methods we propose are much faster than the standard EM algorithm. Comment: 18 pages, 6 figures.
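
    For reference, the classical OS-EM sweep that these EM-Kaczmarz methods extend can be sketched as follows; the stabilization for noisy data proposed in the paper is not reproduced here, and the subset splitting and data are placeholders.

        import numpy as np

        def os_em_sketch(A_blocks, y_blocks, u0, n_sweeps=20, eps=1e-12):
            """Baseline ordered-subsets EM (OS-EM) iteration for A u = y with
            nonnegative data, cycling Kaczmarz-style over the subsets.
            Only a sketch of the classical update, not the stabilized
            EM-Kaczmarz variants proposed in the paper.
            """
            u = np.asarray(u0, dtype=float).copy()
            for _ in range(n_sweeps):
                for A_s, y_s in zip(A_blocks, y_blocks):        # one cycle over the ordered subsets
                    ratio = y_s / np.maximum(A_s @ u, eps)      # elementwise data ratio
                    u = u * (A_s.T @ ratio) / np.maximum(A_s.T @ np.ones_like(y_s), eps)
            return u

        # Tiny synthetic example with two ordered subsets (hypothetical data):
        A = np.abs(np.random.default_rng(0).normal(size=(6, 3)))
        u_true = np.array([1.0, 2.0, 0.5])
        y = A @ u_true
        print(os_em_sketch(np.split(A, 2), np.split(y, 2), u0=np.ones(3)))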

    Optimal Convergence Rates for Tikhonov Regularization in Besov Scales

    In this paper we deal with linear inverse problems and convergence rates for Tikhonov regularization. We consider regularization in a scale of Banach spaces, namely the scale of Besov spaces. We show that regularization in Banach scales differs from regularization in Hilbert scales in the sense that stronger source conditions may lead to weaker convergence rates and vice versa. Moreover, we present optimal source conditions for regularization in Besov scales.
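
    For concreteness, Tikhonov regularization with a Besov-norm penalty is often written via an equivalent weighted wavelet-coefficient expression, as sketched below; the exact scale of spaces, source conditions and optimal rates are those derived in the paper.

        % Tikhonov regularization with a Besov-norm penalty (illustrative):
        \min_{u} \; \| A u - y^\delta \|_{Y}^{2} + \alpha\, \| u \|_{B^{s}_{p,p}}^{p}

        % In a suitable wavelet basis (\psi_\lambda) the Besov norm has an equivalent
        % weighted sequence-space form, with an exponent \sigma depending on s, p and the dimension:
        \| u \|_{B^{s}_{p,p}}^{p} \sim \sum_{\lambda} 2^{\sigma |\lambda|}\, |\langle u, \psi_\lambda \rangle|^{p}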

    Iteratively regularized Newton-type methods for general data misfit functionals and applications to Poisson data

    We study Newton-type methods for inverse problems described by nonlinear operator equations $F(u)=g$ in Banach spaces, where the Newton equations $F'(u_n; u_{n+1}-u_n) = g - F(u_n)$ are regularized variationally using a general data misfit functional and a convex regularization term. This generalizes the well-known iteratively regularized Gauss-Newton method (IRGNM). We prove convergence and convergence rates as the noise level tends to 0, both for an a priori stopping rule and for a Lepskiĭ-type a posteriori stopping rule. Our analysis includes previous order-optimal convergence rate results for the IRGNM as special cases. The main focus of this paper is on inverse problems with Poisson data, where the natural data misfit functional is given by the Kullback-Leibler divergence. Two examples of such problems are discussed in detail: an inverse obstacle scattering problem with amplitude data of the far-field pattern and a phase retrieval problem. The performance of the proposed method for these problems is illustrated in numerical examples.
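
    Schematically, each Newton step is regularized variationally; a generic form of such a step, with the Kullback-Leibler divergence as data misfit for Poisson data, is sketched below. The regularization parameters alpha_n and the a priori or Lepskiĭ-type stopping rules follow the paper, not this sketch.

        % Variationally regularized Newton step (illustrative): data misfit S, convex penalty R, \alpha_n -> 0
        u_{n+1} \in \operatorname*{arg\,min}_{u} \; \mathcal{S}\!\left( F(u_n) + F'(u_n)(u - u_n), \, g^{\mathrm{obs}} \right) + \alpha_n\, \mathcal{R}(u)

        % For Poisson data, the natural data misfit is the Kullback-Leibler divergence
        % (up to terms independent of g):
        \mathcal{S}(g, g^{\mathrm{obs}}) = \int \left( g - g^{\mathrm{obs}} - g^{\mathrm{obs}} \log \frac{g}{g^{\mathrm{obs}}} \right) \mathrm{d}x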

    Sparse Regularization with $\ell^q$ Penalty Term

    We consider the stable approximation of sparse solutions to non-linear operator equations by means of Tikhonov regularization with a subquadratic penalty term. Imposing certain assumptions, which for a linear operator are equivalent to the standard range condition, we derive the usual convergence rate $O(\sqrt{\delta})$ of the regularized solutions in dependence of the noise level $\delta$. Particular emphasis lies on the case where the true solution is known to have a sparse representation in a given basis. In this case, if the differential of the operator satisfies a certain injectivity condition, we can show that the actual convergence rate improves up to $O(\delta)$. Comment: 15 pages.
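
    As a reference point, the Tikhonov functional with a subquadratic sparsity-promoting penalty considered in this setting typically has the form sketched below; the admissible exponent range and the precise assumptions under which the $O(\sqrt{\delta})$ and $O(\delta)$ rates hold are those stated in the paper.

        % Tikhonov regularization with an \ell^q sparsity penalty (illustrative),
        % with subquadratic exponent q < 2 and (\varphi_\lambda) the given basis
        % in which the true solution is assumed to be sparse:
        \min_{u} \; \| F(u) - y^\delta \|^{2} + \alpha \sum_{\lambda} |\langle u, \varphi_\lambda \rangle|^{q}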

    Existence and approximation of fixed points of right Bregman nonexpansive operators

    We study the existence and approximation of fixed points of right Bregman nonexpansive operators in reflexive Banach spaces. We present, in particular, necessary and sufficient conditions for the existence of fixed points and an implicit scheme for approximating them.
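
    For context, the Bregman distance underlying these operator classes and the kind of nonexpansiveness inequality involved can be sketched as follows; the exact left/right convention and the standing assumptions on f are those of the paper.

        % Bregman distance with respect to a Gateaux differentiable convex function f:
        D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), \, x - y \rangle

        % A Bregman nonexpansive operator T satisfies, for every x and every fixed point p of T,
        % an inequality of the form (argument order depends on the left/right convention):
        D_f(T x, p) \le D_f(x, p)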