
    Necessary conditions for variational regularization schemes

    We study variational regularization methods in a general framework, more precisely those methods that use a discrepancy and a regularization functional. While several sets of sufficient conditions are known to yield a regularization method, we begin by investigating the converse question: what would necessary conditions for a variational method to provide a regularization method look like? To this end, we formalize the notion of a variational scheme and start with a comparison of three different instances of variational methods. We then focus on the data space model and investigate the role and interplay of the topological structure, the convergence notion, and the discrepancy functional. In particular, we deduce necessary conditions for the discrepancy functional to fulfill the usual continuity assumptions. The results are applied to discrepancy functionals given by Bregman distances, in particular the Kullback-Leibler divergence.
    Comment: To appear in Inverse Problems
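    As a reference point for the scheme under discussion, a sketch in our own notation (not necessarily the paper's: discrepancy S, regularization functional R, regularization parameter \alpha > 0, forward operator F):

        f_\alpha \in \operatorname*{arg\,min}_{f} \; S\bigl(F(f), g^{\mathrm{obs}}\bigr) + \alpha \, R(f)

    The Kullback-Leibler divergence then arises as the discrepancy given by the Bregman distance of \varphi(t) = t \log t:

        \mathrm{KL}(g, h) = \int \Bigl( g \log \frac{g}{h} - g + h \Bigr) \, \mathrm{d}\mu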

    Convergence Rates for Inverse Problems with Impulsive Noise

    We study inverse problems F(f) = g with perturbed right-hand side g^{obs} corrupted by so-called impulsive noise, i.e., noise concentrated on a small subset of the domain of definition of g. It is well known that for this type of noise, Tikhonov-type regularization with an L^1 data fidelity term yields significantly more accurate results than Tikhonov regularization with a classical L^2 data fidelity term. The purpose of this paper is to provide a convergence analysis explaining this remarkable difference in accuracy. Our error estimates significantly improve previous error estimates for Tikhonov regularization with L^1 fidelity term in the case of impulsive noise. We present numerical results which are in good agreement with the predictions of our analysis.
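    A rough numerical illustration of this accuracy gap (a minimal 1D denoising sketch under our own assumptions, not code from the paper; the signal, the noise model, the parameter alpha, and the smoothing parameter eps are all illustrative choices):

        import numpy as np
        from scipy.optimize import minimize

        # Ground truth: a smooth 1D signal; impulsive noise hits only a few samples.
        n = 200
        x = np.linspace(0.0, 1.0, n)
        f_true = np.sin(2 * np.pi * x)
        rng = np.random.default_rng(0)
        g_obs = f_true.copy()
        hit = rng.choice(n, size=10, replace=False)       # 5% of the samples
        g_obs[hit] += rng.uniform(-3.0, 3.0, size=10)     # large, sparse outliers

        # First-difference operator D for the smoothness penalty alpha*||D f||_2^2.
        D = np.diff(np.eye(n), axis=0)
        alpha = 5.0

        # Classical L^2 fidelity: the minimizer solves (I + alpha D^T D) f = g_obs.
        f_l2 = np.linalg.solve(np.eye(n) + alpha * D.T @ D, g_obs)

        # L^1 fidelity: minimize sum |f - g_obs| + alpha*||D f||_2^2, with |.|
        # smoothed as sqrt(t^2 + eps^2) so a quasi-Newton solver applies.
        eps = 1e-4

        def objective(f):
            r = f - g_obs
            return np.sum(np.sqrt(r**2 + eps**2)) + alpha * np.sum((D @ f)**2)

        def gradient(f):
            r = f - g_obs
            return r / np.sqrt(r**2 + eps**2) + 2.0 * alpha * (D.T @ (D @ f))

        f_l1 = minimize(objective, g_obs, jac=gradient, method="L-BFGS-B").x

        # The L^1 reconstruction is largely insensitive to the sparse outliers,
        # while the L^2 one smears them out; compare the reconstruction errors.
        print("L2-fidelity error:", np.linalg.norm(f_l2 - f_true))
        print("L1-fidelity error:", np.linalg.norm(f_l1 - f_true))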