
    A variational Bayesian method for inverse problems with impulsive noise

    We propose a novel numerical method for solving inverse problems subject to impulsive noise, which may contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for the data noise to achieve robustness with respect to outliers. A hierarchical model is described in which all hyperparameters are determined automatically from the given data. An algorithm of variational type is developed by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating the boundary temperature, the heat flux and the heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. Comment: 20 pages, to appear in J. Comput. Phys.
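The robustness mechanism behind the heavy-tailed t model can be seen in a toy setting: under a Student-t likelihood, observations far from the current estimate are automatically down-weighted. A minimal sketch of that effect (not the paper's hierarchical variational scheme; the fixed degrees of freedom `nu`, the unit noise scale, and the location-only model are illustrative assumptions):

```python
import numpy as np

def t_location_irls(y, nu=3.0, iters=50):
    """Estimate a location parameter under a Student-t likelihood via
    iteratively reweighted least squares: observations far from the
    current estimate receive small weights, giving robustness to outliers."""
    mu = np.median(y)                            # robust starting point
    for _ in range(iters):
        w = (nu + 1.0) / (nu + (y - mu) ** 2)    # t-likelihood weights
        mu = np.sum(w * y) / np.sum(w)
    return mu

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(1.0, 0.1, 95),    # clean data around 1.0
                    rng.normal(50.0, 1.0, 5)])   # 5 gross outliers

print(np.mean(y))           # Gaussian (least-squares) estimate, pulled far from 1.0
print(t_location_irls(y))   # t-based estimate, stays close to 1.0
```

The reweighting step is exactly where the heavy tail enters: for a Gaussian likelihood the weights would be constant and the estimate would collapse to the sample mean.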

    Bregman Cost for Non-Gaussian Noise

    One of the tasks of the Bayesian inverse problem is to find a good estimate based on the posterior probability density. The most common point estimators are the conditional mean (CM) and maximum a posteriori (MAP) estimates, which correspond to the mean and the mode of the posterior, respectively. From a theoretical point of view it has been argued that the MAP estimate is only in an asymptotic sense a Bayes estimator for the uniform cost function, while the CM estimate is a Bayes estimator for the mean squared cost function. Recently, it has been proven that the MAP estimate is a proper Bayes estimator for the Bregman cost if the image is corrupted by Gaussian noise. In this work we extend this result to other noise models with log-concave likelihood density, by introducing two related Bregman cost functions for which the CM and the MAP estimates are proper Bayes estimators. Moreover, we also prove that the CM estimate outperforms the MAP estimate when the error is measured in a certain Bregman distance, a result previously unknown also in the case of additive Gaussian noise.
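For readers unfamiliar with the Bregman distance D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, a small numerical sketch of the definition (the generating functions chosen here are the standard textbook examples, not the cost functions constructed in the paper):

```python
import numpy as np

def bregman_distance(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

x = np.array([0.5, 0.5])
y = np.array([0.25, 0.75])

# phi(v) = ||v||^2 recovers the squared Euclidean distance:
sq = bregman_distance(lambda v: np.dot(v, v), lambda v: 2.0 * v, x, y)
print(sq, np.sum((x - y) ** 2))          # both 0.125

# phi(v) = sum v log v (negative entropy) recovers the KL divergence
# (here x and y both sum to 1, so the extra linear terms cancel):
kl = bregman_distance(lambda v: np.sum(v * np.log(v)),
                      lambda v: np.log(v) + 1.0, x, y)
print(kl, np.sum(x * np.log(x / y)))     # both ~0.1438
```

Unlike a metric, a Bregman distance is generally asymmetric in its arguments, which is why the paper needs two related cost functions, one adapted to the CM and one to the MAP estimate.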

    Convergence Rates for Inverse Problems with Impulsive Noise

    We study inverse problems F(f) = g with perturbed right-hand side g^{obs} corrupted by so-called impulsive noise, i.e. noise which is concentrated on a small subset of the domain of definition of g. It is well known that Tikhonov-type regularization with an L^1 data fidelity term yields significantly more accurate results than Tikhonov regularization with the classical L^2 data fidelity term for this type of noise. The purpose of this paper is to provide a convergence analysis explaining this remarkable difference in accuracy. Our error estimates significantly improve previous error estimates for Tikhonov regularization with an L^1 fidelity term in the case of impulsive noise. We present numerical results which are in good agreement with the predictions of our analysis.
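The accuracy gap the analysis explains is easy to reproduce numerically. A sketch under simplifying assumptions (a toy Gaussian-blur forward operator, and the L^1 fidelity minimized by iteratively reweighted least squares; neither is taken from the paper):

```python
import numpy as np

def tikhonov_l2(A, g, alpha):
    """Classical Tikhonov: min ||A f - g||_2^2 + alpha ||f||_2^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)

def tikhonov_l1(A, g, alpha, iters=50, eps=1e-4):
    """L1 data fidelity via IRLS: min ||A f - g||_1 + alpha ||f||_2^2."""
    n = A.shape[1]
    f = tikhonov_l2(A, g, alpha)                       # warm start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ f - g), eps)   # reweight residuals
        AW = A.T * w                                   # A^T W
        f = np.linalg.solve(AW @ A + alpha * np.eye(n), AW @ g)
    return f

n = 100
t = np.linspace(0.0, 1.0, n)
f_true = np.sin(2 * np.pi * t)
# Mildly smoothing forward operator (row-normalized Gaussian blur) as a toy F
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.5 * ((i - j) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

g = A @ f_true
rng = np.random.default_rng(1)
idx = rng.choice(n, size=5, replace=False)
g[idx] += rng.choice([-1.0, 1.0], size=5) * 3.0        # impulsive noise on 5 samples

f2 = tikhonov_l2(A, g, alpha=1e-3)
f1 = tikhonov_l1(A, g, alpha=1e-3)
print(np.linalg.norm(f2 - f_true))   # L2 fit chases the outliers
print(np.linalg.norm(f1 - f_true))   # L1 fit largely ignores them
```

The L^2 solution is forced to trade off the five corrupted samples against all the clean ones, while the reweighting in the L^1 solve drives the influence of the large residuals toward zero.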

    Convergence Rates for Exponentially Ill-Posed Inverse Problems with Impulsive Noise

    This paper is concerned with exponentially ill-posed operator equations with additive impulsive noise on the right-hand side, i.e. the noise is large on a small part of the domain and small or zero outside. It is well known that Tikhonov regularization with an L^1 data fidelity term outperforms Tikhonov regularization with an L^2 fidelity term in this case. This effect has recently been explained and quantified for the case of finitely smoothing operators. Here we extend this analysis to the case of infinitely smoothing forward operators under standard Sobolev smoothness assumptions on the solution, i.e. exponentially ill-posed inverse problems. It turns out that high-order polynomial rates of convergence in the size of the support of the large noise can be achieved, rather than the poor logarithmic convergence rates typical for exponentially ill-posed problems. The main tools of our analysis are Banach spaces of analytic functions and interpolation-type inequalities for such spaces. We discuss two examples, the (periodic) backwards heat equation and an inverse problem in gradiometry. Comment: to appear in SIAM J. Numer. Anal.
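The periodic backward heat equation makes the exponential ill-posedness concrete: the forward semigroup damps Fourier mode k by exp(-k^2 T), so naive inversion amplifies any perturbation of mode k by exp(k^2 T). A minimal sketch (the grid size, final time, and noise level are illustrative, and the naive spectral inversion is for demonstration only, not a method from the paper):

```python
import numpy as np

def heat_forward(u0, T):
    """Periodic heat semigroup via FFT: mode k is damped by exp(-k^2 T)."""
    n = u0.size
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0..n/2-1, -n/2..-1
    return np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(-k**2 * T)))

def heat_backward_naive(g, T):
    """Naive inversion: multiply each mode by exp(+k^2 T) (unstable)."""
    n = g.size
    k = np.fft.fftfreq(n, d=1.0 / n)
    return np.real(np.fft.ifft(np.fft.fft(g) * np.exp(k**2 * T)))

n, T = 64, 0.02
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
u0 = np.sin(x) + 0.5 * np.sin(3 * x)
g = heat_forward(u0, T)

print(np.linalg.norm(heat_backward_naive(g, T) - u0))   # exact data: recovers u0

rng = np.random.default_rng(3)
g_noisy = g + 1e-6 * rng.normal(size=n)                 # tiny measurement noise...
print(np.linalg.norm(heat_backward_naive(g_noisy, T) - u0))   # ...amplified enormously
```

A noise level of 10^-6 on the data produces an O(1)-or-worse reconstruction error, because the highest modes are amplified by exp((n/2)^2 T); this is the regime in which the paper's polynomial-in-noise-support rates are remarkable.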

    Image reconstruction under non-Gaussian noise


    Expectation Propagation for Nonlinear Inverse Problems -- with an Application to Electrical Impedance Tomography

    In this paper, we study a fast approximate inference method based on expectation propagation for exploring the posterior probability distribution arising from the Bayesian formulation of nonlinear inverse problems. It is capable of efficiently delivering reliable estimates of the posterior mean and covariance, thereby providing an inverse solution together with quantified uncertainties. Some theoretical properties of the iterative algorithm are discussed, and an efficient implementation for an important class of problems of projection type is described. The method is illustrated with one typical nonlinear inverse problem, electrical impedance tomography with the complete electrode model, under sparsity constraints. Numerical results for real experimental data are presented and compared with those obtained by Markov chain Monte Carlo. The results indicate that the method is accurate and computationally very efficient. Comment: Journal of Computational Physics, to appear
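The building block of expectation propagation is the projection of a non-Gaussian "tilted" distribution onto the Gaussian family, which for KL(p || q) amounts to matching the first two moments of p. A one-dimensional numerical sketch of that projection step (the prior and the heavy-tailed site factor are illustrative stand-ins, not the EIT model):

```python
import numpy as np

def gaussian_projection(p_unnorm, x):
    """argmin_q KL(p || q) over Gaussians q matches the mean and variance
    of p -- the core projection step of expectation propagation."""
    dx = x[1] - x[0]
    p = p_unnorm / (np.sum(p_unnorm) * dx)        # normalize on the grid
    mean = np.sum(x * p) * dx
    var = np.sum((x - mean) ** 2 * p) * dx
    return mean, var

x = np.linspace(-10.0, 10.0, 4001)
prior = np.exp(-0.5 * x**2)                # N(0, 1) prior, unnormalized
site = 1.0 / (1.0 + (2.0 - x) ** 2)        # heavy-tailed, non-Gaussian factor
m, v = gaussian_projection(prior * site, x)
print(m, v)   # Gaussian (mean, variance) best approximating the tilted density
```

Full EP iterates this projection site by site, passing Gaussian messages; the quadrature here is only feasible because the example is one-dimensional, which is why the paper's efficient treatment of projection-type problems matters.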

    Bayesian Estimation for Continuous-Time Sparse Stochastic Processes

    We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in between (the interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori (MAP) and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the \ell_1-norm and Log (\ell_1-\ell_0 relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator. Comment: To appear in IEEE TS
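In the simplest denoising setting, \ell_1 regularization coincides with the MAP estimate under an i.i.d. Laplace prior and has the closed form of soft thresholding. A sketch on synthetic sparse samples (the signal size, noise level, and threshold are illustrative; this is plain \ell_1 denoising, not the spline-based estimators of the paper):

```python
import numpy as np

def soft_threshold(y, lam):
    """MAP denoising under a Laplace prior and Gaussian noise:
    argmin_x 0.5 * (x - y)^2 + lam * |x| has this closed form."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(2)
n = 200
x_true = np.zeros(n)
support = rng.choice(n, size=10, replace=False)
x_true[support] = rng.normal(0.0, 5.0, 10)    # sparse ground truth
y = x_true + rng.normal(0.0, 0.5, n)          # noisy samples

x_hat = soft_threshold(y, lam=1.0)
print(np.linalg.norm(y - x_true))      # error of the raw noisy samples
print(np.linalg.norm(x_hat - x_true))  # error after l1 (MAP) denoising
```

The MMSE estimator studied in the paper replaces this hard shrinkage rule with a posterior average, which is what allows it to outperform the regularization-based estimates in general.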