16,024 research outputs found

    Adaptive regularization of noisy linear inverse problems

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value under the posterior and the prior distribution. We present three examples: two simulations and an application in fMRI neuroimaging.
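
    A minimal numerical sketch of this balance condition for a Gaussian prior, assuming a linear-Gaussian model that is not taken from the paper: matching the prior and posterior expectations of the regularizer R(theta) = 0.5*||theta||^2 yields a fixed-point update for the regularization strength alpha.

```python
import numpy as np

# Illustrative setup (assumed, not from the paper): y = A theta + noise,
# Gaussian prior theta ~ N(0, alpha^{-1} I), known noise precision beta.
rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
beta = 100.0                      # noise precision (noise std = 0.1)
y = A @ theta_true + rng.normal(scale=beta ** -0.5, size=n)

alpha = 1.0                       # initial regularization strength
for _ in range(100):
    # Posterior of theta is Gaussian with covariance S and mean m.
    S = np.linalg.inv(alpha * np.eye(d) + beta * A.T @ A)
    m = beta * S @ A.T @ y
    # Expectation of R(theta) = 0.5 * ||theta||^2:
    #   under the prior:     d / (2 * alpha)
    #   under the posterior: 0.5 * (||m||^2 + tr(S))
    # Equating the two gives the fixed-point update below.
    alpha = d / (m @ m + np.trace(S))

print("adapted alpha:", alpha)
```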

    An adaptive RKHS regularization for Fredholm integral equations

    Regularization is a long-standing challenge for ill-posed linear inverse problems, and a prototype is the Fredholm integral equation of the first kind. We introduce a practical RKHS regularization algorithm adaptive to the discrete noisy measurement data and the underlying linear operator. This RKHS arises naturally in a variational approach, and its closure is the function space in which we can identify the true solution. We prove that the RKHS-regularized estimator has a mean-square error converging linearly as the noise scale decreases, with a multiplicative factor smaller than that of the commonly used L^2-regularized estimator. Furthermore, numerical results demonstrate that the RKHS regularizer significantly outperforms the L^2 regularizer either when the noise level decays or when the observation mesh is refined. Comment: 18 pages
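
    As a point of reference (not the paper's adaptive RKHS method), the commonly used L^2 (Tikhonov) regularized estimator for a discretized Fredholm equation of the first kind can be sketched as follows; the kernel, grid, noise level, and regularization parameter are illustrative assumptions.

```python
import numpy as np

# Discretize a Fredholm integral equation of the first kind on [0, 1]:
#   (A f)(s) = int k(s, t) f(t) dt = g(s),  with an assumed smoothing kernel.
m = 100
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02) * h    # assumed kernel matrix
f_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
g = K @ f_true + 1e-3 * rng.standard_normal(m)             # noisy measurements

# L^2 (Tikhonov) regularized estimator: argmin ||K f - g||^2 + lam * ||f||^2
lam = 1e-4                                                  # illustrative choice
f_l2 = np.linalg.solve(K.T @ K + lam * np.eye(m), K.T @ g)
print("relative L^2 error:", np.linalg.norm(f_l2 - f_true) / np.linalg.norm(f_true))
```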

    A Threshold Regularization Method for Inverse Problems

    A number of regularization methods for discrete inverse problems consist in considering weighted versions of the usual least-squares solution. However, these so-called filter methods are generally restricted to monotonic transformations, e.g., Tikhonov regularization or the spectral cut-off. In this paper, we point out that in several cases non-monotonic sequences of filters are more efficient. We study a regularization method that naturally extends the spectral cut-off procedure to non-monotonic sequences and provide several oracle inequalities, showing the method to be nearly optimal under mild assumptions. We then extend the method to inverse problems with a noisy operator and provide efficiency results in a newly introduced conditional framework.
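
    A sketch of the filter-method viewpoint referred to here: expand the least-squares solution in the SVD of the operator and damp each spectral component by a filter factor. The Tikhonov and spectral cut-off filters below are monotone in the singular-value index, while the thresholding filter keeps components according to the size of the observed coefficients and is therefore non-monotone; the test problem and thresholds are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Filter-based regularization of a discrete inverse problem y = A x + noise:
# damp each SVD component of the least-squares solution by a filter factor.
rng = np.random.default_rng(2)
n = 60
A = np.linalg.qr(rng.standard_normal((n, n)))[0] @ np.diag(0.9 ** np.arange(n))
x_true = rng.standard_normal(n)
y = A @ x_true + 1e-3 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
coeffs = U.T @ y                                   # observed spectral coefficients

def filtered_solution(filters):
    # x_reg = sum_i f_i * (u_i^T y / sigma_i) * v_i
    return Vt.T @ (filters * coeffs / s)

tikhonov  = filtered_solution(s**2 / (s**2 + 1e-4))                       # monotone
cutoff    = filtered_solution((s >= 0.05).astype(float))                  # monotone
threshold = filtered_solution((np.abs(coeffs) >= 3e-3).astype(float))     # non-monotone

for name, x in [("Tikhonov", tikhonov), ("cut-off", cutoff), ("threshold", threshold)]:
    print(name, np.linalg.norm(x - x_true))
```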

    Image Restoration Using Joint Statistical Modeling in Space-Transform Domain

    This paper presents a novel strategy for high-fidelity image restoration by characterizing both the local smoothness and the nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are threefold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism for combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the resulting severely underdetermined inverse problem, with a theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal verify the effectiveness of the proposed algorithm. Comment: 14 pages, 18 figures, 7 tables, to be published in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT). A high-resolution PDF version and code can be found at: http://idm.pku.edu.cn/staff/zhangjian/IRJSM
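
    To illustrate the variable-splitting machinery the paper builds on (not the authors' JSM model itself), here is a generic Split Bregman iteration for an l1-regularized linear inverse problem min_x 0.5*||H x - b||^2 + lam*||D x||_1; the blur operator H, the difference operator D, and all parameters are illustrative assumptions.

```python
import numpy as np

def split_bregman(H, D, b, lam=0.1, mu=1.0, iters=100):
    """Generic Split Bregman for 0.5*||H x - b||^2 + lam*||D x||_1 (illustrative)."""
    n = H.shape[1]
    x = np.zeros(n)
    d = np.zeros(D.shape[0])
    bb = np.zeros(D.shape[0])          # Bregman variable
    M = H.T @ H + mu * D.T @ D
    Htb = H.T @ b
    for _ in range(iters):
        # x-subproblem: quadratic, solved exactly.
        x = np.linalg.solve(M, Htb + mu * D.T @ (d - bb))
        # d-subproblem: soft thresholding (shrinkage).
        v = D @ x + bb
        d = np.sign(v) * np.maximum(np.abs(v) - lam / mu, 0.0)
        # Bregman update.
        bb = bb + D @ x - d
    return x

# Tiny 1-D deblurring example with a first-difference regularizer (assumed).
rng = np.random.default_rng(3)
n = 80
H = np.eye(n)
for k in range(1, 4):                   # simple moving-average blur
    H += np.eye(n, k=k) + np.eye(n, k=-k)
H /= 7.0
D = np.diff(np.eye(n), axis=0)          # first-difference operator
x_true = np.repeat([0.0, 1.0, -0.5, 2.0], n // 4)
b = H @ x_true + 0.01 * rng.standard_normal(n)
x_rec = split_bregman(H, D, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```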

    Study of noise effects in electrical impedance tomography with resistor networks

    We present a study of the numerical solution of the two-dimensional electrical impedance tomography problem with noisy measurements of the Dirichlet-to-Neumann map. The inversion uses parametrizations of the conductivity on optimal grids. The grids are optimal in the sense that finite volume discretizations on them give spectrally accurate approximations of the Dirichlet-to-Neumann map. These approximations are the Dirichlet-to-Neumann maps of special resistor networks that are uniquely recoverable from the measurements. Inversion on optimal grids has been proposed and analyzed recently, but the study of noise effects on the inversion has not been carried out. In this paper we present a numerical study of both the linearized and the nonlinear inverse problem. We take three different parametrizations of the unknown conductivity with the same number of degrees of freedom. We find that the parametrization induced by the inversion on optimal grids is the most efficient of the three, because it gives the smallest standard deviation of the maximum a posteriori estimates of the conductivity, uniformly in the domain. For the nonlinear problem we compute the mean and variance of the maximum a posteriori estimates of the conductivity on optimal grids. For small noise, the estimates are unbiased and their variance is very close to the optimal one given by the Cramer-Rao bound. For larger noise we use regularization and quantify the trade-off between reducing the variance and introducing bias in the solution. Both the full and partial measurement setups are considered. Comment: submitted to Inverse Problems and Imaging
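
    A small Monte Carlo illustration of this methodology (not the EIT setup itself): for an assumed linear-Gaussian model, estimate the mean and variance of MAP (Tikhonov-regularized) estimates over noise realizations, compare the variance with the Cramer-Rao bound, and observe the bias-variance trade-off as the regularization strength grows. The forward map, noise level, and regularization strengths are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 5
J = rng.standard_normal((n, p))             # stand-in for a linearized forward map
x_true = rng.standard_normal(p)
sigma = 0.05                                # measurement noise standard deviation

# Cramer-Rao bound on the covariance of unbiased estimators.
crb = sigma**2 * np.linalg.inv(J.T @ J)

def map_estimate(y, gamma):
    # Gaussian-prior MAP estimate = Tikhonov-regularized least squares.
    return np.linalg.solve(J.T @ J + gamma * np.eye(p), J.T @ y)

for gamma in [0.0, 1.0, 10.0]:
    est = np.array([map_estimate(J @ x_true + sigma * rng.standard_normal(n), gamma)
                    for _ in range(2000)])
    bias = est.mean(axis=0) - x_true
    var = est.var(axis=0)
    print(f"gamma={gamma:5.1f}  max|bias|={np.abs(bias).max():.4f}  "
          f"mean var={var.mean():.6f}  mean CRB={np.diag(crb).mean():.6f}")
```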