
    Tikhonov-type iterative regularization methods for ill-posed inverse problems: theoretical aspects and applications

    Ill-posed inverse problems arise in many fields of science and engineering. The ill-conditioning and the large dimension of these problems make their numerical solution very challenging. In this thesis we construct several algorithms for solving ill-posed inverse problems. Starting from the classical Tikhonov regularization method, we develop iterative methods that enhance the performance of the original method. To ensure the accuracy of the constructed algorithms, we incorporate a priori knowledge of the exact solution and strengthen the regularization term. By exploiting the structure of the problem, we are also able to achieve fast computation even when the size of the problem becomes very large. We construct algorithms that enforce constraints on the reconstruction, such as nonnegativity or flux conservation, and that employ enhanced versions of the Euclidean norm using a regularization operator, as well as different semi-norms, such as the Total Variation, for the regularization term. For most of the proposed algorithms we provide efficient strategies for choosing the regularization parameters, which usually rely on knowledge of the norm of the noise that corrupts the data. For each method we analyze the theoretical properties in the finite-dimensional case or in the more general setting of Hilbert spaces. Numerical examples demonstrate the good performance of the proposed algorithms in terms of both accuracy and efficiency.
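    The abstract itself contains no formulas; as a point of reference only, below is a minimal sketch of classical Tikhonov regularization with a simple discrepancy-principle parameter choice based on the noise norm. The operator A, data b, noise-level estimate delta, and all function names are illustrative assumptions, not the thesis's actual algorithms.

    import numpy as np

    def tikhonov(A, b, mu):
        # Solve min_x ||A x - b||^2 + mu * ||x||^2 via the normal equations.
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

    def discrepancy_choice(A, b, delta, mu0=1.0, tau=1.01, factor=0.5, max_iter=50):
        # Shrink mu geometrically until the residual norm drops below tau * delta,
        # i.e. a basic discrepancy-principle rule that uses the known noise norm.
        mu = mu0
        for _ in range(max_iter):
            x = tikhonov(A, b, mu)
            if np.linalg.norm(A @ x - b) <= tau * delta:
                break
            mu *= factor
        return x, mu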

    SIMBA: scalable inversion in optical tomography using deep denoising priors

    Two features desired in a three-dimensional (3D) optical tomographic image reconstruction algorithm are the ability to reduce imaging artifacts and to process large data volumes quickly. Traditional iterative inversion algorithms are impractical in this context due to their heavy computational and memory requirements. We propose and experimentally validate a novel scalable iterative mini-batch algorithm (SIMBA) for fast and high-quality optical tomographic imaging. SIMBA enables high-quality imaging by combining two complementary information sources: the physics of the imaging system characterized by its forward model and the imaging prior characterized by a denoising deep neural net. SIMBA easily scales to very large 3D tomographic datasets by processing only a small subset of measurements at each iteration. We establish the theoretical fixed-point convergence of SIMBA under nonexpansive denoisers for convex data-fidelity terms. We validate SIMBA on both simulated and experimentally collected intensity diffraction tomography (IDT) datasets. Our results show that SIMBA can significantly reduce the computational burden of 3D image formation without sacrificing imaging quality.
    https://arxiv.org/abs/1911.13241
    First author draft
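    A minimal sketch of the kind of mini-batch plug-and-play iteration the abstract describes: a gradient step on the data fidelity of a randomly chosen subset of measurements, followed by a denoiser acting as the imaging prior. The names A_list, b_list, denoise, and the step size gamma are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def simba_style(A_list, b_list, denoise, x0, gamma=1e-3, batch=4, iters=100, seed=0):
        # A_list[i], b_list[i]: forward operator and measurements for the i-th view.
        rng = np.random.default_rng(seed)
        x = x0.copy()
        for _ in range(iters):
            idx = rng.choice(len(A_list), size=batch, replace=False)
            # Mini-batch gradient of 0.5 * sum_i ||A_i x - b_i||^2 over the chosen subset.
            grad = sum(A_list[i].T @ (A_list[i] @ x - b_list[i]) for i in idx) / batch
            # The learned denoiser plays the role of the prior in the update.
            x = denoise(x - gamma * grad)
        return x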