Tikhonov-type iterative regularization methods for ill-posed inverse problems: theoretical aspects and applications
Ill-posed inverse problems arise in many fields of science and engineering. Their ill-conditioning and large dimension make the task of numerically solving such problems very challenging.
In this thesis we construct several algorithms for solving ill-posed inverse problems. Starting from the classical Tikhonov regularization method, we develop iterative methods that improve on the performance of the original method.
To ensure the accuracy of the constructed algorithms, we incorporate a priori knowledge of the exact solution and strengthen the regularization term. By exploiting the structure of the problem, we also achieve fast computation even when the size of the problem becomes very large.
We construct algorithms that enforce constraints on the reconstruction, such as nonnegativity or flux conservation, and we employ enhanced versions of the Euclidean norm, using a regularization operator and different semi-norms, such as the Total Variation, for the regularization term.
For most of the proposed algorithms we provide efficient strategies for choosing the regularization parameters, which, in most cases, rely on knowledge of the norm of the noise that corrupts the data.
For each method we analyze the theoretical properties either in the finite-dimensional case or in the more general setting of Hilbert spaces.
Numerical examples demonstrate the good performance of the proposed algorithms in terms of both accuracy and efficiency.
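As a sketch of the starting point described above, classical Tikhonov regularization replaces the unstable least-squares problem min ||Ax - b||^2 with min ||Ax - b||^2 + lam^2 ||x||^2. A minimal NumPy illustration follows; the 8x8 Hilbert test matrix, the noise level, and lam = 1e-4 are illustrative choices, not values taken from the thesis.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Classical Tikhonov: argmin ||A x - b||^2 + lam^2 ||x||^2,
    computed here via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Ill-conditioned test problem: an 8x8 Hilbert matrix with noisy data.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)

x_naive = np.linalg.solve(A, b)  # noise amplified by the ill-conditioning
x_reg = tikhonov(A, b, 1e-4)     # regularization damps the amplification
```

Even this tiny example shows the characteristic effect: the naive solve amplifies the 1e-6 noise by the reciprocal of the smallest singular values, while the regularized solution stays near x_true.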
A conjugate-gradient-type rational Krylov subspace method for ill-posed problems
Conjugate gradients on the normal equations (CGNE) is a popular method to regularise linear inverse problems. The idea of the method can be summarised as minimising the residual over a suitable Krylov subspace. It is shown that using the same idea for the shift-and-invert rational Krylov subspace yields an order-optimal regularisation scheme.
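The CGNE idea the abstract refers to, minimising the residual over a growing Krylov subspace with early stopping acting as the regularisation, can be sketched in plain NumPy; this is the standard polynomial-Krylov version, not the rational-Krylov variant the paper proposes.

```python
import numpy as np

def cgne(A, b, n_iters):
    """Conjugate gradients applied to the normal equations A^T A x = A^T b.
    Stopping after few iterations acts as regularization: the k-th iterate
    minimizes ||A x - b|| over the Krylov subspace K_k(A^T A, A^T b)."""
    x = np.zeros(A.shape[1])
    r = A.T @ b               # normal-equations residual at x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(n_iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Because the iterates minimise the residual over nested subspaces, the residual norm is non-increasing in the iteration count; the regularising effect comes from stopping before the small singular values are resolved.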
Hybrid Projection Methods for Large-scale Inverse Problems with Mixed Gaussian Priors
When solving ill-posed inverse problems, a good choice of the prior is critical for the computation of a reasonable solution. A common approach is to include a Gaussian prior, which is defined by a mean vector and a symmetric and positive definite covariance matrix, and to use iterative projection methods to solve the corresponding regularized problem. However, a main challenge for many of these iterative methods is that the prior covariance matrix must be known and fixed (up to a constant) before starting the solution process. In this paper, we develop hybrid projection methods for inverse problems with mixed Gaussian priors where the prior covariance matrix is a convex combination of matrices and the mixing parameter and the regularization parameter do not need to be known in advance. Such scenarios may arise when data is used to generate a sample prior covariance matrix (e.g., in data assimilation) or when different priors are needed to capture different qualities of the solution. The proposed hybrid methods are based on a mixed Golub-Kahan process, which is an extension of the generalized Golub-Kahan bidiagonalization, and a distinctive feature of the proposed approach is that both the regularization parameter and the weighting parameter for the covariance matrix can be estimated automatically during the iterative process. Furthermore, for problems where training data are available, various data-driven covariance matrices (including those based on learned covariance kernels) can be easily incorporated. Numerical examples from tomographic reconstruction demonstrate the potential for these methods.
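To illustrate the generic hybrid-projection idea (project with Golub-Kahan bidiagonalization, then regularize the small projected problem), here is a minimal sketch. The mixed-prior machinery and the automatic parameter selection of the paper are not reproduced; the fixed lam below, and the absence of reorthogonalization, are simplifying assumptions.

```python
import numpy as np

def hybrid_gk_tikhonov(A, b, k, lam):
    """Project onto a k-dimensional Krylov subspace via Golub-Kahan
    bidiagonalization, then Tikhonov-regularize the projected problem."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))        # lower bidiagonal: A V_k = U_{k+1} B_k
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= B[j, j - 1] * V[:, j - 1]
        B[j, j] = np.linalg.norm(v)           # no reorthogonalization here
        V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u)
        U[:, j + 1] = u / B[j + 1, j]
    # Projected Tikhonov problem: min ||B y - beta*e1||^2 + lam^2 ||y||^2.
    rhs = beta * B[0]
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), rhs)
    return V @ y
```

The payoff in a genuine hybrid method is that the projected problem is tiny, so the regularization parameter can be re-chosen cheaply at every iteration; when k equals the number of unknowns, the projected solution coincides with the full Tikhonov solution.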
Scalable Block Gibbs Sampling for Image Deblurring in X-Ray Radiography
Quantitative image analysis in the security sciences formulates an image deblurring problem as a Bayesian inverse problem to reduce and quantify noise and blur. We consider images of size 16 megapixels and, since each pixel represents an unknown, the dimension of the Bayesian inverse problem is on the order of 10^7. The large dimension poses numerical and computational difficulties for two reasons. First, Markov chain Monte Carlo (MCMC), typically used to solve a Bayesian inverse problem, is generally slow to converge in high dimensions. Second, even generating one step in a Markov chain is challenging at this size. We present a Gibbs sampler that is scalable to the large dimension required in the security sciences; its scalability is achieved in two steps. We (i) accelerate MCMC convergence by exploiting banded structure in the posterior precision matrix; and (ii) use a matrix-free implementation, because constructing and storing even sparse matrices is infeasible in our target application.
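As a toy illustration of the Gibbs idea underlying the sampler (alternately drawing each block of unknowns from its conditional distribution given the rest), here is a two-variable Gaussian example. The banded-precision, matrix-free machinery of the paper is far beyond this sketch, and the correlation and sample counts below are arbitrary.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, rng):
    """Toy Gibbs sampler for a zero-mean bivariate normal with unit
    variances and correlation rho: alternately draw each coordinate
    from its conditional distribution given the other."""
    x = y = 0.0
    s = np.sqrt(1.0 - rho**2)         # conditional standard deviation
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, s)    # x | y  ~  N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, s)    # y | x  ~  N(rho*x, 1 - rho^2)
        samples[i] = x, y
    return samples

samples = gibbs_bivariate_normal(0.8, 20000, np.random.default_rng(1))
```

In the paper's setting each "coordinate" is a block of pixels, and the banded posterior precision is what makes each conditional draw cheap; the alternating structure, however, is exactly the one sketched here.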
Iterated Tikhonov regularization with a general penalty term
Tikhonov regularization is one of the most popular approaches to solving linear discrete ill-posed problems. The choice of the regularization matrix may significantly affect the quality of the computed solution. When the regularization matrix is the identity, iterated Tikhonov regularization can yield computed approximate solutions of higher quality than (standard) Tikhonov regularization. This paper provides an analysis of iterated Tikhonov regularization with a regularization matrix different from the identity. Computed examples illustrate the performance of this method.
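One common formulation of iterated Tikhonov with a general regularization matrix L updates x_{k+1} = x_k + (A^T A + lam L^T L)^{-1} A^T (b - A x_k), so each sweep solves a Tikhonov problem for the current residual. A minimal sketch follows; the first-difference L and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def iterated_tikhonov(A, b, L, lam, n_iters):
    """Iterated Tikhonov with a general regularization matrix L:
    x_{k+1} = x_k + (A^T A + lam * L^T L)^{-1} A^T (b - A x_k),
    starting from x_0 = 0."""
    M = A.T @ A + lam * (L.T @ L)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + np.linalg.solve(M, A.T @ (b - A @ x))
    return x

# Illustrative general penalty: a first-difference operator (4 x 5),
# which penalizes oscillation rather than the size of x itself.
L = np.diff(np.eye(5), axis=0)
```

With n_iters = 1 this reduces to standard general-form Tikhonov; further iterations progressively remove the regularization bias, which is why iterated variants can outperform a single Tikhonov solve.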