Symmetrization Techniques in Image Deblurring
This paper presents a couple of preconditioning techniques that can be used
to enhance the performance of iterative regularization methods applied to image
deblurring problems with a variety of point spread functions (PSFs) and
boundary conditions. More precisely, we first consider the anti-identity
preconditioner, which symmetrizes the coefficient matrix associated with problems
with zero boundary conditions, allowing the use of MINRES as a regularization
method. When considering more sophisticated boundary conditions and strongly
nonsymmetric PSFs, the anti-identity preconditioner improves the performance of
GMRES. We then consider both stationary and iteration-dependent regularizing
circulant preconditioners that, applied in connection with the anti-identity
matrix and both standard and flexible Krylov subspaces, speed up the
iterations. A theoretical result about the clustering of the eigenvalues of the
preconditioned matrices is proved in a special case. The results of many
numerical experiments are reported to show the effectiveness of the new
preconditioning techniques, including when considering the deblurring of sparse
images.
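The symmetrization idea can be illustrated with a minimal sketch (not the paper's code): with zero boundary conditions a 1-D blurring matrix is Toeplitz, hence persymmetric, i.e. A^T = J A J with J the anti-identity (row-reversal) matrix; it follows that J A is symmetric, so MINRES can be applied to J A x = J b. The random PSF coefficients below are purely illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative 1-D blur with zero boundary conditions: the coefficient
# matrix is Toeplitz, built from a (nonsymmetric) random PSF.
n = 6
rng = np.random.default_rng(0)
c = rng.random(n)                                 # first column
r = np.concatenate(([c[0]], rng.random(n - 1)))   # first row (c[0] shared)
A = toeplitz(c, r)                                # nonsymmetric in general

J = np.fliplr(np.eye(n))                          # anti-identity matrix

# Toeplitz => persymmetric (A^T = J A J), hence J A is symmetric.
JA = J @ A
assert np.allclose(JA, JA.T)
```

Since J A inherits the symmetry needed by MINRES at essentially no cost (J only reverses row order), the symmetrized system J A x = J b can be handed to any symmetric Krylov solver.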
Some transpose-free CG-like solvers for nonsymmetric ill-posed problems
This paper introduces and analyzes an original class of Krylov subspace methods that provide an efficient alternative to many well-known conjugate-gradient-like (CG-like) Krylov solvers for square nonsymmetric linear systems arising from discretizations of inverse ill-posed problems. The main idea underlying the new methods is to consider some rank-deficient approximations of the transpose of the system matrix, obtained by running the (transpose-free) Arnoldi algorithm, and then apply some Krylov solvers to a formally right-preconditioned system of equations. Theoretical insight is given, and many numerical tests show that the new solvers outperform classical Arnoldi-based or CG-like methods in a variety of situations.
Gazzola, S.; Novati, P.
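The transpose-free building block referenced here is the standard Arnoldi process, which constructs an orthonormal Krylov basis using only products with A (never with A^T). The sketch below shows the Arnoldi relation A V_k = V_{k+1} H_k that such transpose approximations rest on; how the paper actually assembles its rank-deficient transpose surrogate is not reproduced here.

```python
import numpy as np

def arnoldi(A, b, k):
    """Transpose-free Arnoldi: orthonormal basis V of the Krylov subspace
    span{b, Ab, ..., A^{k-1} b} and upper Hessenberg H with
    A @ V[:, :k] == V @ H (the Arnoldi relation)."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]                 # only A is applied, never A^T
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, k = 40, 8
A = rng.random((n, n))                  # generic nonsymmetric matrix
b = rng.random(n)
V, H = arnoldi(A, b, k)
assert np.allclose(A @ V[:, :k], V @ H)
```

A rank-k quantity such as V[:, :k] @ H[:k, :].T @ V[:, :k].T, computed from these factors alone, is one plausible form of transpose surrogate; the exact construction and the right-preconditioned solve are detailed in the paper.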
Tikhonov-type iterative regularization methods for ill-posed inverse problems: theoretical aspects and applications
Ill-posed inverse problems arise in many fields of science and engineering. Their ill-conditioning and large dimension make numerically solving this kind of problem very challenging.
In this thesis we construct several algorithms for solving ill-posed inverse problems. Starting from the classical Tikhonov regularization method, we develop iterative methods that enhance the performance of the originating method.
In order to ensure the accuracy of the constructed algorithms we incorporate a priori knowledge of the exact solution and strengthen the regularization term. By exploiting the structure of the problem we are also able to achieve fast computation even when the size of the problem becomes very large.
We construct algorithms that enforce constraints on the reconstruction, such as nonnegativity or flux conservation, and exploit enhanced versions of the Euclidean norm using a regularization operator and different seminorms, such as Total Variation, for the regularization term.
For most of the proposed algorithms we provide efficient strategies for the choice of the regularization parameters, which, most of the time, rely on knowledge of the norm of the noise that corrupts the data.
For each method we analyze the theoretical properties in the finite-dimensional case or in the more general case of Hilbert spaces.
Numerical examples demonstrate the good performance of the proposed algorithms in terms of both accuracy and efficiency.
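For reference, the classical Tikhonov method that these iterative algorithms start from computes x_lam = argmin ||A x - b||^2 + lam ||L x||^2, obtained from the regularized normal equations. The sketch below is the textbook method, not any of the thesis's enhanced algorithms; the Vandermonde test problem is an assumed stand-in for an ill-conditioned operator.

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Standard Tikhonov regularization: solve the regularized normal
    equations (A^T A + lam L^T L) x = A^T b.  L defaults to the identity
    (the classical, non-generalized form)."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Ill-conditioned test problem: a Vandermonde matrix on [0, 1].
n = 20
A = np.vander(np.linspace(0, 1, n), n)
x_true = np.ones(n)
b = A @ x_true

x = tikhonov(A, b, lam=1e-8)   # small lam: light filtering of tiny
                               # singular values keeps the solve stable
```

Choosing lam from the noise norm, as the abstract describes, corresponds to rules such as the discrepancy principle, which picks lam so that ||A x_lam - b|| matches the noise level.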
Arnoldi decomposition, GMRES, and preconditioning for linear discrete ill-posed problems
GMRES is one of the most popular iterative methods for the solution of large
linear systems of equations that arise from the discretization of linear
well-posed problems, such as Dirichlet boundary value problems for elliptic
partial differential equations. The method is also applied to iteratively solve
linear systems of equations that are obtained by discretizing linear ill-posed
problems, such as many inverse problems. However, GMRES does not always perform
well when applied to the latter kind of problems. This paper seeks to shed some
light on reasons for the poor performance of GMRES in certain situations, and
discusses some remedies based on specific kinds of preconditioning. The
standard implementation of GMRES is based on the Arnoldi process, which also
can be used to define a solution subspace for Tikhonov or TSVD regularization,
giving rise to the Arnoldi-Tikhonov and Arnoldi-TSVD methods, respectively. The
performance of the GMRES, Arnoldi-Tikhonov, and Arnoldi-TSVD methods is
discussed. Numerical examples illustrate properties of these methods.
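The Arnoldi-Tikhonov idea can be sketched as follows: run k steps of Arnoldi on (A, b), then apply Tikhonov to the small projected problem min_y ||H y - beta e1||^2 + lam ||y||^2 and map back via x = V_k y. This is a minimal illustration under assumed test data, not the paper's implementation; the Arnoldi-TSVD variant would instead truncate the SVD of H.

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: A @ V[:, :k] == V @ H."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k, lam):
    """Tikhonov on the Arnoldi-projected problem:
    y = argmin ||H y - beta e1||^2 + lam ||y||^2,  x = V[:, :k] y."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)          # beta e1
    # Equivalent stacked least-squares formulation of Tikhonov:
    M = np.vstack([H, np.sqrt(lam) * np.eye(k)])
    r = np.concatenate([rhs, np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, r, rcond=None)
    return V[:, :k] @ y

# Assumed test problem: a mild nonsymmetric perturbation of the identity,
# for which a few Krylov steps already give a small residual.
rng = np.random.default_rng(2)
n, k = 30, 5
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = arnoldi_tikhonov(A, b, k, lam=1e-6)
```

With lam driven to zero this reduces to GMRES on the same Krylov subspace, which is why the regularized variants can rescue cases where plain GMRES behaves poorly on ill-posed data.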