
    Fast nonnegative least squares through flexible Krylov subspaces

    Constrained least squares problems arise in a variety of applications, and many iterative methods are already available to compute their solutions. This paper proposes a new, efficient approach to solving nonnegative linear least squares problems. The associated KKT conditions are leveraged to form an adaptively preconditioned linear system, which is then solved by a flexible Krylov subspace method. The new method can be easily applied to image reconstruction problems affected by both Gaussian and Poisson noise, where the components of the solution represent nonnegative intensities. Theoretical insight is given, and numerical experiments and comparisons are presented to validate the new method, which delivers results of equal or better quality than many state-of-the-art nonnegative least squares solvers, with a significant speedup.
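
    As a rough illustration of the idea summarized above, the sketch below combines the KKT complementarity condition with adaptive diagonal scaling of the columns and a few Krylov (LSQR) steps per outer iteration. It is not the paper's flexible Krylov algorithm: the function name, the fixed iteration counts, and the use of SciPy's lsqr are illustrative assumptions.

```python
# A minimal sketch (not the paper's algorithm) of nonnegative least squares via
# adaptive diagonal scaling motivated by the KKT complementarity condition
# x_i * [A^T(Ax - b)]_i = 0: columns are rescaled by the current iterate and
# each inner subproblem is solved by a few LSQR (Krylov) iterations.
import numpy as np
from scipy.sparse.linalg import lsqr

def nnls_sketch(A, b, outer_iters=30, inner_iters=20, eps=1e-10):
    m, n = A.shape
    x = np.ones(n)                                 # strictly positive starting guess
    for _ in range(outer_iters):
        d = np.maximum(x, eps)                     # D_k = diag(max(x_k, eps))
        AD = A * d                                 # scale columns: A D_k
        y = lsqr(AD, b, iter_lim=inner_iters)[0]   # approximate min ||A D_k y - b||
        x = np.maximum(d * y, 0.0)                 # map back and enforce nonnegativity
    return x

# Tiny usage example on a random overdetermined problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.maximum(rng.standard_normal(30), 0.0)
b = A @ x_true + 1e-3 * rng.standard_normal(100)
x_hat = nnls_sketch(A, b)
print("negative entries:", (x_hat < 0).sum(), "residual:", np.linalg.norm(A @ x_hat - b))
```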

    Flexible Krylov Methods for ℓp Regularization

    In this paper we develop flexible Krylov methods for efficiently computing regularized solutions to large-scale linear inverse problems with an ℓ2 fit-to-data term and an ℓp penalization term, for p ≥ 1. First we approximate the p-norm penalization term as a sequence of 2-norm penalization terms using adaptive regularization matrices in an iteratively reweighted norm fashion, and then we exploit flexible preconditioning techniques to efficiently incorporate the weight updates. To handle general (nonsquare) ℓp-regularized least-squares problems, we introduce a flexible Golub–Kahan approach and exploit it within a Krylov–Tikhonov hybrid framework. Furthermore, we show that both the flexible Golub–Kahan and the flexible Arnoldi approaches for p = 1 can be used to efficiently compute solutions that are sparse with respect to some transformations. The key benefits of our approach compared to existing optimization methods for ℓp regularization are that inner-outer iteration schemes are replaced by efficient projection methods on linear subspaces of increasing dimension, and that expensive regularization parameter selection techniques can be avoided. Theoretical insights are provided, and numerical results from image deblurring and tomographic reconstruction illustrate the benefits of this approach compared to well-established methods.
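
    The reweighting step described above can be illustrated with a plain inner-outer IRLS loop. The sketch below only shows how the ℓp penalty is approximated by adaptively reweighted 2-norm terms; it deliberately omits the flexible Golub–Kahan/Arnoldi machinery that lets the paper avoid inner-outer iterations, and its function name, parameter values, and dense solves are illustrative assumptions.

```python
# A minimal inner-outer IRLS sketch for l_p regularization (p >= 1): the penalty
# ||x||_p^p is approximated by a reweighted 2-norm ||W_k x||_2^2 with
# W_k = diag(|x_i|^{(p-2)/2}), and each quadratic subproblem is solved directly.
import numpy as np

def irls_lp(A, b, lam=1e-2, p=1.0, outer_iters=20, tau=1e-8):
    m, n = A.shape
    x = np.linalg.lstsq(A, b, rcond=None)[0]              # unregularized start
    for _ in range(outer_iters):
        w = np.maximum(np.abs(x), tau) ** ((p - 2) / 2)   # IRLS weights (smoothed)
        # Solve min ||Ax - b||^2 + lam^2 ||W x||^2 via the stacked least-squares system.
        K = np.vstack([A, lam * np.diag(w)])
        rhs = np.concatenate([b, np.zeros(n)])
        x = np.linalg.lstsq(K, rhs, rcond=None)[0]
    return x

# Usage: recover a sparse vector (p = 1 promotes sparsity).
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
b = A @ x_true + 1e-3 * rng.standard_normal(80)
x_hat = irls_lp(A, b, lam=5e-2, p=1.0)
print("largest entries:", np.sort(np.abs(x_hat))[-5:])
```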

    Fixing Nonconvergence of Algebraic Iterative Reconstruction with an Unmatched Backprojector

    We consider algebraic iterative reconstruction methods with applications in image reconstruction. In particular, we are concerned with methods based on an unmatched projector/backprojector pair; i.e., the backprojector is not the exact adjoint or transpose of the forward projector. Such situations are common in large-scale computed tomography, and we consider the case where the method does not converge due to the nonsymmetry of the iteration matrix. We propose a modified algorithm that incorporates a small shift parameter, and we give the conditions that guarantee convergence of this method to a fixed point of a slightly perturbed problem. We also give perturbation bounds for this fixed point. Moreover, we discuss how to use Krylov subspace methods to efficiently estimate the leftmost eigenvalue of a certain matrix in order to select a proper shift parameter. The modified algorithm is illustrated with test problems from computed tomography.
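
    A minimal sketch of the shift idea, assuming a simple Richardson/SIRT-type update and a dense eigenvalue computation in place of the Krylov-based estimate of the leftmost eigenvalue; the shift and relaxation choices below are illustrative, not the paper's prescriptions.

```python
# A sketch of a shifted stationary iteration with an unmatched transpose:
# B approximates A^T but B != A^T, so BA may have eigenvalues with negative
# real part and the unshifted iteration can fail to converge. Adding a small
# shift alpha targets the slightly perturbed fixed-point equation
# (B A + alpha I) x = B b. All parameter choices below are illustrative.
import numpy as np

def shifted_unmatched_iteration(A, B, b, alpha, omega, iters=2000):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * (B @ (b - A @ x) - alpha * x)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 40))
B = A.T + 0.02 * rng.standard_normal((40, 60))      # unmatched "backprojector"
x_true = rng.standard_normal(40)
b = A @ x_true

# The abstract's method estimates the leftmost eigenvalue of the relevant
# matrix with a Krylov solver; a dense eigensolver is used here for brevity.
evals = np.linalg.eigvals(B @ A)
alpha = max(0.0, -evals.real.min()) + 1e-3          # small shift parameter
omega = 1.0 / (evals.real.max() + alpha)            # crude relaxation choice

x_hat = shifted_unmatched_iteration(A, B, b, alpha, omega)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```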

    Regularization techniques based on Krylov subspace methods for ill-posed linear systems

    This thesis is focused on the regularization of large-scale linear discrete ill-posed problems. Problems of this kind arise in a variety of applications, and, in a continuous setting, they are often formulated as Fredholm integral equations of the first kind with smooth kernel, modeling an inverse problem (i.e., the unknown of these equations is the cause of an observed effect). Upon discretization, linear systems whose coefficient matrix is ill-conditioned and whose right-hand side vector is affected by some perturbations (noise) must be solved. In this setting, a straightforward solution of the available linear system is meaningless because the computed solution would be dominated by errors; moreover, for large-scale problems, solving the available system directly could be computationally infeasible. Therefore, in order to recover a meaningful approximation of the original solution, some regularization must be employed, i.e., the original linear system must be replaced by a nearby problem having better numerical properties. The first part of this thesis (Chapter 1) gives an overview of inverse problems and briefly describes their properties in the continuous setting; then, in a discrete setting, the most well-known regularization techniques relying on some factorization of the system matrix are surveyed. The remaining part of the thesis is concerned with iterative regularization strategies based on Krylov subspace methods, which are well suited for large-scale problems. More precisely, in Chapter 2, an extensive overview of the Krylov subspace methods most successfully employed for regularization purposes is presented: historically, the first methods to be used were related to the normal equations, and many issues linked to the analysis of their behavior have already been addressed. The situation is different for the methods based on the Arnoldi algorithm, whose regularizing properties are not yet well understood or widely accepted. Therefore, still in Chapter 2, a novel analysis of the approximation properties of the Arnoldi algorithm when employed to solve linear discrete ill-posed problems is presented, in order to provide some insight into the use of Arnoldi-based methods for regularization purposes. The core results of this thesis are related to the class of Arnoldi-Tikhonov methods, first introduced about ten years ago and described in Chapter 3. The Arnoldi-Tikhonov approach to regularization consists of solving a Tikhonov-regularized problem by means of an iterative strategy based on the Arnoldi algorithm. Compared with a purely iterative approach to regularization, Arnoldi-Tikhonov methods can deliver more accurate approximations by easily incorporating some information about the behavior of the solution into the reconstruction process. In connection with Arnoldi-Tikhonov methods, many open questions still remain, the most significant ones being the choice of the regularization parameters and the choice of the regularization matrices. The first issue is addressed in Chapter 4, where two new, efficient, and original parameter selection strategies to be employed with the Arnoldi-Tikhonov methods are derived and extensively tested; still in Chapter 4, a novel extension of the Arnoldi-Tikhonov method to the multi-parameter Tikhonov regularization case is described. 
Finally, in Chapter 5, two efficient and innovative schemes to approximate the solution of nonlinear regularized problems are presented: more precisely, the regularization terms originally defined by the 1-norm or by the Total Variation functional are approximated by adaptively updating suitable regularization matrices within the Arnoldi-Tikhonov iterations. Throughout the thesis, the results of many numerical experiments are presented in order to show the performance of the newly proposed methods and to compare them with existing strategies.
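
    A minimal sketch of the basic Arnoldi-Tikhonov step described above, assuming a square system, the identity as regularization matrix, and a fixed regularization parameter (the thesis develops adaptive parameter choices and general regularization matrices); the function names and synthetic test problem are illustrative.

```python
# A minimal Arnoldi-Tikhonov sketch: build the Arnoldi decomposition
# A V_m = V_{m+1} H_m with v_1 = b / ||b||, then solve the projected Tikhonov
# problem  min_y ||H_m y - beta e_1||^2 + lam^2 ||y||^2  and set x_m = V_m y.
import numpy as np

def arnoldi(A, b, m):
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:                    # (lucky) breakdown
            return V[:, : j + 1], H[: j + 2, : j + 1], beta
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta

def arnoldi_tikhonov(A, b, m=30, lam=1e-3):
    V, H, beta = arnoldi(A, b, m)
    k = H.shape[1]
    c = np.zeros(H.shape[0])
    c[0] = beta                                   # beta * e_1
    K = np.vstack([H, lam * np.eye(k)])           # stacked projected Tikhonov system
    y = np.linalg.lstsq(K, np.concatenate([c, np.zeros(k)]), rcond=None)[0]
    return V[:, :k] @ y

# Synthetic ill-conditioned test problem with smoothly decaying spectral content.
rng = np.random.default_rng(3)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -8, n)) @ U.T
x_true = U @ (1.0 / np.arange(1, n + 1))
b = A @ x_true + 1e-4 * rng.standard_normal(n)
x_reg = arnoldi_tikhonov(A, b, m=30, lam=1e-3)
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```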

    7. Minisymposium on Gauss-type Quadrature Rules: Theory and Applications

    • …