
    Preconditioned iterative methods for Navier-Stokes control problems

    PDE-constrained optimization problems have attracted much recent attention in scientific computing and applied science. In this paper, we discuss preconditioned iterative methods for a class of Navier-Stokes control problems, one of the main problems of this type in fluid dynamics. Having detailed the Oseen-type iteration we use to solve the problems, and having derived the structure of the matrix system to be solved at each step, we utilize the theory of saddle-point systems to develop efficient preconditioned iterative solution techniques for these problems. We also draw on the theory of solving convection-diffusion control problems, as well as a commutator argument, to justify one of the components of the preconditioner.
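    As a rough illustration of the saddle-point machinery mentioned above (not the paper's Oseen-based preconditioner), the sketch below applies a block-diagonal preconditioner with an approximate Schur complement to a small model saddle-point system using SciPy's MINRES; all matrices are illustrative stand-ins.

```python
# A minimal sketch, assuming illustrative stand-in blocks A and B: block-diagonal
# preconditioning of the saddle-point system [[A, B^T], [B, 0]] inside MINRES.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 80
A = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n), format="csc")  # SPD (1,1) block
B = sp.eye(m, n, format="csc")                                           # constraint block
K = sp.bmat([[A, B.T], [B, None]], format="csc")                         # saddle-point matrix
rhs = np.ones(n + m)

# Block-diagonal preconditioner diag(A, S) with a cheap approximate Schur
# complement S ~ B diag(A)^{-1} B^T standing in for B A^{-1} B^T.
S = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()
solve_A, solve_S = spla.factorized(A), spla.factorized(S)

P = spla.LinearOperator(
    (n + m, n + m),
    matvec=lambda v: np.concatenate([solve_A(v[:n]), solve_S(v[n:])]),
)
x, info = spla.minres(K, rhs, M=P)
print("converged" if info == 0 else f"minres returned info={info}")
```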

    A general class of arbitrary order iterative methods for computing generalized inverses

    A family of iterative schemes for approximating the inverse and generalized inverse of a complex matrix is designed, having arbitrary order of convergence p. For each p, a class of iterative schemes appears, and we analyze those members that are able to converge from initial estimates far from the solution. This class generalizes many known iterative methods, which are recovered for particular values of the parameters. The order of convergence is stated in each case, depending on the first non-zero parameter. For several examples, the accessibility of the schemes, that is, the set of initial estimates leading to convergence, is analyzed in order to select those with the widest such sets; this width is related to the value of the first non-zero parameter defining the method. Finally, numerical examples (academic and also from signal processing) are provided to confirm the theoretical results and to show the feasibility and effectiveness of the new methods. This research was supported in part by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE) and in part by VIE from Instituto Tecnologico de Costa Rica (Research #1440037).
    Cordero Barbero, A.; Soto-Quiros, P.; Torregrosa Sánchez, J. R. (2021). A general class of arbitrary order iterative methods for computing generalized inverses. Applied Mathematics and Computation, 409:1-18. https://doi.org/10.1016/j.amc.2021.126381
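    For context, the sketch below implements a classical low-order scheme of this type, the Newton-Schulz iteration X_{k+1} = X_k(2I - A X_k) for the Moore-Penrose inverse; it is the textbook second-order special case, not the paper's arbitrary-order family.

```python
# A minimal sketch of the classical Newton-Schulz iteration for the
# Moore-Penrose inverse; higher-order hyper-power schemes generalize this.
import numpy as np

def newton_schulz_pinv(A, tol=1e-10, max_iter=200):
    """Approximate the Moore-Penrose inverse of A by Newton-Schulz iteration."""
    # Standard convergent starting guess: X0 = A^* / (||A||_1 ||A||_inf).
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        X_new = X @ (2 * I - A @ X)
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))             # rectangular test matrix
X = newton_schulz_pinv(A)
print(np.allclose(X, np.linalg.pinv(A), atol=1e-8))
```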

    Preconditioning for Sparse Linear Systems at the Dawn of the 21st Century: History, Current Developments, and Future Perspectives

    Iterative methods are currently the solvers of choice for large sparse linear systems of equations. However, it is well known that the key factor for accelerating, or even allowing for, convergence is the preconditioner. Research on preconditioning techniques has characterized the last two decades. Nowadays, there are a number of different options to be considered when choosing the most appropriate preconditioner for the specific problem at hand. The present work provides an overview of the most popular algorithms available today, emphasizing their respective merits and limitations. The overview is restricted to algebraic preconditioners, that is, general-purpose algorithms requiring knowledge of the system matrix only, independently of the specific problem it arises from. Along with the traditional distinction between incomplete factorizations and approximate inverses, the most recent developments are considered, including the scalable multigrid and parallel approaches which represent the current frontier of research. A separate section devoted to saddle-point problems, which arise in many different applications, closes the paper.
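    As one concrete instance of the incomplete-factorization class surveyed here, the sketch below builds an ILU preconditioner with SciPy's spilu and uses it inside restarted GMRES; the nonsymmetric matrix is an illustrative stand-in.

```python
# A minimal sketch of an algebraic ILU preconditioner applied inside GMRES,
# assuming a simple nonsymmetric stand-in matrix rather than a real application.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags([4.0, -1.2, -0.8], [0, -1, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)       # incomplete LU factors
M = spla.LinearOperator((n, n), matvec=ilu.solve)         # preconditioner as an operator

x, info = spla.gmres(A, b, M=M, restart=30)
print("converged" if info == 0 else f"gmres returned info={info}")
```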

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a κ-conditioned problem in O(√κ) iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in O(κ^{1/4}) iterations, an order-of-magnitude reduction in iteration count, despite a worst-case bound of O(√κ) iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and is used to solve many large-scale semidefinite programs with error that decreases like O(1/k^2), instead of O(1/k), where k is the iteration index.
    Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
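    The key mechanism, that a linear fixed-point iteration x_{k+1} = T x_k + c can be handed to GMRES by treating I - T as a black-box operator, is easy to reproduce. The sketch below does this for a synthetic contraction T (a stand-in, not SeDuMi's ADMM map) and compares against the plain iteration.

```python
# A minimal sketch: the fixed point of x_{k+1} = T x_k + c solves (I - T) x = c,
# so GMRES can accelerate the scheme using only the iteration map as a matvec.
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
T = 0.9 * Q                        # spectral radius 0.9: the plain iteration converges
c = rng.standard_normal(n)

def fixed_point_step(x):
    return T @ x + c               # one sweep of the underlying linear iteration

# Matrix-free operator for I - T, built only from the iteration map itself.
I_minus_T = spla.LinearOperator((n, n), matvec=lambda x: x - (fixed_point_step(x) - c))

x_gmres, info = spla.gmres(I_minus_T, c)

x_plain = np.zeros(n)
for _ in range(200):
    x_plain = fixed_point_step(x_plain)
print(info, np.linalg.norm(x_gmres - x_plain))
```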

    Preconditioned fast solvers for large linear systems with specific sparse and/or Toeplitz-like structures and applications

    In this thesis, the design of the preconditioners we propose starts from applications instead of treating the problem in a completely general way. The reason is that not all types of linear systems can be addressed with the same tools. In this sense, the techniques for designing efficient iterative solvers depend mostly on properties inherited from the continuous problem that originated the discretized sequence of matrices. Classical examples are locality and isotropy in the PDE context, whose discrete counterparts are sparsity and matrices that are constant along the diagonals, respectively. Therefore, it is often important to take into account the properties of the originating continuous model in order to obtain better performance and to provide an accurate convergence analysis. We consider linear systems that arise in the solution of both linear and nonlinear partial differential equations of both integer and fractional type. For the latter case, an introduction to both the theory and the numerical treatment is given. All the algorithms and strategies presented in this thesis are developed with their parallel implementation in mind. In particular, we consider the processor-co-processor framework, in which the main part of the computation is performed on a Graphics Processing Unit (GPU) accelerator. In Part I we introduce our proposal for sparse approximate inverse preconditioners for the solution of both time-dependent Partial Differential Equations (PDEs), in Chapter 3, and Fractional Differential Equations (FDEs) containing both classical and fractional terms, in Chapter 5. More precisely, we propose a new technique for updating preconditioners when dealing with sequences of linear systems arising from PDEs and FDEs, which can also be used to compute matrix functions of large matrices via quadrature formulas, in Chapter 4, and for the optimal control of FDEs, in Chapter 6. Finally, in Part II, we consider structured preconditioners for quasi-Toeplitz systems. The focus is on the numerical treatment of discretized convection-diffusion equations, in Chapter 7, and on the solution of FDEs with linear multistep formulas in boundary value form, in Chapter 8.
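    To make the structured, Toeplitz-oriented part of this programme concrete, the sketch below applies Strang's circulant preconditioner, a classical choice for symmetric Toeplitz systems, inside GMRES, inverting the circulant with FFTs; the matrix is an illustrative stand-in, not one of the thesis's FDE discretizations.

```python
# A minimal sketch of Strang's circulant preconditioner for a symmetric Toeplitz
# system, applied via FFTs; the Toeplitz symbol here is purely illustrative.
import numpy as np
import scipy.linalg as sla
import scipy.sparse.linalg as spla

n = 512
col = np.r_[6.0, -1.0 / np.arange(1, n) ** 1.5]   # first column t_0, t_1, ..., t_{n-1}
A = sla.toeplitz(col)
b = np.ones(n)

# Strang preconditioner: copy the central diagonals into a circulant, whose
# eigenvalues are the FFT of its first column, so applying C^{-1} costs two FFTs.
c = col.copy()
half = n // 2
c[half + 1:] = col[1:n - half][::-1]              # wrap-around part of the circulant
eig = np.fft.fft(c)

M = spla.LinearOperator(
    (n, n), matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(v) / eig))
)
x, info = spla.gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"gmres returned info={info}")
```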

    Preconditioners for Krylov subspace methods: An overview

    When simulating a mechanism from science or engineering, or an industrial process, one is frequently required to construct a mathematical model and then solve this model numerically. If accurate numerical solutions are necessary or desirable, this can involve solving large-scale systems of equations. One major class of solution methods is that of preconditioned iterative methods, involving preconditioners which are computationally cheap to apply while also capturing information contained in the linear system. In this article, we give a short survey of the field of preconditioning. We introduce a range of preconditioners for partial differential equations, followed by optimization problems, before discussing preconditioners constructed with less standard objectives in mind.
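    The trade-off described above, cheap to apply yet informative about the system, is already visible with the simplest algebraic choice. The sketch below compares conjugate gradient iteration counts without a preconditioner and with a Jacobi (diagonal) preconditioner on a badly scaled SPD stand-in matrix.

```python
# A minimal sketch, assuming a synthetic badly scaled SPD matrix: Jacobi
# preconditioning is nearly free to apply yet sharply reduces CG iterations.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
scales = np.logspace(1, 4, n)                                  # widely varying diagonal scales
A = (sp.diags(scales) + sp.diags([1.0, 1.0], [-1, 1], shape=(n, n))).tocsc()
b = np.ones(n)

def cg_iterations(M=None):
    count = [0]
    spla.cg(A, b, M=M, maxiter=5000,
            callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

M_jacobi = spla.LinearOperator((n, n), matvec=lambda v: v / A.diagonal())
print("CG, no preconditioner:   ", cg_iterations())
print("CG, Jacobi preconditioner:", cg_iterations(M_jacobi))
```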

    Differentiable Gaussianization Layers for Inverse Problems Regularized by Deep Generative Models

    Deep generative models such as GANs and normalizing flows are powerful priors. They can regularize inverse problems to reduce ill-posedness and attain high-quality results. However, the latent vector of such deep generative models can fall out of the desired high-dimensional standard Gaussian distribution during an inversion, particularly in the presence of noise in the data or an inaccurate forward model. In such a case, deep generative models are ineffective in attaining high-fidelity solutions. To address this issue, we propose to reparameterize and Gaussianize the latent vector using novel differentiable data-dependent layers wherein custom operators are defined by solving optimization problems. These proposed layers constrain an inversion to find feasible, in-distribution solutions. We tested and validated our technique on three inversion tasks: compressive-sensing MRI, image deblurring, and eikonal tomography (a nonlinear PDE-constrained inverse problem), using two representative deep generative models, StyleGAN2 and Glow, and achieved state-of-the-art results.
    Comment: 26 pages, 15 figures, 9 tables.
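    To illustrate just the Gaussianization idea (not the paper's differentiable, optimization-defined layers), the sketch below pulls a drifted latent vector back toward a standard Gaussian with a plain rank / inverse-CDF map; the variable names and the KS check are illustrative assumptions.

```python
# A minimal, non-differentiable sketch of Gaussianizing a latent vector by a
# rank / inverse-CDF map; the paper's layers achieve this differentiably.
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(0)
z = rng.standard_normal(512) ** 3        # a latent that has drifted far from N(0, 1)

ranks = np.argsort(np.argsort(z)) + 1    # ranks 1..n of each entry
u = ranks / (len(z) + 1.0)               # map ranks into (0, 1)
z_gauss = norm.ppf(u)                    # inverse Gaussian CDF: approximately N(0, 1)

print("KS distance to N(0,1) before:", round(kstest(z, "norm").statistic, 3),
      " after:", round(kstest(z_gauss, "norm").statistic, 3))
```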