    Robust Dropping Criteria for F-norm Minimization Based Sparse Approximate Inverse Preconditioning

    Dropping tolerance criteria play a central role in sparse approximate inverse preconditioning. Such criteria have, however, received little attention and have been treated heuristically in the following manner: if the magnitude of an entry is below some empirically chosen small positive quantity, the entry is set to zero. The meaning of "small" is vague and has not been considered rigorously, and it has been unclear how dropping tolerances affect the quality and effectiveness of a preconditioner M. In this paper, we focus on the adaptive Power Sparse Approximate Inverse algorithm and establish a mathematical theory of robust selection criteria for dropping tolerances. Using the theory, we derive an adaptive dropping criterion that drops entries of small magnitude dynamically during the setup of M. The proposed criterion makes M as sparse as possible while keeping it of comparable quality to the potentially denser matrix obtained without dropping. As a byproduct, the theory also applies to static F-norm minimization based preconditioning procedures, and a similar dropping criterion is given that can be used to sparsify a matrix after it has been computed by a static sparse approximate inverse procedure. In contrast to the adaptive procedure, dropping in the static procedure does not reduce the setup time of the matrix, but it makes the application of the sparser M in Krylov iterations cheaper. Numerical experiments confirm the theory and illustrate the robustness and effectiveness of the dropping criteria.
    Comment: 27 pages, 2 figures
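    To illustrate the static post-sparsification idea described in the abstract, the following is a minimal sketch, assuming the approximate inverse M is available as a SciPy sparse matrix and that a dropping tolerance tol has already been chosen; the paper's contribution is a robust, theory-based choice of that tolerance, which this sketch does not reproduce.

```python
import numpy as np
import scipy.sparse as sp

def drop_small_entries(M, tol):
    """Sparsify a computed approximate inverse by dropping small entries.

    M   : scipy.sparse matrix approximating A^{-1} (illustrative input)
    tol : dropping tolerance; assumed given here, whereas the paper
          derives a robust criterion for selecting it.
    """
    M = sp.csr_matrix(M, copy=True)
    # Set entries of small magnitude to zero ...
    M.data[np.abs(M.data) < tol] = 0.0
    # ... and remove the explicit zeros so the sparser M is cheaper
    # to apply in Krylov iterations.
    M.eliminate_zeros()
    return M
```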

    Sparse approximate inverse preconditioners on high performance GPU platforms

    Simulation with models based on partial differential equations often requires the solution of (sequences of) large and sparse algebraic linear systems. In multidimensional domains, preconditioned Krylov iterative solvers are often appropriate for these tasks, so the search for efficient preconditioners for Krylov subspace methods is a crucial theme. Recent developments, especially in computing hardware, have renewed interest in approximate inverse preconditioners in factorized form, because their application during the solution process can be more efficient. We present here some experiences focused on the approximate inverse preconditioners proposed by Benzi and Tůma in 1996 and on the sparsification and inversion approach proposed by van Duin in 1999. Computational costs, reorderings and implementation issues are considered on both conventional and innovative computing architectures such as Graphics Processing Units (GPUs).
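    To make the use of such a preconditioner concrete, here is a minimal sketch, assuming sparse factors Z and W are already available whose product Z W^T approximates A^{-1}, as factorized approximate inverses provide; the factor names and the use of SciPy's GMRES are illustrative assumptions, not the authors' implementation.

```python
import scipy.sparse.linalg as spla

def factorized_inverse_preconditioner(Z, W):
    """Wrap sparse factors Z, W with Z @ W.T ~= A^{-1} as a preconditioner.

    Applying the preconditioner amounts to two sparse matrix-vector
    products, which is the kind of operation that maps well onto GPUs.
    """
    n = Z.shape[0]
    return spla.LinearOperator((n, n), matvec=lambda x: Z @ (W.T @ x))

# Illustrative use with a preconditioned Krylov solver:
#   M = factorized_inverse_preconditioner(Z, W)
#   x, info = spla.gmres(A, b, M=M)
```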