
    Solveurs linéaires scalables basés sur des sous-espaces de Krylov élargis avec réduction dynamique des directions de recherche [Scalable Linear Solvers Based on Enlarged Krylov Subspaces with Dynamic Reduction of Search Directions]

    Krylov methods are widely used for solving large sparse linear systems of equations. On distributed architectures, their performance is limited by the communication needed at each iteration of the algorithm. In this paper, we study the use of so-called enlarged Krylov subspaces for reducing the number of iterations, and therefore the overall communication, of Krylov methods. In particular, we consider a reformulation of the Conjugate Gradient method using these enlarged Krylov subspaces: the enlarged Conjugate Gradient method. We present the parallel design of two variants of the enlarged Conjugate Gradient method, as well as their corresponding dynamic versions, where the number of search directions is dynamically reduced during the iterations. For a linear elasticity problem with heterogeneous coefficients, using a block Jacobi preconditioner, we show that this implementation scales up to 16,384 cores and is up to 5.7 times faster than the PETSc implementation of PCG.
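    For orientation, a hedged sketch of the construction this record refers to (notation recalled from the enlarged Krylov subspace literature; the paper's exact formulation may differ): the unknowns are partitioned into t subdomains, the initial residual r_0 is split by an operator T into t vectors that sum to r_0, and the enlarged Krylov subspace of order k is

        \mathcal{K}_{t,k}(A, r_0) \;=\; \operatorname{span}\{\, T(r_0),\; A\,T(r_0),\; A^{2}T(r_0),\; \dots,\; A^{k-1}T(r_0) \,\}.

    Since this space contains the classical Krylov subspace K_k(A, r_0), minimizing over it gives, in exact arithmetic, an error no larger than that of standard CG at the same iteration, at the price of handling t search directions per iteration.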

    Scalable Linear Solvers Based on Enlarged Krylov Subspaces with Dynamic Reduction of Search Directions

    Krylov methods are widely used for solving large sparse linear systems of equations. On distributed architectures, their performance is limited by the communication needed at each iteration of the algorithm. In this paper, we study the use of so-called enlarged Krylov subspaces for reducing the number of iterations, and therefore the overall communication, of Krylov methods. In particular, we consider a reformulation of the conjugate gradient method using these enlarged Krylov subspaces: the enlarged conjugate gradient method. We present the parallel design of two variants of the enlarged conjugate gradient method, as well as their corresponding dynamic versions, where the number of search directions is dynamically reduced during the iterations. For a linear elasticity problem with heterogeneous coefficients, using a block Jacobi preconditioner, we show that this implementation scales up to 16,384 cores and is up to 6.9 times faster than the PETSc implementation of PCG.
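    To make the enlargement concrete, here is a minimal, hypothetical NumPy sketch (not the authors' implementation) of the splitting operator that turns the initial residual into t vectors, assuming the unknowns are partitioned into t subdomains given as index sets:

        import numpy as np

        def enlarge(r, domains):
            """Split a residual vector r into t columns, one per subdomain:
            column i agrees with r on the rows of domain i and is zero elsewhere."""
            t = len(domains)
            R = np.zeros((r.shape[0], t))
            for i, rows in enumerate(domains):
                R[rows, i] = r[rows]
            return R  # the columns of R sum to r, so the span of R contains r

        # Example: six unknowns partitioned into t = 2 subdomains
        r0 = np.arange(1.0, 7.0)
        R0 = enlarge(r0, [np.arange(0, 3), np.arange(3, 6)])

    Enlarged CG then searches over blocks of t directions generated from these columns, which is what reduces the number of iterations and hence the number of global synchronizations.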

    An efficient method for constructing an ILU preconditioner for solving large sparse nonsymmetric linear systems by the GMRES method

    The main idea of this paper is the determination of the pattern of nonzero elements of the LU factors of a given matrix A. The idea is based on taking powers of the Boolean matrix derived from A. This powers-of-a-Boolean-matrix strategy (PBS) is an efficient, effective, and inexpensive approach. The construction of an ILU preconditioner using PBS is described, and the preconditioner is used in solving large nonsymmetric sparse linear systems. Its effectiveness in solving such systems by the GMRES method is also shown. Numerical experiments show that the number of GMRES iterations can be reduced considerably when the ILU preconditioner constructed here is used. In the numerical examples, the influence of k, the dimension of the Krylov subspace, on the performance of the GMRES method with this ILU preconditioner is tested. For all the tests carried out, the best value of k is found to be 10.
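    As an illustration only (the paper's algorithm may differ in its details), a small SciPy sketch of the powers-of-a-Boolean-matrix idea: the nonzero structure of (B + I)^p, where B is the Boolean pattern of A, serves as the allowed sparsity pattern for the ILU factors. The function name, the parameter name power (written p above, to avoid clashing with the k of GMRES(k)), and the default power = 2 are assumptions made for this example; A is assumed to be a SciPy CSR matrix.

        import scipy.sparse as sp

        def pbs_pattern(A, power=2):
            """Nonzero structure of (B + I)^power, where B is the Boolean pattern of A
            (A is assumed to be a SciPy CSR matrix)."""
            n = A.shape[0]
            B = (A != 0).astype('int64') + sp.eye(n, dtype='int64', format='csr')
            P = B.copy()
            for _ in range(power - 1):
                P = P @ B          # only the nonzero structure of the product matters
            P.data[:] = 1          # discard the counts, keep the pattern
            return P.tocsr()       # candidate fill-in positions for the ILU factors

    The resulting pattern would then restrict which positions the incomplete factorization is allowed to fill, in the spirit of level-of-fill ILU.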

    Lanczos-type solvers for nonsymmetric linear systems of equations

    Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those that are based on the Lanczos process feature short recurrences for the generation of the Krylov space. This means low cost and low memory requirements. This review article introduces the reader not only to the basic forms of the Lanczos process and some of the related theory, but also describes in detail a number of solvers that are based on it, including those that are considered to be the most efficient ones. Possible breakdowns of the algorithms and ways to cure them by look-ahead are also discussed.
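    For orientation, a minimal NumPy sketch of the two-sided (nonsymmetric) Lanczos process that these solvers build on; the scaling convention below is one common choice rather than necessarily the one used in the article, and the fixed breakdown tolerance is a simplification. The near-zero test marks exactly the kind of (near-)breakdown that the look-ahead techniques discussed in the review are designed to cure.

        import numpy as np

        def nonsymmetric_lanczos(A, v, w, m, tol=1e-14):
            """Two-sided (nonsymmetric) Lanczos: builds biorthogonal bases with
            three-term (short) recurrences and returns the tridiagonal entries."""
            v = v / np.linalg.norm(v)
            w = w / np.dot(w, v)                 # enforce w^T v = 1 (assumes w^T v != 0)
            v_old = np.zeros_like(v)
            w_old = np.zeros_like(w)
            beta = gamma = 0.0
            alphas, betas, gammas = [], [], []
            for _ in range(m):
                alpha = w @ (A @ v)
                v_hat = A @ v - alpha * v - beta * v_old
                w_hat = A.T @ w - alpha * w - gamma * w_old
                delta = w_hat @ v_hat
                if abs(delta) < tol:             # (near-)breakdown: what look-ahead cures
                    break
                gamma_new = np.sqrt(abs(delta))  # scaling for the next v
                beta_new = delta / gamma_new     # scaling for the next w, keeps w^T v = 1
                v_old, w_old = v, w
                v, w = v_hat / gamma_new, w_hat / beta_new
                alphas.append(alpha)
                betas.append(beta_new)
                gammas.append(gamma_new)
                beta, gamma = beta_new, gamma_new
            return alphas, betas, gammas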