
    Nonlinear CG-like iterative methods

    Abstract: A nonlinear conjugate gradient method was introduced and analyzed by J.W. Daniel. That method applies to nonlinear operators with symmetric Jacobians. Orthomin(1) is an iterative method which applies to nonsymmetric and definite linear systems. In this article we generalize Orthomin(1) to a method which applies directly to nonlinear operator equations. Each iteration of the new method requires the solution of a scalar nonlinear equation. Under the conditions that the Hessian is uniformly bounded away from zero and the Jacobian is uniformly positive definite, the new method is proved to converge to a globally unique solution. Error bounds and local convergence results are also obtained. Numerical experiments on solving nonlinear operator equations arising in the discretization of nonlinear elliptic partial differential equations are presented.
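The abstract's key structural point, that each iteration reduces to solving one scalar nonlinear equation for the step length, can be illustrated with a small sketch. This is a hypothetical, simplified descent iteration, not the paper's Orthomin(1) generalization itself; the operator `F`, the bisection line search, and all tolerances are illustrative assumptions.

```python
def bisect(g, a, b, tol=1e-12, max_it=200):
    """Find a root of g in [a, b]; assumes g(a) and g(b) differ in sign."""
    ga = g(a)
    for _ in range(max_it):
        m = 0.5 * (a + b)
        gm = g(m)
        if abs(gm) < tol or b - a < tol:
            return m
        if ga * gm <= 0.0:
            b = m
        else:
            a, ga = m, gm
    return 0.5 * (a + b)

def solve_nonlinear(F, x, n_iter=60):
    """Descent iteration for F(x) = 0: take d = -F(x) and choose the
    scalar step t by solving the ONE scalar nonlinear equation
    d . F(x + t d) = 0, mirroring the per-iteration cost described
    in the abstract."""
    for _ in range(n_iter):
        d = [-fi for fi in F(x)]
        if max(abs(di) for di in d) < 1e-14:
            break
        def g(t):
            y = [xi + t * di for xi, di in zip(x, d)]
            return sum(di * fi for di, fi in zip(d, F(y)))
        # g(0) = -|F(x)|^2 < 0; expand b until g(b) > 0, which holds for
        # operators whose Jacobian is uniformly positive definite.
        b = 1.0
        while g(b) < 0.0:
            b *= 2.0
        t = bisect(g, 0.0, b)
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

# A small monotone operator with a symmetric positive definite Jacobian.
def F(x):
    return [2.0 * x[0] + x[0] ** 3 - 1.0,
            3.0 * x[1] + x[1] ** 3 - 2.0]

x = solve_nonlinear(F, [0.0, 0.0])
residual = max(abs(fi) for fi in F(x))
```

For this separable, monotone example the iteration drives the residual quickly toward zero; the positive-definite-Jacobian assumption is what guarantees the scalar equation has a root.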

    Breakdowns in the implementation of the Lánczos method for solving linear systems

    Abstract: The Lánczos method for solving systems of linear equations is based on formal orthogonal polynomials. Its implementation is realized via recurrence relationships between polynomials of a family of orthogonal polynomials, or between those of two adjacent families of orthogonal polynomials. A division by zero can occur in such recurrence relations, causing a breakdown that forces the algorithm to stop. In this paper, two types of breakdowns are discussed: the true breakdowns, which are due to the nonexistence of some of the polynomials, and the ghost breakdowns, which are due to the particular recurrence relationship used. Among all the recurrence relationships that can be used, and all the algorithms for implementing the Lánczos method derived from them, the only reliable algorithm is Lánczos/Orthodir, which can only suffer from true breakdowns. It is shown how to avoid true breakdowns in this algorithm. Other algorithms are also discussed, and the case of near-breakdown is treated. The same treatment applies to other methods related to Lánczos'.
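Where the divisions by zero live can be seen concretely in BiCG, a representative Lánczos-based solver, used here as an illustrative stand-in (the abstract's Lánczos/Orthodir algorithm itself is not shown). The test matrix, tolerances, and breakdown thresholds are assumptions.

```python
class LanczosBreakdown(Exception):
    """Raised when a divisor in the recurrences (nearly) vanishes."""

def bicg(A, b, tol=1e-10, max_it=50, eps=1e-14):
    """BiCG with explicit breakdown checks.  The coefficient
    rho = rtilde . r vanishing while r != 0 corresponds to a true
    (serious) breakdown of the underlying Lanczos process; the divisor
    sigma = ptilde . A p vanishing is a pivot breakdown tied to the
    recurrence used.  Sketch only: a production code would cure
    near-breakdowns by look-ahead, as discussed above."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    At = [[A[j][i] for j in range(n)] for i in range(n)]  # transpose
    x = [0.0] * n
    r = b[:]               # residual b - A x0 with x0 = 0
    rt = r[:]              # shadow residual (arbitrary choice)
    p, pt = r[:], rt[:]
    rho = dot(rt, r)
    for _ in range(max_it):
        if abs(rho) < eps:
            raise LanczosBreakdown("rho ~ 0: true breakdown")
        q, qt = matvec(A, p), matvec(At, pt)
        sigma = dot(pt, q)
        if abs(sigma) < eps:
            raise LanczosBreakdown("sigma ~ 0: pivot breakdown")
        alpha = rho / sigma
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        rt = [ri - alpha * qi for ri, qi in zip(rt, qt)]
        if max(abs(ri) for ri in r) < tol:
            break
        rho_new = dot(rt, r)
        beta = rho_new / rho
        rho = rho_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        pt = [ri + beta * pi for ri, pi in zip(rt, pt)]
    return x

A = [[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]
b = [1.0, 2.0, 3.0]
x = bicg(A, b)
```

On this generic nonsymmetric system neither divisor vanishes and the method converges; a breakdown would surface as one of the two exceptions above rather than as a silent division by zero.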

    Reducing the Communication and Computation Costs of the Conjugate Gradient in Enlarged Krylov Subspaces

    In this paper we propose an algebraic method to dynamically reduce the number of search directions during block Conjugate Gradient iterations. By monitoring the rank of the optimal step α_k it is possible to detect inexact breakdowns and remove the corresponding search directions. We also propose an algebraic criterion that in theory ensures the equivalence between our method with dynamic reduction of the search directions and the classical block Conjugate Gradient. Numerical experiments show that the method is both stable (the number of iterations with or without reduction is of the same order) and effective (the search space is significantly reduced). We use this approach in the context of enlarged Krylov subspace methods, which reduce communication when implemented on large-scale machines. The reduction of the number of search directions further reduces the computation cost and the memory usage of those methods.
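The rank-monitoring idea can be sketched as follows. This is a hypothetical illustration, not the paper's criterion: a tiny Gaussian-elimination rank test applied to a stand-in for the small optimal-step matrix α_k. A real implementation would likely use a rank-revealing factorization (e.g. an SVD) and a relative tolerance.

```python
def numerical_rank(M, tol=1e-10):
    """Numerical rank of a small matrix via Gaussian elimination with
    partial pivoting.  Illustrative only: the absolute tolerance and
    the elimination-based test are simplifying assumptions."""
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    rank, row = 0, 0
    for col in range(n):
        piv = max(range(row, m), key=lambda i: abs(A[i][col]))
        if abs(A[piv][col]) <= tol:
            continue  # no usable pivot in this column
        A[row], A[piv] = A[piv], A[row]
        for i in range(row + 1, m):
            f = A[i][col] / A[row][col]
            for j in range(col, n):
                A[i][j] -= f * A[row][j]
        rank += 1
        row += 1
        if row == m:
            break
    return rank

# A 3x3 stand-in for the optimal step alpha_k of a block of 3 search
# directions; its second column is a multiple of the first, so the step
# has numerical rank 2: an inexact breakdown, and one direction can be
# removed from the block.
alpha_k = [[1.0, 2.0, 0.0],
           [2.0, 4.0, 1.0],
           [3.0, 6.0, 0.0]]
deficiency = len(alpha_k[0]) - numerical_rank(alpha_k)
```

Here `deficiency` counts how many search directions the dynamic reduction would drop at this iteration.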

    Linear iterative solvers for implicit ODE methods

    The numerical solution of stiff initial value problems, which leads to the problem of solving large systems of mildly nonlinear equations, is considered. For many problems derived from engineering and science, a solution is possible only with methods derived from iterative linear equation solvers. A common approach to solving the nonlinear equations is to employ an approximate solution obtained from an explicit method. The error is examined to determine how it is distributed among the stiff and non-stiff components, which bears on the choice of an iterative method. The conclusion is that the error is (roughly) uniformly distributed, a fact that suggests the Chebyshev method (and the accompanying Manteuffel adaptive parameter algorithm). This method is described, with comments on Richardson's method and its advantages for large problems. Richardson's method and the Chebyshev method with the Manteuffel algorithm are applied to the solution of the nonlinear equations by Newton's method.
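A minimal sketch of the Chebyshev method for a symmetric positive definite system, assuming the eigenvalue bounds are known in advance; the Manteuffel algorithm mentioned above, which estimates such bounds adaptively, is not shown, and the test matrix and iteration count are illustrative.

```python
def chebyshev(matvec, b, lam_min, lam_max, n_iter=60):
    """Chebyshev iteration for A x = b with the spectrum of A known to
    lie in [lam_min, lam_max].  Unlike CG it needs no inner products,
    only the two spectral bounds, which is part of its appeal for large
    problems."""
    theta = 0.5 * (lam_max + lam_min)   # center of the spectrum
    delta = 0.5 * (lam_max - lam_min)   # half-width
    n = len(b)
    x = [0.0] * n
    r = b[:]                            # residual for x0 = 0
    sigma = theta / delta
    rho = 1.0 / sigma
    d = [ri / theta for ri in r]
    for _ in range(n_iter):
        x = [xi + di for xi, di in zip(x, d)]
        Ad = matvec(d)
        r = [ri - adi for ri, adi in zip(r, Ad)]
        # Three-term Chebyshev recurrence for the next update direction.
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = [rho_new * rho * di + (2.0 * rho_new / delta) * ri
             for di, ri in zip(d, r)]
        rho = rho_new
    return x, r

# Diagonal SPD test problem with spectrum inside [1, 4].
diag = [1.0, 2.0, 4.0]
mv = lambda v: [a * vi for a, vi in zip(diag, v)]
b = [1.0, 1.0, 1.0]
x, r = chebyshev(mv, b, 1.0, 4.0)
```

With condition number 4 the asymptotic contraction factor is about 1/3 per iteration, so 60 iterations drive the residual far below rounding level on this toy problem.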

    Scalable Linear Solvers Based on Enlarged Krylov Subspaces with Dynamic Reduction of Search Directions

    Krylov methods are widely used for solving large sparse linear systems of equations. On distributed architectures, their performance is limited by the communication needed at each iteration of the algorithm. In this paper, we study the use of so-called enlarged Krylov subspaces for reducing the number of iterations, and therefore the overall communication, of Krylov methods. In particular, we consider a reformulation of the conjugate gradient method using these enlarged Krylov subspaces: the enlarged conjugate gradient method. We present the parallel design of two variants of the enlarged conjugate gradient method, as well as their corresponding dynamic versions, where the number of search directions is dynamically reduced during the iterations. For a linear elasticity problem with heterogeneous coefficients, using a block Jacobi preconditioner, we show that this implementation scales up to 16,384 cores and is up to 6.9 times faster than the PETSc implementation of PCG.
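The "enlarging" step can be sketched in miniature. The splitting below, with contiguous index blocks standing in for a real domain-decomposition partition, is an illustrative assumption, not the paper's implementation.

```python
def split_residual(r, t):
    """Split r into t vectors supported on contiguous index blocks.

    This is the enlarging operator of enlarged Krylov methods in
    miniature: the iteration starts from t subdomain-local pieces of
    the initial residual instead of the single vector r, so every
    iteration adds t directions to the search space (fewer iterations,
    hence less communication, at the price of more work per iteration).
    """
    n = len(r)
    bounds = [i * n // t for i in range(t + 1)]
    cols = []
    for k in range(t):
        v = [0.0] * n
        for i in range(bounds[k], bounds[k + 1]):
            v[i] = r[i]
        cols.append(v)
    return cols

r = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
block = split_residual(r, 3)
# The pieces sum back to r, so the enlarged subspace contains the
# ordinary Krylov subspace generated by r.
recombined = [sum(col[i] for col in block) for i in range(len(r))]
```

The number of subdomains `t` trades per-iteration cost against iteration count, which is exactly what the dynamic reduction of search directions then optimizes during the run.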

    Scalable Linear Solvers Based on Enlarged Krylov Subspaces with Dynamic Reduction of Search Directions

    Krylov methods are widely used for solving large sparse linear systems of equations. On distributed architectures, their performance is limited by the communication needed at each iteration of the algorithm. In this paper, we study the use of so-called enlarged Krylov subspaces for reducing the number of iterations, and therefore the overall communication, of Krylov methods. In particular, we consider a reformulation of the Conjugate Gradient method using these enlarged Krylov subspaces: the enlarged Conjugate Gradient method. We present the parallel design of two variants of the enlarged Conjugate Gradient method, as well as their corresponding dynamic versions, where the number of search directions is dynamically reduced during the iterations. For a linear elasticity problem with heterogeneous coefficients, using a block Jacobi preconditioner, we show that this implementation scales up to 16,384 cores, and is up to 5.7 times faster than the PETSc implementation of PCG.

    Closer to the solutions: iterative linear solvers

    The solution of dense linear systems received much attention after the Second World War, and by the end of the sixties most of the problems associated with it had been solved. For a long time, Wilkinson's "The Algebraic Eigenvalue Problem" [107], contrary to what its title suggests, also served as the standard textbook for the solution of linear systems. When it became clear that partial differential equations could be solved numerically, to a level of accuracy that was of interest for application areas (such as reservoir engineering and reactor diffusion modeling), there was a strong need for the fast solution of the discretized systems, and iterative methods became popular for these problems.