    Preconditioned Minimal Residual Methods for Chebyshev Spectral Calculations

    The problem of preconditioning the pseudospectral Chebyshev approximation of an elliptic operator is considered. The numerical sensitivity to variations of the coefficients of the operator is investigated for two classes of preconditioning matrices: one arising from finite differences, the other from finite elements. The preconditioned system is solved by a conjugate gradient type method and by a DuFort-Frankel method with dynamical parameters. The methods are compared on some test problems with the Richardson method and with the minimal residual Richardson method.
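The minimal residual Richardson iteration mentioned above can be sketched in a few lines: each step applies the preconditioner to the residual and picks the step length that minimizes the new residual norm. This is a minimal NumPy sketch; the 1D Laplacian test matrix and the diagonal (Jacobi) preconditioner are illustrative stand-ins, not the paper's Chebyshev operator or its finite-difference/finite-element preconditioners.

```python
import numpy as np

def minres_richardson(A, b, P, tol=1e-9, maxit=20000):
    """Preconditioned Richardson iteration with a dynamically chosen
    (minimal-residual) step length: x += alpha * z with z = P^{-1} r,
    where alpha = <r, Az> / <Az, Az> minimizes ||r - alpha * A z||_2."""
    x = np.zeros_like(b)
    r = b - A @ x
    nb = np.linalg.norm(b)
    its = 0
    while np.linalg.norm(r) > tol * nb and its < maxit:
        z = np.linalg.solve(P, r)        # apply the preconditioner
        Az = A @ z
        alpha = (r @ Az) / (Az @ Az)     # minimal-residual step length
        x += alpha * z
        r -= alpha * Az
        its += 1
    return x, its

# Toy elliptic problem: 1D Laplacian, Jacobi preconditioner as a stand-in
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.diag(np.diag(A))
b = np.ones(n)
x, its = minres_richardson(A, b, P)
```

For symmetric positive definite systems, the residual norm contracts by at least (kappa - 1)/(kappa + 1) per step, where kappa is the condition number of the preconditioned operator, which is why a good preconditioner matters far more than the choice of dynamical parameter.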

    A framework for deflated and augmented Krylov subspace methods

    We consider deflation and augmentation techniques for accelerating the convergence of Krylov subspace methods for the solution of nonsingular linear algebraic systems. Despite some formal similarity, the two techniques are conceptually different from preconditioning. Deflation (in the sense the term is used here) "removes" certain parts from the operator, making it singular, while augmentation adds a subspace to the Krylov subspace (often one generated by the singular operator); in contrast, preconditioning changes the spectrum of the operator without making it singular. Deflation and augmentation have been used in a variety of methods and settings. Typically, deflation is combined with augmentation to compensate for the singularity of the operator, but both techniques can be applied separately. We introduce a framework of Krylov subspace methods that satisfy a Galerkin condition. It includes the families of orthogonal residual (OR) and minimal residual (MR) methods. We show that in this framework augmentation can be achieved either explicitly or, equivalently, implicitly by projecting the residuals appropriately and correcting the approximate solutions in a final step. We study conditions for a breakdown of the deflated methods, and we show several possibilities to avoid such breakdowns for the deflated MINRES method. Numerical experiments illustrate properties of different variants of deflated MINRES analyzed in this paper. (24 pages, 3 figures.)
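The deflation-plus-augmentation pattern described above can be illustrated with CG rather than MINRES: a projector makes the operator singular by removing a chosen subspace, the Krylov method runs on the projected (singular) system, and a final correction step adds the solution component from the removed subspace back. This is a minimal NumPy sketch under simplifying assumptions (symmetric positive definite matrix, exact eigenvectors as the deflation space); it is not the paper's deflated MINRES.

```python
import numpy as np

def deflated_cg(A, b, U, tol=1e-10, maxit=500):
    """CG on the deflated, singular operator P A, where
    P = I - A U E^{-1} U^T (E = U^T A U) "removes" span(U),
    followed by an explicit correction from the deflation space."""
    AU = A @ U
    E = U.T @ AU                          # small Galerkin matrix U^T A U
    def P(v):                             # deflation projector (P A is sym. PSD)
        return v - AU @ np.linalg.solve(E, U.T @ v)
    def Pt(v):                            # its transpose, used in the correction
        return v - U @ np.linalg.solve(E, AU.T @ v)
    y = np.zeros_like(b)
    r = P(b)                              # solve the singular system P A y = P b
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    for _ in range(maxit):
        if np.sqrt(rs) <= tol * nb:
            break
        Ap = P(A @ p)
        alpha = rs / (p @ Ap)
        y += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    # correction step: x = U E^{-1} U^T b + P^T y solves the original A x = b
    return U @ np.linalg.solve(E, U.T @ b) + Pt(y)

# Toy problem: 1D Laplacian, deflating the four smallest eigenpairs
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
w, V = np.linalg.eigh(A)
U = V[:, :4]
x = deflated_cg(A, b, U)
```

Because A P^T = P A here, the corrected vector satisfies the original system exactly whenever the projected system is solved exactly, which is the "implicit augmentation plus final correction" equivalence the framework formalizes.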

    Improving Inversions of the Overlap Operator

    We present relaxation and preconditioning techniques which accelerate the inversion of the overlap operator by a factor of four on small lattices, with larger gains as the lattice size increases. These improvements can be used in both propagator calculations and dynamical simulations. (Lattice 2004, machines.)

    Improving the dynamical overlap algorithm

    We present algorithmic improvements to the overlap Hybrid Monte Carlo algorithm, including preconditioning techniques and improvements to the correction step used when one of the eigenvalues of the kernel operator changes sign, which is now exact to O(\Delta t^2). (6 pages, 3 figures; poster contribution at Lattice 2005, Algorithms and Machines.)

    Multi-step derivative-free preconditioned Newton method for solving systems of nonlinear equations

    Preconditioning a system of nonlinear equations modifies the associated Jacobian and provides rapid convergence. The preconditioners are introduced so that they do not affect the convergence order of the parent iterative method. The multi-step derivative-free iterative method consists of a base method and a multi-step part. In the base method, the Jacobian of the system of nonlinear equations is approximated by a finite difference operator, and the preconditioners add an extra term to modify it. Explicit inversion of the modified finite difference operator is avoided by computing its LU factors. Once we have the LU factors, we reuse them to solve the lower and upper triangular systems in the multi-step part, which enhances the convergence order. The convergence order of the m-step Newton iterative method is m + 1. The claimed convergence orders are verified by computing the computational order of convergence, and numerical simulations clearly show that a good choice of preconditioner provides numerical stability, accuracy, and rapid convergence.
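The base-plus-multi-step structure described above can be sketched briefly: the finite-difference Jacobian is factored once per outer iteration and the factorization is reused for all m inner corrections. This is a minimal NumPy sketch without the paper's preconditioning term, and it reuses an explicit inverse as a stand-in for stored LU factors; the toy system and starting point are illustrative.

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Derivative-free (forward finite-difference) Jacobian approximation."""
    n = x.size
    F0 = F(x)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - F0) / h
    return J

def multistep_newton(F, x, m=3, iters=10, tol=1e-12):
    """m-step derivative-free Newton sketch: the FD Jacobian is factored
    once per outer iteration and reused for all m inner corrections,
    giving convergence order m + 1."""
    for _ in range(iters):
        J = fd_jacobian(F, x)
        Jinv = np.linalg.inv(J)      # stand-in for computing LU factors once
        for _ in range(m):           # multi-step part reuses the factorization
            x = x - Jinv @ F(x)
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Toy system with a root at (1, 2): x0^2 + x1 = 3,  x0 + x1^2 = 5
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
root = multistep_newton(F, np.array([1.5, 1.5]))
```

The point of the design is cost: each extra inner step adds only two triangular solves (here one matrix-vector product with the stored inverse) yet raises the convergence order by one.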

    Approximate polynomial preconditioning applied to biharmonic equations on vector supercomputers

    Applying a finite difference approximation to a biharmonic equation results in a very ill-conditioned system of equations. This paper examines the conjugate gradient method used in conjunction with generalized and approximate polynomial preconditionings for solving such linear systems. An approximate polynomial preconditioning is introduced and is shown to be more efficient than the generalized polynomial preconditionings. This new technique provides a simple but effective preconditioning polynomial, which is based on another coefficient matrix rather than the original matrix operator, as is commonly used.
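Polynomial preconditioning of CG can be sketched as follows: the preconditioner solve z = M^{-1} r is replaced by a short matrix polynomial applied to r, which needs only matrix-vector products and therefore vectorizes well, the motivation on vector supercomputers. This is a minimal NumPy sketch using a truncated Neumann-series polynomial built from the matrix itself; the paper's approximate variant instead builds the polynomial from another, better-conditioned coefficient matrix. The squared 1D Laplacian is an illustrative stand-in for a biharmonic discretization.

```python
import numpy as np

def poly_prec(A, r, degree=3, omega=None):
    """Apply a truncated Neumann-series polynomial preconditioner:
    z = omega * sum_{k=0}^{degree} (I - omega*A)^k r ~= A^{-1} r,
    using only matrix-vector products (no triangular solves)."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, np.inf)  # keeps spec(I - omega*A) in [0, 1)
    z = omega * r
    t = z.copy()
    for _ in range(degree):
        t = t - omega * (A @ t)      # t <- (I - omega*A) t
        z = z + t
    return z

def pcg(A, b, tol=1e-9, maxit=2000):
    """Conjugate gradient with the polynomial preconditioner above."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = poly_prec(A, r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * nb:
            break
        z = poly_prec(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k + 1

# Biharmonic-like, very ill-conditioned SPD test matrix: squared 1D Laplacian
n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = L @ L
b = np.ones(n)
x, its = pcg(A, b)
```

With this choice of omega the preconditioner is symmetric positive definite, so plain PCG theory applies; the quality of the polynomial, not its degree alone, governs how much the iteration count drops.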