
Modified conjugate gradient method for diagonalising large matrices

Abstract

We present an iterative method to diagonalise large matrices. The basic idea is the same as the conjugate gradient (CG) method, i.e., minimising the Rayleigh quotient via its gradient while avoiding the reintroduction of errors along the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector, the corresponding gradient of the Rayleigh quotient, and some previous trial vectors. The gradient, together with the previous trial vectors, plays a role similar to that of the conjugate gradient in the original CG algorithm. Our numerical tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is well suited for first-principles calculations.

Comment: 6 pages, 2 EPS figures. (To appear in Phys. Rev. E)
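
The following is a minimal sketch, in Python with NumPy, of the kind of subspace iteration the abstract describes: it assumes a real symmetric matrix A and, at each step, builds the subspace spanned by the current trial vector, the gradient of the Rayleigh quotient, and a few previous trial vectors, then takes the lowest eigenvector of the projected matrix as the next trial vector. The function name lowest_eigenpair, the history depth n_prev, the tolerance, and the test matrix are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def lowest_eigenpair(A, n_prev=2, tol=1e-10, max_iter=500):
        """Approximate the lowest eigenpair of a real symmetric matrix A.

        Illustrative sketch of a subspace iteration in the spirit of the
        abstract; not the authors' implementation.
        """
        n = A.shape[0]
        rng = np.random.default_rng(0)
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)
        history = []                          # a few previous trial vectors
        for _ in range(max_iter):
            Ax = A @ x
            rho = x @ Ax                      # Rayleigh quotient (x is normalised)
            g = Ax - rho * x                  # gradient direction of the Rayleigh quotient
            if np.linalg.norm(g) < tol:
                break
            # Subspace: current trial vector, its gradient, previous trial vectors.
            S = np.column_stack([x, g] + history)
            Q, _ = np.linalg.qr(S)            # orthonormal basis of the subspace
            H = Q.T @ A @ Q                   # project A into the small subspace
            w, V = np.linalg.eigh(H)          # small dense eigenproblem
            x_new = Q @ V[:, 0]               # lowest eigenvector of the projection
            x_new /= np.linalg.norm(x_new)
            history = ([x] + history)[:n_prev]
            x = x_new
        return x @ (A @ x), x

    # Example: lowest eigenvalue of a random symmetric test matrix (illustrative).
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        M = rng.standard_normal((200, 200))
        M = (M + M.T) / 2
        lam, v = lowest_eigenpair(M)
        print(lam, np.min(np.linalg.eigvalsh(M)))

This sketch recomputes the projected matrix from scratch each step for clarity; an efficient implementation would cache the matrix-vector products of the stored basis vectors so that each iteration requires essentially one new product, the kind of bookkeeping that keeps the per-step cost comparable to ordinary CG.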
