A recently proposed linear-scaling scheme for density-functional
pseudopotential calculations is described in detail. The method is based on a
formulation of density functional theory in which the ground state energy is
determined by minimization with respect to the density matrix, subject to the
condition that its eigenvalues lie in the range [0,1].
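As a minimal illustration of this eigenvalue condition, the short NumPy sketch
below (small dense matrices stand in for the sparse real-space density matrix of
the actual scheme; the matrix size and eigenvalues are arbitrary choices) applies
the McWeeny purification rho = 3*sigma^2 - 2*sigma^3 that underlies the
Li-Nunes-Vanderbilt construction referred to below: any eigenvalue of the
auxiliary matrix sigma lying in [-1/2, 3/2] is mapped into [0,1], so an
unconstrained search over sigma keeps the physical density matrix in the allowed
range.

    import numpy as np

    def mcweeny_purify(sigma):
        """Return rho = 3*sigma^2 - 2*sigma^3.

        Eigenvalues of sigma in [-1/2, 3/2] are mapped into [0, 1], so a
        density matrix built this way satisfies the eigenvalue condition
        without explicit constraints.
        """
        sigma2 = sigma @ sigma
        return 3.0 * sigma2 - 2.0 * sigma2 @ sigma

    # Symmetric test matrix with eigenvalues inside [-1/2, 3/2] but outside [0, 1].
    eigs = np.array([-0.3, 0.1, 0.5, 0.9, 1.3])
    q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((5, 5)))
    sigma = q @ np.diag(eigs) @ q.T

    rho = mcweeny_purify(sigma)
    print("sigma eigenvalues:", np.round(np.linalg.eigvalsh(sigma), 3))
    print("rho eigenvalues:  ", np.round(np.linalg.eigvalsh(rho), 3))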
Linear-scaling behavior is achieved by requiring that the density matrix
vanish when the separation of its arguments exceeds a chosen cutoff. The
limitation on the eigenvalue range is imposed by the method of Li, Nunes and
Vanderbilt. The scheme is implemented by calculating all terms in the energy on
a uniform real-space grid, and minimization is performed using the
conjugate-gradient method. Tests on a 512-atom Si system show that the total
energy converges rapidly as the range of the density matrix is increased. A
discussion of the relation between the present method and other linear-scaling
methods is given, and some problems that still require solution are indicated.
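To make the origin of the linear scaling concrete, the sketch below (an
illustrative construction, not the paper's implementation; the point positions,
fixed density, and cutoff value are arbitrary assumptions) sets to zero every
matrix element whose two spatial indices are separated by more than a cutoff
r_cut and counts the surviving elements: at fixed density the count per row is
essentially independent of system size, so storage and matrix products grow
only linearly with the number of points.

    import numpy as np

    def cutoff_mask(points, r_cut):
        """Boolean mask that is True where |r_i - r_j| <= r_cut.

        Density-matrix elements outside the mask are constrained to vanish;
        this truncation is what produces the linear-scaling behavior.
        """
        diff = points[:, None, :] - points[None, :, :]
        return np.linalg.norm(diff, axis=-1) <= r_cut

    rng = np.random.default_rng(1)
    r_cut = 2.0  # illustrative cutoff, arbitrary units

    for n in (100, 200, 400):
        # Points at fixed density: the box volume grows in proportion to n.
        box = 10.0 * (n / 100.0) ** (1.0 / 3.0)
        pts = rng.uniform(0.0, box, size=(n, 3))
        nonzero = int(cutoff_mask(pts, r_cut).sum())
        print(f"n = {n:4d}  nonzero elements = {nonzero:6d}  per row = {nonzero / n:.1f}")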