
    Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. We demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
    Comment: Version 2 (with minor edits from version 1).
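
    The scheme described above can be sketched for a generic fixed-point problem x = g(x): apply plain linear mixing on most iterations and replace it with a Pulay/DIIS extrapolation over the stored residual history every few steps. The Python sketch below is illustrative only; the function and parameter names (alpha for the mixing fraction, period for the Pulay interval, history for the DIIS window) are assumptions, not the authors' implementation.

        import numpy as np

        def periodic_pulay(g, x0, alpha=0.3, period=4, history=6, tol=1e-10, max_iter=200):
            """Illustrative sketch: linear mixing on every iteration, with a
            Pulay/DIIS extrapolation over the residual history every
            `period`-th iteration (assumed names and defaults)."""
            x = np.asarray(x0, dtype=float)
            X, F = [], []                        # iterate and residual histories
            for it in range(1, max_iter + 1):
                f = g(x) - x                     # fixed-point residual
                if np.linalg.norm(f) < tol:
                    return x, it
                X.append(x); F.append(f)
                if len(X) > history:             # keep a bounded history
                    X.pop(0); F.pop(0)
                if it % period == 0 and len(F) > 1:
                    # DIIS step: choose c minimising ||sum_i c_i f_i|| subject to
                    # sum_i c_i = 1, via the usual Lagrange-multiplier linear system.
                    m = len(F)
                    B = np.empty((m + 1, m + 1))
                    B[:m, :m] = np.array(F) @ np.array(F).T
                    B[:m, m] = 1.0
                    B[m, :m] = 1.0
                    B[m, m] = 0.0
                    rhs = np.zeros(m + 1); rhs[m] = 1.0
                    c = np.linalg.solve(B, rhs)[:m]
                    x = sum(c[i] * (X[i] + alpha * F[i]) for i in range(m))
                else:
                    x = x + alpha * f            # plain linear (simple) mixing
            return x, max_iter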

    A Fast Anderson-Chebyshev Acceleration for Nonlinear Optimization

    Anderson acceleration (or Anderson mixing) is an efficient acceleration method for fixed-point iterations $x_{t+1}=G(x_t)$; e.g., gradient descent can be viewed as iteratively applying the operation $G(x) \triangleq x-\alpha\nabla f(x)$. It is known that Anderson acceleration is quite efficient in practice and can be viewed as an extension of Krylov subspace methods to nonlinear problems. In this paper, we show that Anderson acceleration with Chebyshev polynomials can achieve the optimal convergence rate $O(\sqrt{\kappa}\ln\frac{1}{\epsilon})$, which improves the previous result $O(\kappa\ln\frac{1}{\epsilon})$ of Toth and Kelley (2015) for quadratic functions. Moreover, we provide a convergence analysis for minimizing general nonlinear problems. In addition, if the hyperparameters (e.g., the Lipschitz smoothness parameter $L$) are not available, we propose a guessing algorithm that estimates them dynamically and prove a similar convergence rate. Finally, experimental results demonstrate that the proposed Anderson-Chebyshev acceleration method converges significantly faster than other algorithms, e.g., vanilla gradient descent (GD) and Nesterov's accelerated GD. These algorithms combined with the proposed guessing algorithm also achieve much better performance.
    Comment: To appear in AISTATS 2020.
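
    For reference, plain windowed Anderson acceleration applied to the gradient-descent map $G(x) = x-\alpha\nabla f(x)$ can be sketched as follows. This shows only the standard Anderson update, not the Chebyshev-weighted variant or the hyperparameter-guessing scheme analysed in the paper; the names and default values are assumptions.

        import numpy as np

        def anderson_gd(grad, x0, alpha=0.1, m=5, tol=1e-8, max_iter=500):
            """Sketch of windowed Anderson acceleration for the fixed-point
            map G(x) = x - alpha*grad(x) (assumed names and defaults)."""
            G = lambda x: x - alpha * grad(x)
            x = np.asarray(x0, dtype=float)
            R, Gs = [], []                       # residual and G-value histories
            for it in range(1, max_iter + 1):
                gx = G(x)
                r = gx - x                       # fixed-point residual
                if np.linalg.norm(r) < tol:
                    return x, it
                R.append(r); Gs.append(gx)
                if len(R) > m + 1:               # keep at most m differences
                    R.pop(0); Gs.pop(0)
                if len(R) > 1:
                    # Least-squares mixing: minimise ||r - dR @ gamma||.
                    dR = np.column_stack([R[i + 1] - R[i] for i in range(len(R) - 1)])
                    dG = np.column_stack([Gs[i + 1] - Gs[i] for i in range(len(Gs) - 1)])
                    gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
                    x = gx - dG @ gamma          # Anderson-accelerated update
                else:
                    x = gx                       # plain fixed-point step
            return x, max_iter

    For a quadratic $f(x)=\frac{1}{2}x^{T}Ax-b^{T}x$ one would pass grad = lambda x: A @ x - b, which is the quadratic setting in which the $O(\sqrt{\kappa}\ln\frac{1}{\epsilon})$ rate above is stated.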

    A short report on preconditioned Anderson acceleration method

    In this report, we present a versatile and efficient preconditioned Anderson acceleration (PAA) method for fixed-point iterations. The proposed framework offers flexibility in balancing convergence rates (linear, super-linear, or quadratic) against the computational costs related to the Jacobian matrix. Our approach recovers various fixed-point iteration techniques, including Picard, Newton, and quasi-Newton iterations. The PAA method can be interpreted as employing Anderson acceleration (AA) as its own preconditioner or as an accelerator for quasi-Newton methods when their convergence is insufficient. Adaptable to a wide range of problems with differing degrees of nonlinearity and complexity, the method achieves improved convergence rates and robustness by incorporating suitable preconditioners. We test multiple preconditioning strategies on various problems and investigate a delayed update strategy for preconditioners to further reduce the computational costs.
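
    One way to picture the preconditioning idea is to wrap the original fixed-point map g into a preconditioned map and then run Anderson acceleration on the wrapper instead of on g itself. The sketch below is a loose illustration under that reading, not the paper's exact formulation; preconditioned_map and precond are assumed names.

        import numpy as np

        def preconditioned_map(g, precond):
            """Assumed sketch: wrap a fixed-point map g into
            x -> x - P(x) @ (x - g(x)), where precond(x) returns an
            approximate inverse Jacobian of the residual x - g(x). The
            identity recovers Picard iteration; the exact inverse Jacobian
            recovers a Newton step."""
            def g_pre(x):
                residual = x - g(x)               # fixed-point residual
                return x - precond(x) @ residual  # preconditioned correction
            return g_pre

        # Identity preconditioner: the wrapped map reduces to plain Picard iteration.
        picard_like = lambda g, n: preconditioned_map(g, lambda x: np.eye(n))

    Under this reading, the delayed update strategy mentioned above would amount to refreshing the matrix returned by precond only every few outer iterations rather than at every step.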

    Restarted Nonnegativity Preserving Tensor Splitting Methods via Relaxed Anderson Acceleration for Solving Multi-linear Systems

    Multilinear systems play an important role in scientific computations for practical problems. In this paper, we consider a tensor splitting method with a relaxed Anderson acceleration for solving multilinear systems. The new method preserves nonnegativity at every iteration and improves on existing methods. Furthermore, a convergence analysis of the proposed method is given. Numerical experiments show that the new algorithm performs effectively.
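
    A rough illustration of a restarted, nonnegativity-preserving use of relaxed Anderson acceleration is sketched below. It assumes a user-supplied splitting step step(x) that is itself nonnegativity-preserving, blends the accelerated iterate with the plain one through a relaxation weight beta, and restarts (clears the history and falls back to the plain step) whenever the accelerated candidate leaves the nonnegative orthant. All names, defaults, and the restart rule are assumptions, not the paper's exact algorithm.

        import numpy as np

        def restarted_relaxed_aa(step, x0, beta=0.5, m=3, tol=1e-10, max_iter=500):
            """Illustrative sketch: relaxed Anderson acceleration of a
            nonnegativity-preserving splitting iteration, with a restart
            whenever acceleration would produce negative entries."""
            x = np.asarray(x0, dtype=float)
            R, S = [], []                        # residual and step-value histories
            for it in range(1, max_iter + 1):
                s = step(x)                      # plain splitting iterate
                r = s - x                        # fixed-point residual
                if np.linalg.norm(r) < tol:
                    return x, it
                R.append(r); S.append(s)
                if len(R) > m + 1:
                    R.pop(0); S.pop(0)
                x_new = s                        # default: plain splitting step
                if len(R) > 1:
                    dR = np.column_stack([R[i + 1] - R[i] for i in range(len(R) - 1)])
                    dS = np.column_stack([S[i + 1] - S[i] for i in range(len(S) - 1)])
                    gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
                    x_aa = s - dS @ gamma                    # Anderson extrapolation
                    cand = (1.0 - beta) * s + beta * x_aa    # relaxed combination
                    if np.all(cand >= 0):
                        x_new = cand
                    else:                        # restart to preserve nonnegativity
                        R.clear(); S.clear()
                x = x_new
            return x, max_iter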