
    Parallel eigensolvers in plane-wave Density Functional Theory

    We consider the problem of parallelizing electronic structure computations in plane-wave Density Functional Theory. Because of the limited scalability of Fourier transforms, parallelism has to be found at the eigensolver level. We show how a recently proposed algorithm based on Chebyshev polynomials can scale to tens of thousands of processors, outperforming block conjugate gradient algorithms for large computations.
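    A minimal sketch of the kind of Chebyshev polynomial filter such an eigensolver is built around, assuming a plain non-scaled three-term recurrence and a dense toy Hamiltonian in place of the FFT-based operator of a real plane-wave code; function names, bounds, and the toy problem are illustrative, not the paper's implementation.

```python
import numpy as np

def chebyshev_filter(apply_H, X, degree, a, b):
    """Non-scaled Chebyshev filter of the given degree applied to the block X.

    apply_H(V) must return H @ V; in a plane-wave code this would be the
    FFT-based Hamiltonian apply.  [a, b] is the part of the spectrum to damp,
    so components below a are amplified and repeated filter/orthonormalize
    steps steer X toward the lowest eigenvectors.
    """
    e = (b - a) / 2.0          # half-width of the damped interval
    c = (b + a) / 2.0          # its center
    Y = (apply_H(X) - c * X) / e
    for _ in range(2, degree + 1):
        # Three-term Chebyshev recurrence T_{k+1} = 2 t T_k - T_{k-1},  t = (H - cI)/e
        Y_new = 2.0 * (apply_H(Y) - c * Y) / e - X
        X, Y = Y, Y_new
    return Y

# Toy usage: a small dense symmetric "Hamiltonian" stands in for the real operator.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 200))
H = (H + H.T) / 2
evals = np.linalg.eigvalsh(H)
X = np.linalg.qr(rng.standard_normal((200, 8)))[0]
for _ in range(20):
    X = np.linalg.qr(chebyshev_filter(lambda V: H @ V, X, 8, evals[8], evals[-1]))[0]
```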

    Spectral Analysis of Large Finite Element Problems by Optimization Methods


    Block Iterative Eigensolvers for Sequences of Correlated Eigenvalue Problems

    In Density Functional Theory simulations based on the LAPW method, each self-consistent field cycle comprises dozens of large dense generalized eigenproblems. In contrast to real-space methods, the eigenpairs of problems at distinct cycles have either been believed to be independent or at most very loosely connected. A recent study [7] demonstrated that, contrary to this belief, successive eigenproblems in a sequence are strongly correlated with one another. In particular, by monitoring the subspace angles between eigenvectors of successive eigenproblems, it was shown that these angles decrease noticeably after the first few iterations, with the eigenvectors becoming close to collinear. This result suggests that the eigenvectors solving a specific eigenproblem in the sequence can be used as an approximate solution for the following eigenproblem. In this work we present results in line with this intuition: we provide numerical examples where suitably chosen block iterative eigensolvers achieve a substantial speed-up by reusing eigenvectors. These results will eventually open the way to a widespread use of block iterative eigensolvers in ab initio electronic structure codes based on the LAPW approach. Comment: 12 pages, 5 figures. Accepted for publication in Computer Physics Communications.
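    A hedged sketch of the idea, using SciPy's LOBPCG as a stand-in block iterative solver (the paper targets dense generalized eigenproblems in LAPW codes, solved with other eigensolvers): the eigenvectors of one cycle seed the next, and scipy.linalg.subspace_angles tracks how quickly successive subspaces align. Names and the toy problem sequence are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import subspace_angles
from scipy.sparse.linalg import lobpcg

def solve_scf_sequence(problems, nev, seed=0):
    """Solve a sequence of correlated generalized eigenproblems (A_i, B_i).

    The converged eigenvectors of cycle i-1 seed the block iteration of
    cycle i; the principal angles between successive solutions show how
    quickly the subspaces lock onto each other.
    """
    rng = np.random.default_rng(seed)
    n = problems[0][0].shape[0]
    X = rng.standard_normal((n, nev))      # random block only for the first cycle
    prev = None
    for i, (A, B) in enumerate(problems):
        w, X = lobpcg(A, X, B=B, largest=False, tol=1e-8, maxiter=500)
        if prev is not None:
            ang = np.degrees(subspace_angles(prev, X).max())
            print(f"cycle {i}: largest principal angle to previous cycle = {ang:.2f} deg")
        prev = X.copy()
    return w, X

# Toy sequence of SPD pairs that change slightly from one cycle to the next.
rng = np.random.default_rng(1)
n, nev = 300, 6
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)
B = np.eye(n)
problems = [(A0 + 0.01 * i * np.diag(rng.standard_normal(n)), B) for i in range(5)]
solve_scf_sequence(problems, nev)
```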

    Tuned preconditioners for the eigensolution of large SPD matrices arising in engineering problems

    In this paper, we study a class of tuned preconditioners designed to accelerate both the DACG-Newton method and the implicitly restarted Lanczos method for the computation of the leftmost eigenpairs of large, sparse, symmetric positive definite matrices arising in large-scale scientific computations. These tuning strategies are based on low-rank modifications of a given initial preconditioner. We present some theoretical properties of the preconditioned matrix, and we experimentally show how the aforementioned methods benefit from the acceleration provided by these tuned/deflated preconditioners. Comparisons are carried out with the Jacobi-Davidson method on matrices arising from various large realistic problems, obtained from finite element discretizations of PDEs modeling either groundwater flow in porous media or geomechanical processes in reservoirs. The numerical results show that the Newton-based methods (which include the Jacobi-Davidson method) are to be preferred to the, albeit efficiently implemented, implicitly restarted Lanczos method whenever a small to moderate number of eigenpairs is required. © 2016 John Wiley & Sons, Ltd.
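    As an illustration of the tuning idea, the sketch below applies one common rank-k tuning from the literature, P_X = P + W (Wᵀ X)⁻¹ Wᵀ with W = A X − P X, which forces the modified preconditioner to act like A on the current approximate eigenvectors; the paper's exact tuning and deflation strategies may differ, and all names here are illustrative.

```python
import numpy as np

def tuned_preconditioner(P, A, X):
    """Rank-k 'tuned' modification of a symmetric preconditioner P.

    With W = A X - P X, the update P_X = P + W (W^T X)^{-1} W^T satisfies
    P_X X = A X: the preconditioner acts like A on the current approximate
    eigenvectors, the property exploited to accelerate Newton-type and
    restarted Lanczos eigensolvers.
    """
    W = A @ X - P @ X
    return P + W @ np.linalg.solve(W.T @ X, W.T)

# Small check of the tuning property on a random SPD matrix.
rng = np.random.default_rng(2)
n, k = 100, 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
P = np.diag(np.diag(A))                       # crude initial (Jacobi) preconditioner
X = np.linalg.qr(rng.standard_normal((n, k)))[0]
P_X = tuned_preconditioner(P, A, X)
print(np.allclose(P_X @ X, A @ X))            # True: tuning condition holds
```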

    A robust adaptive algebraic multigrid linear solver for structural mechanics

    The numerical simulation of structural mechanics applications via finite elements usually requires the solution of large, ill-conditioned linear systems, especially when accurate results are sought for derived variables interpolated with lower order functions, such as stress or deformation fields. This task is the most time-consuming kernel in commercial simulators; hence, the development of robust and efficient linear solvers for such applications is of significant interest. In this context, direct solvers, which are based on LU factorization techniques, are often used due to their robustness and easy setup; however, they reach at best superlinear complexity and thus have limited applicability as the problem size grows. On the other hand, iterative solvers based on algebraic multigrid (AMG) preconditioners can reach linear complexity for sufficiently regular problems, but they do not always converge and require more knowledge from the user for an efficient setup. In this work, we present an adaptive AMG method specifically designed to improve its usability and efficiency in the solution of structural problems. We show numerical results for several practical applications with millions of unknowns and compare our method with two state-of-the-art linear solvers, proving its efficiency and robustness. Comment: 50 pages, 16 figures, submitted to CMAM.
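    For readers who want to experiment with AMG-preconditioned iterations of this kind, a minimal sketch using the PyAMG library on a model problem is given below; this is a generic smoothed-aggregation setup, not the adaptive AMG method presented in the paper, and for a real elasticity problem the rigid-body modes would be supplied as near-nullspace candidates.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Model problem standing in for a structural stiffness matrix; for elasticity
# one would pass rigid-body modes as near-nullspace candidates through the
# B= argument of smoothed_aggregation_solver.
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.random.default_rng(3).standard_normal(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)     # build the AMG hierarchy
M = ml.aspreconditioner(cycle='V')            # one V-cycle used as a preconditioner

iterations = []
x, info = cg(A, b, M=M, callback=lambda xk: iterations.append(1))
print(f"converged: {info == 0}, iterations: {len(iterations)}, "
      f"residual: {np.linalg.norm(b - A @ x):.2e}")
```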

    Robust preconditioners via generalized eigenproblems for hybrid sparse linear solvers

    The solution of large sparse linear systems is one of the most time-consuming kernels in many numerical simulations. The domain decomposition community has developed many efficient and robust methods in the last decades. While many of these solvers fall into the abstract Schwarz (aS) framework, their robustness has originally been demonstrated on a case-by-case basis. In this paper, we propose a bound for the condition number of all deflated aS methods provided that the coarse grid consists of the assembly of local components that contain the kernel of some local operators. We show that classical results from the literature on particular instances of aS methods can be retrieved from this bound. We then show that such a coarse grid correction can be explicitly obtained algebraically via generalized eigenproblems, leading to a condition number independent of the number of domains. This result can be readily applied to retrieve or improve the bounds previously obtained via generalized eigenproblems in the particular cases of Neumann-Neumann (NN), Additive Schwarz (AS) and optimized Robin, but it also generalizes them when applied with approximate local solvers. Interestingly, the proposed methodology turns out to be a comparison of the considered particular aS method with generalized versions of both NN and AS for tackling the lower and upper part of the spectrum, respectively. We furthermore show that the application of the considered grid corrections in an additive fashion is robust in the AS case, although it is not robust for aS methods in general. In particular, the proposed framework allows for ensuring the robustness of the AS method applied on the Schur complement (AS/S), either with deflation or additively, and with the freedom of relying on an approximate local Schur complement. Numerical experiments illustrate these statements.
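    A minimal sketch of how such a coarse grid can be assembled algebraically from local generalized eigenproblems and combined with local solves in a two-level additive Schwarz preconditioner; the exact local eigenproblem, scaling, and selection threshold differ between the aS variants analysed in the paper, the assembly of the local Neumann matrices is not shown, and every name below is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import eigh

def coarse_space_from_local_eigenproblems(A, subdomains, local_neumann, weights, tau):
    """Assemble a coarse basis Z from per-subdomain generalized eigenproblems.

    For subdomain j (dof index set subdomains[j]) we pair its local Neumann
    matrix with the weighted restriction of A and keep, extended by zero, the
    eigenvectors whose eigenvalue falls below tau.  Deflating these directions
    is what yields a condition number independent of the number of domains.
    """
    n = A.shape[0]
    cols = []
    for j, dofs in enumerate(subdomains):
        Dj = np.diag(weights[dofs])                   # partition-of-unity weights
        Ajj = A[np.ix_(dofs, dofs)]
        lam, V = eigh(local_neumann[j], Dj @ Ajj @ Dj)
        for l, v in zip(lam, V.T):
            if l < tau:
                z = np.zeros(n)
                z[dofs] = Dj @ v
                cols.append(z)
    return np.column_stack(cols) if cols else np.zeros((n, 0))

def two_level_additive_schwarz(A, subdomains, Z):
    """Return r -> Z (Z^T A Z)^{-1} Z^T r + sum_j R_j^T A_jj^{-1} R_j r."""
    coarse = np.linalg.inv(Z.T @ A @ Z) if Z.shape[1] else None
    local = [(dofs, np.linalg.inv(A[np.ix_(dofs, dofs)])) for dofs in subdomains]
    def apply(r):
        x = Z @ (coarse @ (Z.T @ r)) if coarse is not None else np.zeros_like(r)
        for dofs, Ainv in local:
            x[dofs] += Ainv @ r[dofs]
        return x
    return apply
```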