
    Experiments on a Parallel Nonlinear Jacobi–Davidson Algorithm

    The Jacobi–Davidson (JD) algorithm is well suited to computing a few eigenpairs of large sparse complex symmetric nonlinear eigenvalue problems. The performance of JD crucially depends on the treatment of the so-called correction equation, in particular the preconditioner, and on the initial vector. Depending on the choice of the spectral shift and the accuracy of the solution, the convergence of JD can vary from linear to cubic. We investigate parallel preconditioners for the Krylov space method used to solve the correction equation. We apply our nonlinear Jacobi–Davidson (NLJD) method to quadratic eigenvalue problems that originate from the time-harmonic Maxwell equation in the modeling and simulation of resonating electromagnetic structures.
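
    For concreteness, the correction equation at the heart of JD solves, approximately, the projected system (I − uuᵀ)(A − θI)(I − uuᵀ) t = −r for a correction t orthogonal to the current Ritz vector u. The following is a minimal single-vector sketch of the linear symmetric case only, not the paper's nonlinear/quadratic setting; the unpreconditioned GMRES inner solve and the SciPy ≥ 1.12 `rtol` keyword are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jacobi_davidson(A, v0, tol=1e-8, max_outer=50, inner_rtol=1e-2):
    """Basic JD for one extreme eigenpair of a real symmetric matrix A.

    Single-vector sketch of the linear case; the paper's NLJD variant
    targets nonlinear (quadratic) eigenvalue problems.
    """
    n = A.shape[0]
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)      # search space basis
    for _ in range(max_outer):
        H = V.T @ (A @ V)                            # Rayleigh-Ritz projection
        theta, s = np.linalg.eigh(H)
        theta, u = theta[0], V @ s[:, 0]             # smallest Ritz pair
        r = A @ u - theta * u                        # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        # Correction equation: (I - uu^T)(A - theta*I)(I - uu^T) t = -r,
        # solved only approximately; the shift and the inner accuracy
        # control whether outer convergence is linear, quadratic, or cubic.
        def matvec(x, u=u, theta=theta):
            x = x - u * (u @ x)                      # project out u
            y = A @ x - theta * x                    # apply A - theta*I
            return y - u * (u @ y)                   # project again
        t, _ = gmres(LinearOperator((n, n), matvec=matvec), -r,
                     rtol=inner_rtol)
        t = t - V @ (V.T @ t)                        # orthogonalize against V
        V = np.hstack([V, (t / np.linalg.norm(t)).reshape(n, 1)])
    return theta, u
```

    The inner tolerance inner_rtol is the knob the abstract alludes to: solving the correction equation more accurately, with a good preconditioner in place of the bare GMRES call, pushes the outer convergence from linear toward cubic.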

    Efficient computation of matrix power-vector products: application for space-fractional diffusion problems

    A novel algorithm is proposed for computing matrix power-vector products A^α v, where A is a symmetric positive semidefinite sparse matrix and α > 0. The method can be applied for the efficient implementation of the matrix transformation method to solve space-fractional diffusion problems. The performance of the new algorithm is studied in comparison with conventional MATLAB subroutines for computing matrix powers.
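
    As a point of reference for what such a routine computes: when A is small and dense, A^α v follows directly from the spectral decomposition A = VΛVᵀ, since A^α v = VΛ^α Vᵀ v. The sketch below is this conventional O(n³) baseline, in the spirit of the approach the paper compares against, not the proposed algorithm itself.

```python
import numpy as np

def matrix_power_vector(A, v, alpha):
    """Dense baseline for A^alpha v with A symmetric positive semidefinite.

    Uses A = V diag(w) V^T, so A^alpha v = V diag(w**alpha) V^T v.
    O(n^3) and dense -- exactly the cost the paper's algorithm avoids
    for large sparse A.
    """
    w, V = np.linalg.eigh(A)              # spectral decomposition of A
    w = np.clip(w, 0.0, None)             # clear tiny negative round-off
    return V @ (w**alpha * (V.T @ v))

# Sanity check against an integer power: A^2 v computed two ways.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T                               # symmetric positive semidefinite
v = rng.standard_normal(50)
assert np.allclose(matrix_power_vector(A, v, 2.0), A @ (A @ v))
```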

    A multigrid accelerated eigensolver for the Hermitian Wilson-Dirac operator in lattice QCD

    Eigenvalues of the Hermitian Wilson-Dirac operator are of special interest in several lattice QCD simulations, e.g., for noise reduction when evaluating all-to-all propagators. In this paper we present a Davidson-type eigensolver that utilizes the structural properties of the Hermitian Wilson-Dirac operator Q to compute eigenpairs of this operator corresponding to small eigenvalues. The main idea is to exploit a synergy between the (outer) eigensolver and its (inner) iterative scheme which solves shifted linear systems. This is achieved by adapting the multigrid DD-αAMG algorithm to a solver for shifted systems involving the Hermitian Wilson-Dirac operator. We demonstrate that updating the coarse grid operator using eigenvector information obtained in the course of the generalized Davidson method is crucial to achieve good performance when calculating many eigenpairs, as our study of the local coherence shows. We compare our method with the commonly used software packages PARPACK and PRIMME in numerical tests, where we are able to achieve significant improvements, with speed-ups of up to one order of magnitude and a near-linear scaling with respect to the number of eigenvalues. For illustration we compare the distribution of the small eigenvalues of Q on a 64×32³ lattice with what is predicted by the Banks-Casher relation in the infinite volume limit.
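
    The inner-outer synergy described here can be sketched generically: a generalized Davidson iteration expands its search space with an approximate solution of the shifted system (Q − θI)t = r. In the sketch below, a real symmetric matrix stands in for the (complex Hermitian) Wilson-Dirac operator, and MINRES with a modest tolerance stands in for the paper's adapted DD-αAMG multigrid solver; SciPy ≥ 1.12 is assumed for the `rtol` keyword.

```python
import numpy as np
from scipy.sparse.linalg import minres

def gd_expand(Q, V, inner_rtol=1e-1):
    """One expansion step of a generalized Davidson iteration for the
    smallest eigenpair of a (here: real symmetric) matrix Q.

    The new direction approximately solves the shifted system
    (Q - theta*I) t = r; MINRES stands in for the adapted multigrid
    (DD-alphaAMG) solver used in the paper.
    """
    H = V.T @ (Q @ V)                          # projected problem
    theta, s = np.linalg.eigh(H)
    u = V @ s[:, 0]                            # Ritz vector, smallest Ritz value
    r = Q @ u - theta[0] * u                   # residual drives the expansion
    t, _ = minres(Q, r, shift=theta[0],        # solves (Q - shift*I) t = r
                  rtol=inner_rtol)             # low-accuracy inner solve
    t = t - V @ (V.T @ t)                      # keep the basis orthonormal
    return np.linalg.norm(r), np.hstack([V, (t / np.linalg.norm(t)).reshape(-1, 1)])
```

    Looping gd_expand until the returned residual norm falls below a tolerance yields the smallest eigenpair; the paper's contribution lies in making the inner solve fast via multigrid and in refreshing the coarse grid operator with the emerging eigenvector information.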

    A new stopping criterion for Krylov solvers applied in Interior Point Methods

    A surprising result is presented in this paper, with possibly far-reaching consequences for any optimization technique that relies on Krylov subspace methods to solve the underlying linear equation systems. The advantages of the new technique are illustrated in the context of Interior Point Methods (IPMs). When an iterative method is applied to solve the linear equation system in IPMs, attention is usually focused on accelerating its convergence by designing appropriate preconditioners, while the linear solver itself is applied as a black box with a standard termination criterion that asks for a sufficient reduction of the residual in the linear system. Such an approach often leads to an unnecessary 'oversolving' of linear equations. In this paper a new specialized termination criterion for Krylov methods used in IPMs is designed. It is derived from a deep understanding of IPM needs and is demonstrated to preserve the polynomial worst-case complexity of these methods. The new criterion has been adapted to the Conjugate Gradient (CG) and Minimum Residual (MINRES) methods applied in the IPM context, and has been tested on a set of linear and quadratic optimization problems including compressed sensing, image processing and instances with partial differential equation constraints. Evidence gathered from these computational experiments shows that the new technique delivers significant improvements in the number of inner (linear) iterations, and these translate into significant savings in IPM solution time.
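
    The mechanism such a criterion needs, independent of its IPM-specific form (which is derived in the paper and not reproduced here), is a Krylov solver that can be terminated by an arbitrary test on the current iterate rather than by a fixed residual reduction. One generic way to retrofit this onto SciPy's CG is sketched below; SciPy ≥ 1.12 is assumed, and the μ-based test at the end is a hypothetical placeholder, not the paper's criterion.

```python
import numpy as np
from scipy.sparse.linalg import cg

def cg_with_custom_stop(A, b, stop_test, maxiter=500):
    """Run CG, terminating via a user-supplied test on the iterate.

    SciPy's cg only exposes residual tolerances, so a custom 'don't
    oversolve' rule is injected through the callback, which raises to
    exit early.  The paper's IPM-derived criterion would go in
    stop_test; its exact form is not reproduced here.
    """
    class _Converged(Exception):
        pass

    last = {}

    def cb(xk):                               # called once per CG iteration
        last["x"] = xk.copy()
        if stop_test(xk):
            raise _Converged

    try:
        x, _ = cg(A, b, rtol=1e-16, atol=0.0, maxiter=maxiter, callback=cb)
        return x
    except _Converged:
        return last["x"]

# Hypothetical IPM-style use: stop once the residual is small relative
# to the current barrier parameter mu, instead of driving it to zero.
rng = np.random.default_rng(1)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100 * np.eye(100)               # SPD, as in IPM normal equations
b = rng.standard_normal(100)
mu = 1e-2                                     # barrier parameter (assumed given)
x = cg_with_custom_stop(A, b, lambda xk: np.linalg.norm(b - A @ xk) <= mu)
```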

    Accelerated Line-search and Trust-region Methods


    Convergence analysis of a block preconditioned steepest descent eigensolver with implicit deflation

    Gradient-type iterative methods for solving Hermitian eigenvalue problems can be accelerated by using preconditioning and deflation techniques. A preconditioned steepest descent iteration with implicit deflation (PSD-id) is one such method. The convergence behavior of the PSD-id was recently investigated based on the pioneering work of Samokish on the preconditioned steepest descent method (PSD). The resulting non-asymptotic estimates indicate superlinear convergence of the PSD-id under strong assumptions on the initial guess. The present paper utilizes an alternative convergence analysis of the PSD by Neymeyr under much weaker assumptions. We embed Neymeyr's approach into the analysis of the PSD-id using a restricted formulation of the PSD-id. More importantly, we extend the new convergence analysis of the PSD-id to a practically preferred block version, the BPSD-id, and show the cluster robustness of the BPSD-id. Numerical examples are provided to validate the theoretical estimates.
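
    For orientation, the underlying PSD iteration moves along the preconditioned residual and re-extracts the best eigenpair from a two-dimensional subspace. A minimal single-vector sketch follows; T_apply is a user-supplied preconditioner, and the implicit deflation of converged eigenvectors that defines the -id variant is only indicated in a comment.

```python
import numpy as np

def psd_smallest(A, T_apply, x0, tol=1e-8, maxit=200):
    """Preconditioned steepest descent (PSD) for the smallest eigenpair
    of a real symmetric matrix A; T_apply(r) applies T ~ A^{-1}.

    The PSD-id variant analyzed in the paper additionally deflates
    already-converged eigenvectors from the iteration (not shown).
    """
    x = x0 / np.linalg.norm(x0)
    rho = x @ (A @ x)                          # Rayleigh quotient
    for _ in range(maxit):
        r = A @ x - rho * x                    # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        p = T_apply(r)                         # preconditioned descent direction
        S, _ = np.linalg.qr(np.column_stack([x, p]))
        theta, c = np.linalg.eigh(S.T @ (A @ S))   # Rayleigh-Ritz on span{x, p}
        rho, x = theta[0], S @ c[:, 0]
    return rho, x
```

    A simple choice for T_apply is the Jacobi preconditioner lambda r: r / np.diag(A); analyses of the Samokish/Neymeyr kind bound the convergence rate in terms of the quality of T as an approximation of the inverse of A.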

    On Inner Iterations in the Shift-Invert Residual Arnoldi Method and the Jacobi–Davidson Method

    Using a new analysis approach, we establish a general convergence theory of the Shift-Invert Residual Arnoldi (SIRA) method for computing a simple eigenvalue nearest a given target σ and the associated eigenvector. In SIRA, a subspace expansion vector at each step is obtained by solving a certain inner linear system. We prove that the inexact SIRA method mimics the exact SIRA well; that is, the former uses almost the same outer iterations to achieve convergence as the latter does, provided all the inner linear systems are solved iteratively with low or modest accuracy during outer iterations. Based on the theory, we design practical stopping criteria for the inner solves. Our analysis concerns one step of subspace expansion, and the approach applies to the Jacobi–Davidson (JD) method with a fixed target σ as well; a similar general convergence theory is obtained for it. Numerical experiments confirm our theory and demonstrate that the inexact SIRA and JD are similarly effective and considerably superior to the inexact shift-invert Arnoldi (SIA) method.
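
    In code, one SIRA expansion step with a fixed target σ amounts to a Ritz extraction followed by an inexact solve of (A − σI)t = r. The sketch below uses GMRES as the inner solver and a dense shifted matrix for simplicity (both illustrative choices; SciPy ≥ 1.12 assumed); per the theory above, a modest inner_rtol suffices for the outer iteration to track exact SIRA.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def sira_expand(A, sigma, V, inner_rtol=1e-2):
    """One subspace-expansion step of inexact SIRA for the eigenvalue of
    A nearest a fixed target sigma.

    The inner solve of (A - sigma*I) t = r is deliberately inexact; low
    or modest accuracy suffices for the outer iteration to mimic exact
    SIRA, which is what the stopping criteria in the paper exploit.
    """
    H = V.conj().T @ (A @ V)                      # projected problem
    theta, s = np.linalg.eig(H)
    k = np.argmin(np.abs(theta - sigma))          # Ritz value nearest the target
    u = V @ s[:, k]
    r = A @ u - theta[k] * u                      # residual of the Ritz pair
    t, _ = gmres(A - sigma * np.eye(A.shape[0]), r, rtol=inner_rtol)
    t = t - V @ (V.conj().T @ t)                  # orthogonalize, then extend V
    return np.linalg.norm(r), np.column_stack([V, t / np.linalg.norm(t)])
```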