598 research outputs found

    The parallel computation of the smallest eigenpair of an acoustic problem with damping

    Acoustic problems with damping may give rise to large quadratic eigenproblems. Efficient and parallelizable algorithms are required for solving these problems. The recently proposed Jacobi-Davidson method is well suited for parallel computing: no matrix decomposition and no back or forward substitutions are needed. This paper describes the parallel solution of the smallest eigenpair of a realistic and very large quadratic eigenproblem with the Jacobi-Davidson method.
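    For orientation, damped acoustic problems lead, after discretization, to a quadratic eigenproblem in the usual mass/damping/stiffness form; the display below is that generic form (a standard statement of the problem class, not the specific matrices used in the paper):

    \[
      \left(\lambda^{2} M + \lambda C + K\right) x = 0, \qquad x \neq 0,
    \]

    where $M$, $C$, and $K$ are the (large, sparse) mass, damping, and stiffness matrices, and the "smallest eigenpair" is the eigenvalue $\lambda$ of smallest magnitude together with its eigenvector $x$.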

    Jacobi-Davidson type methods for generalized eigenproblems and polynomial eigenproblems: part I

    In this paper we will show how the Jacobi-Davidson iterative method can be used to solve generalized eigenproblems. Ideas similar to those for the standard eigenproblem are used, but the projections that are required to reduce the given problem to a small, manageable size need more attention. We show that by proper choices for the projection operators quadratic convergence can be achieved. The advantage of our approach is that none of the involved operators needs to be inverted. It turns out that similar projections can be used for the iterative approximation of selected eigenvalues and eigenvectors of polynomial eigenvalue equations. This approach has already been used with great success for the solution of quadratic eigenproblems associated with acoustic problems.
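    As a rough illustration of the projections in question, the Jacobi-Davidson correction equation for the standard problem $Ax = \lambda x$ reads (this is the textbook form, not necessarily the exact operators derived in the paper):

    \[
      (I - u u^{*})(A - \theta I)(I - u u^{*})\, t = -r, \qquad t \perp u,
    \]

    with current approximate eigenpair $(\theta, u)$, $\|u\| = 1$, and residual $r = A u - \theta u$; the correction $t$ expands the search subspace and only needs to be computed approximately. For the generalized problem $A x = \lambda B x$ the paper replaces these orthogonal projectors by suitably chosen oblique ones, which is what yields quadratic convergence while leaving all of the involved operators un-inverted.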

    Efficient numerical diagonalization of Hermitian 3x3 matrices

    A very common problem in science is the numerical diagonalization of symmetric or Hermitian 3x3 matrices. Since standard "black box" packages may be too inefficient if the number of matrices is large, we study several alternatives. We consider optimized implementations of the Jacobi, QL, and Cuppen algorithms and compare them with an analytical method relying on Cardano's formula for the eigenvalues and on vector cross products for the eigenvectors. Jacobi is the most accurate, but also the slowest, method, while QL and Cuppen are good general-purpose algorithms. The analytical algorithm outperforms the others by more than a factor of 2, but becomes inaccurate or may even fail completely if the matrix entries differ greatly in magnitude. This can mostly be circumvented by using a hybrid method, which falls back to QL if conditions are such that the analytical calculation might become too inaccurate. For all algorithms, we give an overview of the underlying mathematical ideas and present detailed benchmark results. C and Fortran implementations of our code are available for download from http://www.mpi-hd.mpg.de/~globes/3x3/.
    Comment: 13 pages, no figures, new hybrid algorithm added, matches published version, typo in Eq. (39) corrected; software library available at http://www.mpi-hd.mpg.de/~globes/3x3
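    The C and Fortran library at the URL above is the reference implementation; the fragment below is only a minimal sketch, in C, of the Cardano-based eigenvalue step for a real symmetric 3x3 matrix (the Hermitian case, the cross-product eigenvectors, and the QL fallback of the hybrid method are omitted):

    #include <math.h>

    /* Eigenvalues of a real symmetric 3x3 matrix A via Cardano's formula.
     * The three eigenvalues are written to w; in the Cardano branch they
     * come out ordered w[0] >= w[1] >= w[2]. Minimal sketch only: no attempt
     * is made to detect the ill-conditioned cases for which the paper
     * recommends falling back to the QL algorithm. */
    static void sym3x3_eigenvalues(const double A[3][3], double w[3])
    {
        const double p1 = A[0][1]*A[0][1] + A[0][2]*A[0][2] + A[1][2]*A[1][2];

        if (p1 == 0.0) {                      /* A is already diagonal */
            w[0] = A[0][0]; w[1] = A[1][1]; w[2] = A[2][2];
        } else {
            const double q  = (A[0][0] + A[1][1] + A[2][2]) / 3.0;  /* trace/3 */
            const double p2 = (A[0][0]-q)*(A[0][0]-q) + (A[1][1]-q)*(A[1][1]-q)
                            + (A[2][2]-q)*(A[2][2]-q) + 2.0*p1;
            const double p  = sqrt(p2 / 6.0);

            /* B = (A - q*I)/p ; r = det(B)/2 lies in [-1, 1] up to rounding */
            double B[3][3];
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    B[i][j] = (A[i][j] - (i == j ? q : 0.0)) / p;

            double r = ( B[0][0]*(B[1][1]*B[2][2] - B[1][2]*B[2][1])
                       - B[0][1]*(B[1][0]*B[2][2] - B[1][2]*B[2][0])
                       + B[0][2]*(B[1][0]*B[2][1] - B[1][1]*B[2][0]) ) / 2.0;
            if (r < -1.0) r = -1.0;
            if (r >  1.0) r =  1.0;

            const double phi = acos(r) / 3.0;
            w[0] = q + 2.0*p*cos(phi);                            /* largest  */
            w[2] = q + 2.0*p*cos(phi + 2.0943951023931953);       /* +2*pi/3: smallest */
            w[1] = 3.0*q - w[0] - w[2];                           /* middle (trace)    */
        }
    }

    As the abstract notes, this closed-form route loses accuracy when the matrix entries differ greatly in magnitude, which is why the hybrid method falls back to QL in those cases.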