
    The parallel computation of the smallest eigenpair of an acoustic problem with damping

    Acoustic problems with damping may give rise to large quadratic eigenproblems. Efficient and parallelizable algorithms are required for solving these problems. The recently proposed Jacobi-Davidson method is well suited for parallel computing: no matrix decomposition and no back or forward substitutions are needed. This paper describes the parallel computation of the smallest eigenpair of a realistic and very large quadratic eigenproblem with the Jacobi-Davidson method.

    Quadratic eigenproblems are no problem

    High-dimensional eigenproblems often arise in the solution of scientific problems involving stability or wave modeling. In this article we present results for a quadratic eigenproblem that we encountered in solving an acoustics problem, specifically in modeling the propagation of waves in a room in which one wall was constructed of sound-absorbing material. Efficient algorithms are known for the standard linear eigenproblem, Ax = λx, where A is a real or complex-valued square matrix of order n. Generalized eigenproblems of the form Ax = λBx, which occur in finite element formulations, are usually reduced to the standard problem, in a form such as B⁻¹Ax = λx. The reduction requires an expensive inversion operation for one of the matrices involved. Higher-order polynomial eigenproblems are also usually transformed into standard eigenproblems. We discuss here the second-degree (i.e., quadratic) eigenproblem (λ²C₂ + λC₁ + C₀)x = 0, in which the matrices Cᵢ are square matrices.
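
    The transformation into a standard (here, generalized linear) eigenproblem that the abstract mentions is usually a companion linearization: the quadratic problem of order n is rewritten as a linear pencil of order 2n. A minimal Python sketch of one common first companion form follows; the helper name and the random test matrices are illustrative, not taken from the article.

        import numpy as np
        from scipy.linalg import eig

        def solve_quadratic_eigenproblem(C2, C1, C0):
            """Solve (λ²C₂ + λC₁ + C₀)x = 0 via companion linearization.

            With z = [x; λx], the quadratic problem of order n becomes the
            generalized linear eigenproblem A z = λ B z of order 2n.
            """
            n = C0.shape[0]
            I, Z = np.eye(n), np.zeros((n, n))
            A = np.block([[Z, I],
                          [-C0, -C1]])
            B = np.block([[I, Z],
                          [Z, C2]])
            lam, Zv = eig(A, B)
            return lam, Zv[:n, :]   # eigenvectors x sit in the top block of z

        # Illustrative residual check on random matrices.
        rng = np.random.default_rng(0)
        C2, C1, C0 = (rng.standard_normal((4, 4)) for _ in range(3))
        lam, X = solve_quadratic_eigenproblem(C2, C1, C0)
        x = X[:, 0]
        res = (lam[0]**2 * C2 + lam[0] * C1 + C0) @ x
        print(np.linalg.norm(res) / np.linalg.norm(x))   # should be ~1e-14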

    A parallel Jacobi-Davidson method for solving generalized eigenvalue problems in linear magnetohydrodynamics

    We study the solution of generalized eigenproblems generated by a model which is used for stability investigation of tokamak plasmas. The eigenvalue problems are of the form Ax = λBx, in which the complex matrices A and B are block tridiagonal, and B is Hermitian positive definite. The Jacobi-Davidson method appears to be an excellent method for parallel computation of a few selected eigenvalues, because the basic ingredients are matrix-vector products, vector updates and inner products. The method is based on solving projected eigenproblems of order typically less than 30. The computation of an approximate solution of a large system of linear equations is usually the most expensive step in the algorithm. By using a suitable preconditioner, only a moderate number of steps of an inner iteration is required in order to retain fast convergence for the JD process. Several preconditioning techniques are discussed. It is shown that, for our application, a proper preconditioner is a complete block LU decomposition, which can be used for the computation of several eigenpairs. Reordering strategies based on a combination of block cyclic reduction and domain decomposition result in a well-parallelizable preconditioning technique. Results obtained on 64 processing elements of both a Cray T3D and a T3E are shown.
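
    The "basic ingredients" claim can be made concrete with a minimal Jacobi-Davidson iteration. The sketch below handles only the standard symmetric eigenproblem, not the paper's generalized complex MHD problem, and solves the correction equation exactly where the paper uses a preconditioned inner iteration; all names are invented for illustration.

        import numpy as np

        def jacobi_davidson(A, v0, tol=1e-8, max_outer=50):
            """Minimal Jacobi-Davidson for one extreme eigenpair of symmetric A.

            Outside the correction-equation solve, A is touched only through
            matrix-vector products, vector updates and inner products.
            """
            n = A.shape[0]
            V = (v0 / np.linalg.norm(v0))[:, None]   # orthonormal search basis
            for _ in range(max_outer):
                # Projected eigenproblem; its order equals the basis size
                # and stays small (the paper keeps it typically below 30).
                H = V.T @ (A @ V)
                w, s = np.linalg.eigh(H)
                theta, s = w[-1], s[:, -1]           # largest Ritz pair
                u = V @ s                            # Ritz vector, ||u|| = 1
                r = A @ u - theta * u                # residual
                if np.linalg.norm(r) < tol:
                    break
                # Exact solution of the projected correction equation
                #   (I - uuᵀ)(A - θI)(I - uuᵀ) t = -r,  t ⊥ u,
                # via one solve with (A - θI); large codes replace this
                # with a preconditioned inner Krylov iteration.
                z = np.linalg.solve(A - theta * np.eye(n), u)
                t = -u + z / (u @ z)                 # enforces u ⊥ t
                t -= V @ (V.T @ t)                   # orthogonalize against basis
                V = np.hstack([V, (t / np.linalg.norm(t))[:, None]])
            return theta, u

        # Usage on a random symmetric matrix (illustrative only).
        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 100)); A = (A + A.T) / 2
        theta, u = jacobi_davidson(A, rng.standard_normal(100))
        print(theta, np.linalg.norm(A @ u - theta * u))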

    Restarting parallel Jacobi-Davidson with both standard and harmonic Ritz values

    We study the Jacobi-Davidson method for the solution of large generalized eigenproblems as they arise in magnetohydrodynamics. We have combined Jacobi-Davidson (using standard Ritz values) with a shift-and-invert technique. We apply a complete LU decomposition in which reordering strategies based on a combination of block cyclic reduction and domain decomposition result in a well-parallelizable algorithm. Moreover, we describe a variant of Jacobi-Davidson in which harmonic Ritz values are used. In this variant the same parallel LU decomposition is used, but this time as a preconditioner to solve the correction equation. The size of the relatively small projected eigenproblems which have to be solved in the Jacobi-Davidson method is controlled by several parameters. The influence of these parameters on both the parallel performance and the convergence behaviour is studied. Numerical results of Jacobi-Davidson obtained with standard and harmonic Ritz values are shown. Executions have been performed on a Cray T3E.
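
    To illustrate the role of the LU factorization as a preconditioner for the correction equation, here is a sketch using SciPy's sequential sparse LU inside a projected GMRES solve. It does not reproduce the paper's parallel complete block LU with cyclic-reduction/domain-decomposition reordering, and the function name, tolerances and test matrices are illustrative.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import gmres, splu, LinearOperator

        def solve_correction_equation(A, B, u, theta, inner_steps=20):
            """Approximately solve the Jacobi-Davidson correction equation

                (I - u u*)(A - θB)(I - u u*) t = -r,   t orthogonal to u,

            with GMRES preconditioned by a sparse LU factorization of (A - θB).
            """
            n = A.shape[0]
            S = (A - theta * B).tocsc()
            r = S @ u                     # residual A u - θ B u of the Ritz pair
            lu = splu(S)                  # factor once, reuse every inner step

            def project(x):               # orthogonal projector I - u u*
                x = np.ravel(x)
                return x - u * np.vdot(u, x)

            op = LinearOperator((n, n), dtype=S.dtype,
                                matvec=lambda x: project(S @ project(x)))
            M = LinearOperator((n, n), dtype=S.dtype,
                               matvec=lambda x: project(lu.solve(project(x))))
            t, _ = gmres(op, -r, M=M, maxiter=inner_steps)
            return project(t)

        # Illustrative use with random sparse A, B (not the MHD matrices).
        n = 500
        rng = np.random.default_rng(0)
        A = sp.random(n, n, density=5e-3, random_state=rng, format="csc") + sp.identity(n)
        B = sp.identity(n, format="csc")
        u = np.ones(n) / np.sqrt(n)
        t = solve_correction_equation(A, B, u, theta=0.5)
        print(np.vdot(u, t))              # ~0: the correction is orthogonal to u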

    Lanczos eigensolution method for high-performance computers

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was obtained on the Cray Y-MP using an average of 3.63 processors.
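
    The three kernels identified above (factorization, forward/backward solves, matrix-vector products) are exactly what a shift-invert Lanczos run exercises. A SciPy sketch of that mode follows; the random stiffness and mass matrices are stand-ins, not the paper's panel or transport models.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        n = 2000
        rng = np.random.default_rng(1)
        K = sp.random(n, n, density=1e-3, random_state=rng)
        K = K @ K.T + sp.identity(n)      # symmetric positive definite "stiffness"
        M = sp.identity(n, format="csc")  # trivial "mass" matrix

        # Lowest 6 modes of K x = λ M x via shift-invert Lanczos about σ = 0:
        # eigsh factors (K - σM) once, then each iteration does one
        # forward/backward solve plus matrix-vector work.
        vals, vecs = eigsh(K.tocsc(), k=6, M=M, sigma=0.0, which="LM")
        print(vals)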

    MRRR-based Eigensolvers for Multi-core Processors and Supercomputers

    The real symmetric tridiagonal eigenproblem is of outstanding importance in numerical computations; it arises frequently as part of eigensolvers for standard and generalized dense Hermitian eigenproblems that are based on a reduction to tridiagonal form. For its solution, the algorithm of Multiple Relatively Robust Representations (MRRR, or MR³ for short), introduced in the late 1990s, is among the fastest methods. To compute k eigenpairs of a real n-by-n tridiagonal T, MRRR requires only O(kn) arithmetic operations; in contrast, all the other practical methods require O(k²n) or O(n³) operations in the worst case. This thesis centers on the performance and accuracy of MRRR. (PhD thesis)
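
    MRRR is implemented in LAPACK as the ?stemr routines, which SciPy exposes through the lapack_driver argument of eigh_tridiagonal. A sketch of computing a subset of eigenpairs this way (random T, and the assumption that your SciPy accepts lapack_driver="stemr", as current versions do):

        import numpy as np
        from scipy.linalg import eigh_tridiagonal

        n = 1000
        rng = np.random.default_rng(0)
        d = rng.standard_normal(n)        # diagonal of T
        e = rng.standard_normal(n - 1)    # off-diagonal of T

        # k = 10 eigenpairs selected by index; the O(kn) cost of MRRR comes
        # from computing each eigenvector independently at O(n) cost.
        w, v = eigh_tridiagonal(d, e, select="i", select_range=(0, 9),
                                lapack_driver="stemr")
        print(w)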

    Minimizing Communication for Eigenproblems and the Singular Value Decomposition

    Algorithms have two costs: arithmetic and communication. The latter represents the cost of moving data, either between levels of a memory hierarchy or between processors over a network. Communication often dominates arithmetic and represents a rapidly increasing proportion of the total cost, so we seek algorithms that minimize communication. In \cite{BDHS10}, lower bounds were presented on the amount of communication required for essentially all O(n³)-like algorithms for linear algebra, including eigenvalue problems and the SVD. Conventional algorithms, including those currently implemented in (Sca)LAPACK, perform asymptotically more communication than these lower bounds require. In this paper we present parallel and sequential eigenvalue algorithms (for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms that do attain these lower bounds, and analyze their convergence and communication costs. (43 pages, 11 figures)
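
    For reference, the shape of the \cite{BDHS10} lower bounds, paraphrased: for an algorithm performing G = Θ(n³) flops on a machine (or per processor) with fast memory of size M, the words moved W and messages sent S satisfy

        % Bandwidth (words) and latency (messages) lower bounds:
        W = \Omega\left( \frac{G}{\sqrt{M}} \right)
          = \Omega\left( \frac{n^3}{\sqrt{M}} \right),
        \qquad
        S = \Omega\left( \frac{G}{M^{3/2}} \right)
          = \Omega\left( \frac{n^3}{M^{3/2}} \right).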