
### The Induced Dimension Reduction Method Applied to Convection-Diffusion-Reaction Problems

Discretization of (linearized) convection-diffusion-reaction problems yields a large and sparse nonsymmetric linear system of equations,

Ax = b. (1)

In this work, we compare the computational behavior of the Induced Dimension Reduction method (IDR(s)) [10] with other short-recurrence Krylov methods, specifically the Bi-Conjugate Gradient method (Bi-CG) [1], the restarted Generalized Minimal Residual method (GMRES(m)) [4], and the Bi-Conjugate Gradient Stabilized method (Bi-CGSTAB) [11]
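The comparison setting above can be sketched with SciPy's built-in short-recurrence solvers. This is a toy illustration, not the paper's experiments: SciPy ships GMRES and Bi-CGSTAB but has no IDR(s) implementation, so only those two are shown, and the grid size and convection parameter below are arbitrary choices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Central-difference discretization of -u'' + p*u' = f on (0, 1);
# the convection term makes the tridiagonal matrix nonsymmetric.
n = 100
h = 1.0 / (n + 1)
p = 10.0
diff = 1.0 / h**2
conv = p / (2.0 * h)
A = sp.diags([-diff - conv, 2.0 * diff, -diff + conv], [-1, 0, 1],
             shape=(n, n), format="csr")
b = np.ones(n)

# Restarted GMRES(30) and Bi-CGSTAB, two of the methods compared in the text.
x_gmres, info_g = spla.gmres(A, b, restart=30)
x_bicgstab, info_b = spla.bicgstab(A, b)

# Relative residuals of both approximate solutions.
res = [np.linalg.norm(A @ x - b) / np.linalg.norm(b)
       for x in (x_gmres, x_bicgstab)]
```

Both solvers use short recurrences in the sense of the abstract: memory per iteration is bounded, unlike full GMRES, whose work and storage grow with the iteration count.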

### Alternative correction equations in the Jacobi-Davidson method

The correction equation in the Jacobi-Davidson method is effective in a subspace orthogonal to the current eigenvector approximation, whereas for the continuation of the process only vectors orthogonal to the search subspace are of importance. Such a vector is obtained by orthogonalizing the (approximate) solution of the correction equation against the search subspace. As an alternative, a variant of the correction equation can be formulated that is restricted to the subspace orthogonal to the current search subspace. In this paper, we discuss the effectiveness of this variant. Our investigation is also motivated by the fact that the restricted correction equation can be used for avoiding stagnation in case of defective eigenvalues. Moreover, this equation plays a key role in the inexact TRQ method [18]

### An SVD-approach to Jacobi-Davidson solution of nonlinear Helmholtz eigenvalue problems

Numerical solution of the Helmholtz equation in an infinite domain often involves restriction of the domain to a bounded computational window where a numerical solution method is applied. On the boundary of the computational window artificial transparent boundary conditions are posed, for example, the widely used perfectly matched layers (PMLs) or absorbing boundary conditions (ABCs). Recently proposed transparent-influx boundary conditions (TIBCs) resolve a number of drawbacks typically attributed to PMLs and ABCs, such as the introduction of spurious solutions and the inability to have a tight computational window. Unlike the PMLs or ABCs, the TIBCs lead to a nonlinear dependence of the boundary integral operator on the frequency. Thus, a nonlinear Helmholtz eigenvalue problem arises. This paper presents an approach for solving such nonlinear eigenproblems which is based on a truncated singular value decomposition (SVD) polynomial approximation of the nonlinearity and subsequent solution of the obtained approximate polynomial eigenproblem with the Jacobi-Davidson method

### Accurate approximations to eigenpairs using the harmonic Rayleigh-Ritz method

The problem in this paper is to construct accurate approximations from a subspace to eigenpairs for symmetric matrices using the harmonic Rayleigh-Ritz method. Morgan introduced this concept in [14] as an alternative for Rayleigh-Ritz in large scale iterative methods for computing interior eigenpairs. The focus rests on the choice and influence of the shift and error estimation. We also give a discussion of the differences and similarities with the refined Ritz approach for symmetric matrices. Using some numerical experiments we compare different conditions for selecting appropriate harmonic Ritz vectors
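As a toy check of the harmonic Rayleigh-Ritz extraction described above (a sketch with an artificial matrix, not the paper's experiments): with shift τ, the harmonic Ritz values θ of a subspace spanned by the columns of V solve the small pencil W^T W s = (θ − τ) W^T V s, where W = (A − τI)V. When V spans an exactly invariant subspace, the harmonic Ritz values reproduce the corresponding eigenvalues; the matrix and shift value below are arbitrary choices.

```python
import numpy as np
from scipy.linalg import eig

# Diagonal test matrix with eigenvalues 1..10; V spans the exact invariant
# subspace belonging to eigenvalues 2, 5, and 7.
A = np.diag(np.arange(1.0, 11.0))
V = np.eye(10)[:, [1, 4, 6]]
tau = 4.9  # shift placed between interior eigenvalues (arbitrary)

# Harmonic Rayleigh-Ritz extraction: solve W^T W s = mu * W^T V s,
# where mu = theta - tau.
W = (A - tau * np.eye(10)) @ V
mu = eig(W.T @ W, W.T @ V, right=False)
harmonic = np.sort((tau + mu).real)  # recovers exactly [2, 5, 7]
```

In contrast to ordinary Rayleigh-Ritz, the projected pencil weights the subspace by (A − τI), which is what makes the extraction reliable for interior eigenvalues near the shift.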

### Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems

The three-term Lanczos process for a symmetric matrix leads to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving a reduced system in one way or another. This leads to well-known methods: MINRES (minimal residual), GMRES (generalized minimal residual), and SYMMLQ (symmetric LQ). We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors, which are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples
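A minimal example of the class of systems discussed above, solved with SciPy's MINRES (SciPy provides MINRES and GMRES but no SYMMLQ; the diagonal test matrix is an arbitrary stand-in, chosen only so the exact solution is known):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Symmetric indefinite test matrix: half the spectrum positive, half negative,
# so CG is not applicable and a Lanczos-based solver is needed.
n = 200
d = np.concatenate([np.linspace(1.0, 10.0, n // 2),
                    np.linspace(-10.0, -1.0, n // 2)])
A = sp.diags(d, format="csr")
b = np.ones(n)

# MINRES: minimal-residual solution over the Lanczos-generated Krylov
# subspace, using short recurrences.
x, info = spla.minres(A, b)
```

In exact arithmetic MINRES, GMRES, and SYMMLQ applied to this Lanczos basis would produce closely related iterates; the abstract's point is that their behavior under rounding errors differs.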

### Stability control for approximate implicit time-stepping schemes with minimal residual iterations

Implicit schemes for the integration of ODEs are popular when stability is more of concern than accuracy, for instance for the computation of a steady state solution. However, in particular for very large systems the solution of the involved linear systems may be very expensive. In this paper we study the solution of these linear systems by a moderate number of iterations of the minimum residual iterative method GMRES. Of course, this puts limits on the step size, since these approximate schemes may be viewed as explicit schemes and these are never unconditionally stable. It turns out that even a modest degree of approximation allows rather large time steps, and we propose a simple mechanism for the control of the step size with respect to stability
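The idea above can be sketched as follows: take implicit Euler steps for a stiff linear problem, but replace the exact linear solve at each step by a handful of GMRES iterations. This is a toy illustration, not the paper's scheme or step-size control; the operator, step size, and iteration counts are arbitrary choices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stiff test problem u' = A u with A the 1-D diffusion operator
# (eigenvalues in (-4/h^2, 0)); implicit Euler: (I - dt*A) u_new = u_old.
n = 100
h = 1.0 / (n + 1)
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
I = sp.identity(n, format="csr")
dt = 1e-2  # far beyond the explicit stability limit h^2/2
M = (I - dt * A).tocsr()

# Smooth initial data (a combination of two eigenmodes of A).
x = h * np.arange(1, n + 1)
u = np.sin(np.pi * x) + 0.5 * np.sin(5 * np.pi * x)
norm0 = np.linalg.norm(u)

for _ in range(10):
    # A moderate number of GMRES iterations instead of an exact solve:
    # the result is an approximate implicit scheme as in the abstract.
    u, _ = spla.gmres(M, u, x0=u, restart=5, maxiter=5)
```

With an exact solve the norm would contract at every step; the point of the paper is that even this truncated-iteration variant remains stable for time steps far larger than an explicit scheme would allow, provided the step size is controlled.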

### Low-dimensional Krylov subspace iterations for enhancing stability of time-step integration schemes

In a conventional integration scheme of the Predictor-Corrector (PC) type, the solution on the next time layer is obtained by solving the Corrector scheme equation with a few (usually one) iterative steps of Richardson's method, where the initial guess is taken from the Predictor scheme. Aiming to enhance the stability of such a scheme by performing a few optimal Krylov subspace iterations (e.g., k steps of GMRES, k ≤ 5) instead of Richardson's method steps, we get a family of Minimal Residual PC (MR-PC) time step integration schemes. The optimality (residual reduction) property of iterative schemes like GMRES leads to a scheme which is closest in the residual sense to the implicit Corrector scheme. Two particular MR-PC schemes are investigated here: Forward Euler Predictor / Backward Euler Corrector (of the first order) and Adams(2) Predictor / BDF2 Corrector (of the second order). Practical aspects of using MR-PC schemes, including an adaptive step size control strategy, will be discussed

### Quadratic eigenproblems are no problem

High-dimensional eigenproblems often arise in the solution of scientific problems involving stability or wave modeling. In this article we present results for a quadratic eigenproblem that we encountered in solving an acoustics problem, specifically in modeling the propagation of waves in a room in which one wall was constructed of sound-absorbing material. Efficient algorithms are known for the standard linear eigenproblem, Ax = λx, where A is a real or complex-valued square matrix of order n. Generalized eigenproblems of the form Ax = λBx, which occur in finite element formulations, are usually reduced to the standard problem, in a form such as B⁻¹Ax = λx. The reduction requires an expensive inversion operation for one of the matrices involved. Higher-order polynomial eigenproblems are also usually transformed into standard eigenproblems. We discuss here the second-degree (i.e., quadratic) eigenproblem (λ²C₂ + λC₁ + C₀)x = 0, in which the matrices Cᵢ are square matrices
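The usual transformation of the quadratic problem into a linear one can be sketched with a companion linearization (a standard construction, not necessarily the authors' approach; the matrix size and random data are arbitrary): with z = [x; λx], the quadratic problem (λ²C₂ + λC₁ + C₀)x = 0 becomes a generalized linear pencil Az = λBz of twice the dimension.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5
C2, C1, C0 = (rng.standard_normal((n, n)) for _ in range(3))

# First companion linearization: A z = lambda B z with z = [x; lambda*x].
# Row 1 encodes lambda*x = lambda*x; row 2 encodes the quadratic equation.
Z = np.zeros((n, n))
A = np.block([[Z, np.eye(n)],
              [-C0, -C1]])
B = np.block([[np.eye(n), Z],
              [Z, C2]])
evals, evecs = eig(A, B)

# Extract the eigenpair whose x-part (top half of z) is largest, and check
# that it satisfies the original quadratic eigenproblem.
i = int(np.argmax(np.linalg.norm(evecs[:n], axis=0)))
lam, x = evals[i], evecs[:n, i]
residual = np.linalg.norm((lam**2 * C2 + lam * C1 + C0) @ x)
```

The pencil is 2n-by-2n, which is the price of linearization; the abstract's point is that the further reduction to a standard problem would additionally require inverting B (i.e., C₂).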

### Optimal a priori error bounds for the Rayleigh-Ritz method

We derive error bounds for the Rayleigh-Ritz method for the approximation to extremal eigenpairs of a symmetric matrix. The bounds are expressed in terms of the eigenvalues of the matrix and the angle between the subspace and the eigenvector. We also present a sharp bound
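As a toy illustration of the setting (not the paper's bounds): for a symmetric matrix, the Ritz values extracted from any subspace interlace the eigenvalues, so in particular the largest Ritz value never exceeds the largest eigenvalue. The matrix, subspace dimension, and start vector below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 10
M = rng.standard_normal((n, n))
A = (M + M.T) / 2.0  # random symmetric test matrix

# Orthonormal basis V of a k-dimensional Krylov subspace, built with full
# reorthogonalization for numerical safety.
V = np.zeros((n, k))
w = rng.standard_normal(n)
w /= np.linalg.norm(w)
for j in range(k):
    V[:, j] = w
    w = A @ w
    w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
    w /= np.linalg.norm(w)

# Rayleigh-Ritz: eigenpairs of the small projected matrix V^T A V give
# Ritz values theta and Ritz vectors V @ S.
theta, S = np.linalg.eigh(V.T @ A @ V)
ritz_vectors = V @ S

eigs = np.linalg.eigh(A)[0]  # exact eigenvalues, for comparison
```

The gap between the extremal Ritz values and the extremal eigenvalues is exactly what a priori bounds of the kind studied in the paper control, in terms of the angle between the subspace and the target eigenvector.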

### Jacobi-Davidson type methods for generalized eigenproblems and polynomial eigenproblems : part I

In this paper we will show how the Jacobi-Davidson iterative method can be used to solve generalized eigenproblems. Similar ideas as for the standard eigenproblem are used, but the projections that are required to reduce the given problem to a small, manageable size need more attention. We show that by proper choices for the projection operators quadratic convergence can be achieved. The advantage of our approach is that none of the involved operators needs to be inverted. It turns out that similar projections can be used for the iterative approximation of selected eigenvalues and eigenvectors of polynomial eigenvalue equations. This approach has already been used with great success for the solution of quadratic eigenproblems associated with acoustic problems