A biconjugate gradient type algorithm on massively parallel architectures
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG-type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation of QMR based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm is presented. The main feature of the s-step Lanczos algorithm is that, in general, all inner products but one can be computed in parallel at the end of each block, unlike the standard Lanczos process, where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
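The batching idea can be illustrated with a minimal numpy sketch (this is not the paper's algorithm; the matrix, block size, and monomial basis below are illustrative assumptions): build an s-step Krylov block with no inner products at all, then obtain every inner product of the block in a single matrix-matrix product, i.e. one global reduction on a parallel machine.

```python
import numpy as np

# Illustrative sketch of s-step inner-product batching (made-up data).
rng = np.random.default_rng(0)
n, s = 50, 4
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

# Build the s-step (monomial) Krylov block V = [v, Av, ..., A^{s-1} v]
# using only matrix-vector products -- no inner products yet.
V = np.empty((n, s))
V[:, 0] = v
for j in range(1, s):
    V[:, j] = A @ V[:, j - 1]

# Now compute every needed inner product at the end of the block in one
# batched operation: a single Gram-matrix product (one global reduction).
G = V.T @ V

# Sequential reference: one inner product at a time, as in the standard
# Lanczos process, where each step waits on the previous inner product.
G_seq = np.array([[V[:, i] @ V[:, j] for j in range(s)] for i in range(s)])
assert np.allclose(G, G_seq)
```

On a SIMD machine the single `V.T @ V` reduction replaces s(s+1)/2 separate synchronizing reductions, which is the source of the parallel advantage described above.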
Dual Polar Graphs, a nil-DAHA of Rank One, and Non-Symmetric Dual q-Krawtchouk Polynomials
We consider a dual polar graph whose vertices are the maximal isotropic subspaces of a finite-dimensional vector space over a finite field equipped with a non-degenerate form (alternating, quadratic, or Hermitian), the Witt index of the form being equal to the diameter of the graph. From a pair consisting of a vertex and a maximal clique containing it, we construct a finite-dimensional irreducible module for a nil-DAHA of rank one, and establish its connection to the generalized Terwilliger algebra with respect to that pair. Using this module, we then define the non-symmetric dual q-Krawtchouk polynomials and derive their recurrence and orthogonality relations from a combinatorial point of view. We note that our results do not depend essentially on the particular choice of the pair, and that all the formulas are described in terms of the diameter, q, and one other scalar which we assign based on the type of the form.
Comment: an extended abstract of this work appeared in the proceedings of FPSAC 201
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property; the resulting algorithm has several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems: solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
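The equivalent-real-system approach mentioned at the end can be sketched as follows (the matrix and right-hand side are made-up data, and a direct solve stands in for an iterative method): writing A = A1 + iA2, x = x1 + ix2, b = b1 + ib2 and separating real and imaginary parts of Ax = b gives a real block system of twice the size.

```python
import numpy as np

# Sketch of the real equivalent formulation of a complex system Ax = b
# (illustrative data; a direct solve replaces the iterative solver).
rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M + M.T            # complex symmetric: A = A^T, but not Hermitian
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A1, A2 = A.real, A.imag
b1, b2 = b.real, b.imag

# Real 2n x 2n block system:  [A1 -A2] [x1]   [b1]
#                             [A2  A1] [x2] = [b2]
K = np.block([[A1, -A2], [A2, A1]])
y = np.linalg.solve(K, np.concatenate([b1, b2]))
x = y[:n] + 1j * y[n:]

assert np.allclose(A @ x, b)
```

Note that K is not symmetric even when A is complex symmetric, which is one reason the abstract treats this "obvious approach" as worth only some remarks: the doubled real system loses the structure that the proposed Lanczos variant exploits.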
Optimal Chebyshev polynomials on ellipses in the complex plane
The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases
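A hedged sketch of the classical construction behind such approximation problems, for the special case of an ellipse centered on the real axis (the parameters c, d, a, n below are made-up, and the paper's contribution concerns the general complex case, which this sketch does not reproduce): the scaled and translated Chebyshev polynomial p_n(z) = T_n((c - z)/d) / T_n(c/d) satisfies the residual-polynomial constraint p_n(0) = 1 while remaining small on the ellipse with center c and foci c ± d.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Illustrative parameters: ellipse center c, focal half-distance d,
# polynomial degree n; the ellipse is well separated from the origin.
c, d, n = 4.0, 1.5, 8

def T(k, z):
    """Chebyshev polynomial T_k evaluated at (possibly complex) z."""
    e = np.zeros(k + 1)
    e[k] = 1.0
    return chebval(z, e)

def p(z):
    # Scaled/translated Chebyshev residual polynomial.
    return T(n, (c - z) / d) / T(n, c / d)

# Constraint for a residual polynomial: p(0) = 1.
assert np.isclose(p(0.0), 1.0)

# p should be small on the ellipse enclosing the "spectrum": boundary
# z = c + a*cos(t) + i*b*sin(t), semi-axes a and b, foci at c +/- d.
a = 2.0
b_ax = np.sqrt(a**2 - d**2)
t = np.linspace(0, 2 * np.pi, 400)
z = c + a * np.cos(t) + 1j * b_ax * np.sin(t)
max_on_ellipse = np.abs(p(z)).max()
assert max_on_ellipse < 1e-2
```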
A "missing" family of classical orthogonal polynomials
We study a family of "classical" orthogonal polynomials which satisfy (apart from a 3-term recurrence relation) an eigenvalue problem with a differential operator of Dunkl type. These polynomials can be obtained from the little q-Jacobi polynomials in the limit q → -1. We also show that these polynomials provide a nontrivial realization of the Askey-Wilson algebra for q = -1.
Comment: 20 pages
Linear iterative solvers for implicit ODE methods
The numerical solution of stiff initial value problems, which leads to large systems of mildly nonlinear equations, is considered. For many problems arising in engineering and science, a solution is possible only with methods based on iterative linear equation solvers. A common approach to solving the nonlinear equations is to start from an approximate solution obtained with an explicit method. The error of this approximation is examined to determine how it is distributed among the stiff and non-stiff components, which bears on the choice of an iterative method. The conclusion is that the error is (roughly) uniformly distributed, a fact that suggests the Chebyshev method (and the accompanying Manteuffel adaptive parameter algorithm). This method is described, with additional comments on Richardson's method and its advantages for large problems. Richardson's method and the Chebyshev method with the Manteuffel algorithm are applied to the solution of the nonlinear equations by Newton's method.
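The Chebyshev method referred to above can be sketched as follows, assuming the eigenvalues of the coefficient matrix lie in a known real interval (the matrix, interval, and iteration count are made-up; the Manteuffel algorithm, which adapts the enclosing ellipse automatically, is not reproduced here):

```python
import numpy as np

# Illustrative symmetric positive definite test system with spectrum
# inside a known interval [lmin, lmax].
rng = np.random.default_rng(2)
n = 30
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(1.0, 10.0, n)
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)
lmin, lmax = 1.0, 10.0

# Chebyshev iteration on [lmin, lmax]. Richardson's method is the
# simpler one-parameter relative: x <- x + alpha * (b - A @ x).
theta = (lmax + lmin) / 2      # interval center
delta = (lmax - lmin) / 2      # interval half-width
sigma = theta / delta
rho = 1.0 / sigma

x = np.zeros(n)
r = b - A @ x
d = r / theta
for _ in range(60):
    x = x + d
    r = r - A @ d
    rho_new = 1.0 / (2.0 * sigma - rho)
    # Three-term Chebyshev update of the direction vector.
    d = rho_new * rho * d + (2.0 * rho_new / delta) * r
    rho = rho_new

assert np.linalg.norm(b - A @ x) < 1e-8 * np.linalg.norm(b)
```

Unlike the conjugate gradient method, this iteration needs no inner products once the interval is fixed, which is the property that makes it attractive for the large problems discussed above; the price is that a good enclosing interval (or ellipse) must be supplied, which is exactly what the Manteuffel algorithm estimates adaptively.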