A Residual Based Sparse Approximate Inverse Preconditioning Procedure for Large Sparse Linear Systems
The SPAI algorithm, a sparse approximate inverse preconditioning technique
for large sparse linear systems, proposed by Grote and Huckle [SIAM J. Sci.
Comput., 18 (1997), pp. 838--853], is based on F-norm minimization and
computes a sparse approximate inverse of a large sparse matrix
adaptively. However, SPAI can be costly in seeking the most profitable indices
at each loop and may be ineffective as a preconditioner. In this paper, we
propose a residual based sparse approximate inverse preconditioning procedure
(RSAI), which, unlike SPAI, is based on only the dominant rather than all
information on the current residual and augments sparsity patterns adaptively
during the loops. RSAI is less costly in seeking indices and is more effective
than SPAI at capturing a good approximate sparsity pattern of $A^{-1}$. To control
the sparsity of the preconditioner and reduce computational cost, we develop a
practical RSAI($tol$) algorithm that drops small nonzero entries adaptively during the
process. Numerical experiments are reported to demonstrate that RSAI($tol$) is
at least competitive with SPAI and can be considerably more efficient and
effective than SPAI. They also indicate that RSAI($tol$) is comparable to the
PSAI($tol$) algorithm proposed by one of the authors in 2009.
Comment: 18 pages, 1 figure
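The F-norm minimization underlying SPAI and RSAI computes each column of the approximate inverse independently, by solving a small least-squares problem restricted to a candidate sparsity pattern. The sketch below illustrates that single-column step on a tiny dense example; the function name and the diagonally dominant test matrix are illustrative, not from the paper.

```python
import numpy as np

def sai_column(A, k, pattern):
    """One column of a sparse approximate inverse M of A, restricted to
    the index set `pattern`: solve min ||A[:, pattern] @ m - e_k||_2.
    This is the per-column F-norm minimization step that SPAI-type
    procedures perform; augmenting `pattern` adaptively is what the
    SPAI/RSAI loops do."""
    n = A.shape[0]
    e_k = np.zeros(n)
    e_k[k] = 1.0
    A_sub = A[:, pattern]                       # restrict to the pattern
    m_sub, *_ = np.linalg.lstsq(A_sub, e_k, rcond=None)
    m_k = np.zeros(n)
    m_k[pattern] = m_sub
    return m_k, np.linalg.norm(A @ m_k - e_k)   # column residual norm

# Tiny diagonally dominant demo matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
m0, r0 = sai_column(A, 0, [0])            # pattern {0}
m0_big, r0_big = sai_column(A, 0, [0, 1]) # augmented pattern {0, 1}
# Enlarging the pattern can only decrease the residual norm.
assert r0_big <= r0
```

Augmenting the pattern enlarges the subspace of the least-squares problem, so the residual norm is non-increasing; the algorithmic question SPAI and RSAI answer differently is which indices to add and at what cost.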
A Transformation Approach that Makes SPAI, PSAI and RSAI Procedures Efficient for Large Double Irregular Nonsymmetric Sparse Linear Systems
A sparse matrix is called double irregular sparse if it has at least one
relatively dense column and row, and it is double regular sparse if all the
columns and rows of it are sparse. The sparse approximate inverse
preconditioning procedures SPAI, PSAI($tol$) and RSAI($tol$) are costly and
even impractical for constructing preconditioners for a large sparse nonsymmetric
linear system whose coefficient matrix is double irregular sparse, but
they are efficient for double regular sparse problems. Double irregular sparse
linear systems have a wide range of applications, and 4.4% of the nonsymmetric
matrices in the University of Florida sparse matrix collection are double irregular sparse. For
this class of problems, we propose a transformation approach, which consists of
four steps: (i) transform a given double irregular sparse problem into a small
number of double regular sparse ones that share the same coefficient matrix,
(ii) use SPAI, PSAI($tol$) and RSAI($tol$) to construct sparse
approximate inverses of that matrix, (iii) solve the preconditioned double
regular sparse linear systems by Krylov solvers, and (iv) recover an
approximate solution of the original problem with a prescribed accuracy from
those of the double regular sparse ones. A number of theoretical and practical
issues are considered on the transformation approach. Numerical experiments on
a number of real-world problems confirm the considerable superiority of the
transformation approach to the standard approach that preconditions the
original double irregular sparse problem by SPAI, PSAI($tol$) or RSAI($tol$)
and solves the resulting preconditioned system by Krylov solvers.
Comment: 20 pages, 4 figures
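The double regular/irregular classification above can be sketched by counting nonzeros per row and column. The density threshold used here (a nonzero count exceeding a fixed multiple of the average) is an illustrative criterion, not necessarily the paper's definition of "relatively dense".

```python
import numpy as np

def is_double_irregular(A, ratio=10.0):
    """Classify a matrix as double irregular sparse: it has at least one
    relatively dense column AND at least one relatively dense row.
    A column/row counts as relatively dense here when its nonzero count
    exceeds `ratio` times the average count (an illustrative test)."""
    nnz_cols = (A != 0).sum(axis=0)
    nnz_rows = (A != 0).sum(axis=1)
    dense_col = (nnz_cols > ratio * nnz_cols.mean()).any()
    dense_row = (nnz_rows > ratio * nnz_rows.mean()).any()
    return bool(dense_col and dense_row)

# Identity plus one fully dense row and column: double irregular.
n = 100
A = np.eye(n)
A[0, :] = 1.0   # relatively dense row
A[:, 0] = 1.0   # relatively dense column
flag_irregular = is_double_irregular(A)       # True
flag_regular = is_double_irregular(np.eye(n)) # False: all rows/cols sparse
```

Matrices for which only one of the two tests fires (say, a dense row but no dense column) are neither double regular nor double irregular in this terminology; the transformation approach targets the case where both fire.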
A power Schur complement Low-Rank correction preconditioner for general sparse linear systems
An effective power-based parallel preconditioner is proposed for general
large sparse linear systems. The preconditioner combines a power series
expansion method with some low-rank correction techniques, where the
Sherman-Morrison-Woodbury formula is utilized. A matrix splitting of the Schur
complement is proposed to expand the power series. The number of terms used in
the power series expansion can control the approximation accuracy of the
preconditioner to the inverse of the Schur complement. To construct the
preconditioner, graph partitioning is invoked to reorder the original
coefficient matrix, leading to a special block two-by-two matrix whose two
off-diagonal submatrices are block diagonal. The interface variables are
obtained by solving a linear system whose coefficient matrix is the Schur
complement, while the interior variables only require solving a block
diagonal linear system, which can be performed efficiently in parallel.
Various numerical examples are provided to illustrate the efficiency of the
proposed preconditioner.
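The power-series idea behind such preconditioners can be sketched as a truncated Neumann series: given a splitting $S = M - N$ with $M$ easy to invert, $S^{-1} = \sum_{k \ge 0} (M^{-1}N)^k M^{-1}$, and truncating after a few terms gives an approximation whose accuracy grows with the number of terms. The dense sketch below uses the diagonal splitting $M = \mathrm{diag}(S)$ as an illustrative choice; the paper's actual splitting of the Schur complement and its low-rank corrections are not reproduced here.

```python
import numpy as np

def power_series_inverse(S, num_terms):
    """Approximate S^{-1} by a truncated power (Neumann) series from the
    splitting S = M - N with M = diag(S) (illustrative choice):
        S^{-1} = sum_{k>=0} (M^{-1} N)^k M^{-1}.
    `num_terms` controls the approximation accuracy, mirroring the role
    of the number of power-series terms in the preconditioner."""
    M_inv = np.diag(1.0 / np.diag(S))
    N = np.diag(np.diag(S)) - S
    T = M_inv @ N                 # iteration matrix; need spectral radius < 1
    approx = M_inv.copy()
    term = M_inv.copy()
    for _ in range(1, num_terms):
        term = T @ term           # next term (M^{-1}N)^k M^{-1}
        approx += term
    return approx

# Strongly diagonally dominant S so the series converges.
rng = np.random.default_rng(0)
S = 5.0 * np.eye(6) + rng.uniform(-0.5, 0.5, (6, 6))
S_inv = np.linalg.inv(S)
errs = [np.linalg.norm(power_series_inverse(S, k) - S_inv) for k in (1, 3, 6)]
assert errs[0] > errs[1] > errs[2]  # more terms -> smaller error
```

In the preconditioner itself this inverse is never formed explicitly; the truncated series is applied to vectors, and the Sherman-Morrison-Woodbury formula handles the low-rank correction terms.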