Computing and deflating eigenvalues while solving multiple right hand side linear systems in Quantum Chromodynamics
We present a new algorithm that computes eigenvalues and eigenvectors of a
Hermitian positive definite matrix while solving a linear system of equations
with Conjugate Gradient (CG). Traditionally, all the CG iteration vectors could
be saved and recombined through the eigenvectors of the tridiagonal projection
matrix, which is theoretically equivalent to unrestarted Lanczos. Our algorithm
capitalizes on the iteration vectors produced by CG to update only a small
window of vectors that approximate the eigenvectors. While this window is
restarted in a locally optimal way, the CG algorithm for the linear system is
unaffected. Yet, in all our experiments, this small window converges to the
required eigenvectors at a rate identical to unrestarted Lanczos. After the
solution of the linear system, eigenvectors that have not accurately converged
can be improved in an incremental fashion by solving additional linear systems.
In this case, eigenvectors identified in earlier systems can be used to
deflate, and thus accelerate, the convergence of subsequent systems. We have
used this algorithm with excellent results in lattice QCD applications, where
hundreds of right hand sides may be needed. Specifically, about 70 eigenvectors
are obtained to full accuracy after solving 24 right hand sides. Deflating
these from the large number of subsequent right hand sides removes the dreaded
critical slowdown, where the conditioning of the matrix increases as the quark
mass reaches a critical value. Our experiments show almost a constant number of
iterations for our method, regardless of quark mass, and speedups of 8 over
original CG for light quark masses.
Comment: 22 pages, 26 eps figures
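The deflation step described above can be made concrete with a small "init-CG" experiment: given (approximate) eigenvectors W for the smallest eigenvalues, start CG from the Galerkin solution in span(W), so the initial residual is orthogonal to the troublesome eigenspace. This is only a minimal sketch of the deflation idea under illustrative assumptions (a synthetic test matrix and the exact eigenvectors), not the algorithm of the paper, which computes the eigenvectors on the fly while solving:

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxit=1000):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

# Synthetic Hermitian positive definite matrix with three small, isolated
# eigenvalues (a stand-in for the light-quark-mass regime described above).
rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[1e-4, 2e-4, 5e-4], np.linspace(1.0, 10.0, n - 3)])
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)

x_plain, it_plain = cg(A, b, np.zeros(n))

# Deflation ("init-CG"): start from the Galerkin solution in span(W), so the
# initial residual is orthogonal to the small eigenvectors being deflated.
W = Q[:, :3]                                   # here: the exact eigenvectors
x0 = W @ np.linalg.solve(W.T @ A @ W, W.T @ b)
x_defl, it_defl = cg(A, b, x0)

print(it_plain, it_defl)  # the deflated solve needs noticeably fewer iterations
```

Because the deflated run never has to resolve the small eigenvalues, its iteration count reflects the condition number of the remaining spectrum only, which is the mechanism behind the near-constant iteration counts reported above.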
Deflation for the off-diagonal block in symmetric saddle point systems
Deflation techniques are typically used to shift isolated clusters of small
eigenvalues in order to obtain a tighter distribution and a smaller condition
number. Such changes induce a positive effect in the convergence behavior of
Krylov subspace methods, which are among the most popular iterative solvers for
large sparse linear systems. We develop a deflation strategy for symmetric
saddle point matrices by taking advantage of their underlying block structure.
The vectors used for deflation come from an elliptic singular value
decomposition relying on the generalized Golub-Kahan bidiagonalization process.
The block targeted by deflation is the off-diagonal one since it features a
problematic singular value distribution for certain applications. One example
is the Stokes flow in elongated channels, where the off-diagonal block has
several small, isolated singular values, depending on the length of the
channel. Applying deflation to specific parts of the saddle point system is
important when using solvers such as CRAIG, which operates on individual blocks
rather than the whole system. The theory is developed by extending the existing
framework for deflating square matrices before applying a Krylov subspace
method like MINRES. Numerical experiments confirm the merits of our strategy
and lead to interesting questions about using approximate vectors for
deflation.
Comment: 26 pages, 12 figures
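A toy experiment shows why removing small, isolated singular values from the off-diagonal block helps: project the offending right singular vectors out of the block's domain and the effective condition number (over the non-deflated part) collapses. This is a hedged illustration on a synthetic matrix, not the elliptic singular value decomposition or Golub-Kahan machinery of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 120, 40
# Hypothetical off-diagonal block with two small, isolated singular values,
# mimicking the elongated-channel Stokes situation described above.
U, _ = np.linalg.qr(rng.standard_normal((m, k)))
V, _ = np.linalg.qr(rng.standard_normal((k, k)))
svals = np.concatenate([[1e-5, 1e-4], np.linspace(0.5, 2.0, k - 2)])
B = U @ np.diag(svals) @ V.T

# Deflate the smallest singular triplets by projecting the corresponding
# right singular vectors out of the domain of B.
Ur, s, Vt = np.linalg.svd(B, full_matrices=False)
small = s < 1e-2
P = np.eye(k) - Vt[small].T @ Vt[small]   # projector away from small directions
B_defl = B @ P

s_defl = np.linalg.svd(B_defl, compute_uv=False)
s_nonzero = s_defl[s_defl > 1e-10]
print(s.max() / s.min(), s_nonzero.max() / s_nonzero.min())
```

In a Krylov solver such as CRAIG that works with the individual blocks, iterating on the deflated operator while handling the deflated directions separately is what yields the improved convergence discussed above.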
Which are Better Conditioned Meshes: Adaptive, Uniform, Locally Refined, or Localised?
Adaptive, locally refined and locally adjusted meshes are preferred over
uniform meshes for capturing singular or localised solutions. Roughly speaking,
for a given number of degrees of freedom, the solution associated with an
adaptive, locally refined, or locally adjusted mesh is more accurate than the
solution given by a uniform mesh. In this work, we answer the question of which
meshes are better conditioned. We find that, for approximately the same number
of degrees of freedom (the same matrix size), it is easier to solve the system
of equations associated with an adaptive mesh.
Comment: 4 pages
Preconditioning for Sparse Linear Systems at the Dawn of the 21st Century: History, Current Developments, and Future Perspectives
Iterative methods are currently the solvers of choice for large sparse linear systems of equations. However, it is well known that the key factor for accelerating, or even allowing for, convergence is the preconditioner. The research on preconditioning techniques has characterized the last two decades. Nowadays, there are a number of different options to be considered when choosing the most appropriate preconditioner for the specific problem at hand. The present work provides an overview of the most popular algorithms available today, emphasizing the respective merits and limitations. The overview is restricted to algebraic preconditioners, that is, general-purpose algorithms requiring the knowledge of the system matrix only, independently of the specific problem it arises from. Along with the traditional distinction between incomplete factorizations and approximate inverses, the most recent developments are considered, including the scalable multigrid and parallel approaches which represent the current frontier of research. A separate section devoted to saddle-point problems, which arise in many different applications, closes the paper.
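Even the simplest algebraic preconditioner in the family surveyed here, diagonal (Jacobi) scaling, can change the picture entirely for a badly scaled matrix. The sketch below is a minimal illustration on an assumed synthetic 1D diffusion matrix with coefficients spanning six orders of magnitude; it only requires knowledge of the system matrix, in keeping with the algebraic setting of the survey:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=20000):
    """Preconditioned conjugate gradients; M_inv applies M^{-1} to a vector."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 1D diffusion stiffness matrix -(c u')' with coefficients spanning six
# orders of magnitude: SPD but badly scaled.
rng = np.random.default_rng(2)
n = 200
c = 10.0 ** rng.uniform(-3, 3, n + 1)
A = np.diag(c[:-1] + c[1:]) - np.diag(c[1:-1], 1) - np.diag(c[1:-1], -1)
b = rng.standard_normal(n)

_, it_none = pcg(A, b, lambda r: r)          # unpreconditioned CG
d = np.diag(A)
x_jac, it_jac = pcg(A, b, lambda r: r / d)   # Jacobi preconditioning

print(it_none, it_jac)
```

Incomplete factorizations and approximate inverses, the two traditional families discussed in the paper, follow the same interface: only the application of M^{-1} changes.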
Numerical methods for large-scale Lyapunov equations with symmetric banded data
The numerical solution of large-scale Lyapunov matrix equations with
symmetric banded data has so far received little attention in the rich
literature on Lyapunov equations. We aim to contribute to this open problem by
introducing two efficient solution methods, which respectively address the
cases of well-conditioned and ill-conditioned coefficient matrices. The
proposed approaches conveniently exploit the possibly hidden structure of the
solution matrix so as to deliver memory and computation saving approximate
solutions. Numerical experiments are reported to illustrate the potential of
the described methods.
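For orientation, the baseline that large-scale methods improve upon is the dense Kronecker-product solve of the Lyapunov equation A X + X Aᵀ = -B Bᵀ, which costs O(n⁶) and is only viable for small n. The sketch below uses an assumed tridiagonal (symmetric banded) A and a rank-one right-hand side; it also shows the rapid singular value decay of the solution, the "possibly hidden structure" that memory-saving methods exploit:

```python
import numpy as np

n = 30
# Symmetric banded (tridiagonal, negative definite) coefficient matrix.
A = -2.0 * np.eye(n) + 0.5 * (np.diag(np.ones(n - 1), 1)
                              + np.diag(np.ones(n - 1), -1))
B = np.zeros((n, 1)); B[0, 0] = 1.0   # low-rank right-hand side factor

# Dense "textbook" solve via the Kronecker identity
# (I (x) A + A (x) I) vec(X) = -vec(B B^T).
I = np.eye(n)
X = np.linalg.solve(np.kron(I, A) + np.kron(A, I),
                    -(B @ B.T).ravel()).reshape(n, n)

residual = np.linalg.norm(A @ X + X @ A.T + B @ B.T)
print(residual)

# The singular values of X decay rapidly: the solution is numerically low
# rank, which is what large-scale banded/low-rank solvers take advantage of.
sv = np.linalg.svd(X, compute_uv=False)
print(sv[10] / sv[0])
```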
Condition number analysis and preconditioning of the finite cell method
The (Isogeometric) Finite Cell Method - in which a domain is immersed in a
structured background mesh - suffers from conditioning problems when cells with
small volume fractions occur. In this contribution, we establish a rigorous
scaling relation between the condition number of (I)FCM system matrices and the
smallest cell volume fraction. Ill-conditioning stems either from basis
functions being small on cells with small volume fractions, or from basis
functions being nearly linearly dependent on such cells. Based on these two
sources of ill-conditioning, an algebraic preconditioning technique is
developed, which is referred to as Symmetric Incomplete Permuted Inverse
Cholesky (SIPIC). A detailed numerical investigation of the effectivity of the
SIPIC preconditioner in improving (I)FCM condition numbers and in improving the
convergence speed and accuracy of iterative solvers is presented for the
Poisson problem and for two- and three-dimensional problems in linear
elasticity, in which Nitsche's method is applied in either the normal or
tangential direction. The accuracy of the preconditioned iterative solver
enables mesh convergence studies of the finite cell method.
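The first source of ill-conditioning named above, basis functions scaled down by a tiny volume fraction, can be reproduced with a toy matrix. The sketch below is an assumed stand-in, not SIPIC: it only shows that symmetric diagonal scaling repairs the small-basis-function effect, whereas SIPIC additionally treats the near-linear-dependence source via a permuted incomplete Cholesky factorization:

```python
import numpy as np

# Toy immersed-FEM-like stiffness matrix: one basis function is supported
# almost entirely outside the domain (volume fraction eta << 1), so its row
# and column are scaled down, wrecking the condition number.
rng = np.random.default_rng(3)
n, eta = 50, 1e-8
M = rng.standard_normal((n, n))
K = M @ M.T + n * np.eye(n)              # well-conditioned SPD reference
S = np.ones(n); S[-1] = np.sqrt(eta)
K_cut = np.diag(S) @ K @ np.diag(S)      # small-volume-fraction scaling

d = np.sqrt(np.diag(K_cut))
K_prec = K_cut / np.outer(d, d)          # symmetric diagonal (Jacobi) scaling

print(np.linalg.cond(K_cut), np.linalg.cond(K_prec))
```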
Efficient p-multigrid spectral element model for water waves and marine offshore structures
In marine offshore engineering, cost-efficient simulation of unsteady water
waves and their nonlinear interaction with bodies are important to address a
broad range of engineering applications at increasing fidelity and scale. We
consider a fully nonlinear potential flow (FNPF) model discretized using a
Galerkin spectral element method to serve as a basis for handling both wave
propagation and wave-body interaction with high computational efficiency within
a single modelling approach. We design and propose an efficient O(n)-scalable
computational procedure based on geometric p-multigrid for solving the Laplace
problem in the numerical scheme. The fluid volume and the geometric features of
complex bodies are represented accurately using high-order polynomial basis
functions and unstructured meshes with curvilinear prism elements. The new
p-multigrid spectral element model can take advantage of the high-order
polynomial basis and thereby avoid generating a hierarchy of geometric meshes
with changing number of elements as required in geometric h-multigrid
approaches. We provide numerical benchmarks for the algorithmic and numerical
efficiency of the iterative geometric p-multigrid solver. Results of numerical
experiments are presented for wave propagation and for wave-body interaction in
an advanced case of focusing design waves interacting with an FPSO. Our study
shows that the use of iterative geometric p-multigrid methods for the Laplace
problem can significantly improve the run-time efficiency of FNPF simulators.
Comment: Submitted to an international journal for peer review
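The central idea, coarsening in polynomial degree rather than in mesh size, can be sketched in a hedged two-level example: the fine level is a quadratic (P2) finite element discretization of a 1D Poisson problem, the coarse level is the linear (P1) space on the same mesh (no mesh hierarchy is generated), with damped Jacobi smoothing. The PDE, element type, and smoother are illustrative assumptions; the solver described above targets the 3D Laplace problem with high-order curvilinear prism elements:

```python
import numpy as np

def p2_stiffness(m):
    """1D Poisson stiffness for quadratic (P2) FEM on m uniform elements of
    [0, 1], homogeneous Dirichlet BCs; unknowns are the 2m-1 interior nodes."""
    h = 1.0 / m
    Ke = (1.0 / (3.0 * h)) * np.array([[7.0, -8.0, 1.0],
                                       [-8.0, 16.0, -8.0],
                                       [1.0, -8.0, 7.0]])
    N = 2 * m + 1
    A = np.zeros((N, N))
    for e in range(m):
        idx = [2 * e, 2 * e + 1, 2 * e + 2]   # left vertex, midpoint, right vertex
        A[np.ix_(idx, idx)] += Ke
    return A[1:-1, 1:-1]

def prolongation(m):
    """Interpolate interior P1 (vertex) coefficients to the interior P2 nodes."""
    P = np.zeros((2 * m - 1, m - 1))
    for i in range(1, m):                     # vertex values carry over directly
        P[2 * i - 1, i - 1] = 1.0
    for e in range(m):                        # midpoints average adjacent vertices
        if e >= 1:
            P[2 * e, e - 1] += 0.5
        if e + 1 <= m - 1:
            P[2 * e, e] += 0.5
    return P

def p_two_level_cycle(A, Ac, P, b, x, nsmooth=2, omega=2.0 / 3.0):
    """One p-multigrid cycle: damped-Jacobi smoothing, exact low-order solve."""
    d = np.diag(A)
    for _ in range(nsmooth):
        x = x + omega * (b - A @ x) / d       # pre-smoothing
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, P.T @ r)  # coarse (P1) Galerkin correction
    for _ in range(nsmooth):
        x = x + omega * (b - A @ x) / d       # post-smoothing
    return x

m = 64
A = p2_stiffness(m)
P = prolongation(m)
Ac = P.T @ A @ P                              # Galerkin low-order operator
b = np.ones(2 * m - 1)
x = np.zeros_like(b)
for cycle in range(200):
    x = p_two_level_cycle(A, Ac, P, b, x)
    if np.linalg.norm(b - A @ x) < 1e-10 * np.linalg.norm(b):
        break
print(cycle + 1)   # converges in a small, mesh-independent number of cycles
```

Because the coarse correction is built from the same mesh at lower polynomial degree, no hierarchy of geometric meshes is needed, which is the advantage over h-multigrid stated in the abstract.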