Recycling BiCGSTAB with an Application to Parametric Model Order Reduction
Krylov subspace recycling is a process for accelerating the convergence of
sequences of linear systems. Based on this technique, the recycling BiCG
algorithm was recently developed. Here, we generalize and extend this
recycling theory to BiCGSTAB. Recycling BiCG focuses on efficiently solving
sequences of dual linear systems, while the focus here is on efficiently
solving sequences of single linear systems (assuming non-symmetric matrices for
both recycling BiCG and recycling BiCGSTAB).
As compared with other methods for solving sequences of single linear systems
with non-symmetric matrices (e.g., recycling variants of GMRES), BiCG based
recycling algorithms, like recycling BiCGSTAB, have the advantage of a
short-term recurrence; hence, they do not suffer from growing storage
requirements and are also cheaper with respect to orthogonalization.
We modify the BiCGSTAB algorithm to use a recycle space, which is built from
left and right approximate invariant subspaces. Using our algorithm for a
parametric model order reduction example gives good results. We show about 40%
savings in the number of matrix-vector products and about 35% savings in
runtime. (Comment: 18 pages, 5 figures. Extended version of Max Planck
Institute report MPIMD/13-21.)
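The core idea behind a recycle space built from approximate invariant subspaces can be sketched as a Galerkin projection that supplies the initial guess before the Krylov iteration starts. The following is a minimal NumPy illustration, not the paper's algorithm: the matrix `A` (symmetric positive definite here for simplicity, whereas the paper targets non-symmetric systems), the subspace `U`, and all sizes are invented for the example.

```python
import numpy as np

def recycled_initial_guess(A, b, U):
    """Galerkin projection onto the recycle space U: solve the small
    system (U^T A U) y = U^T b and expand x0 = U y."""
    AU = A @ U
    y = np.linalg.solve(U.T @ AU, U.T @ b)
    return U @ y

# Toy setup: U spans the eigenvectors of the smallest eigenvalues,
# which are the ones that typically slow down Krylov convergence.
rng = np.random.default_rng(0)
n, k = 50, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(0.01, 10.0, n)) @ Q.T   # SPD for simplicity
b = rng.standard_normal(n)
U = Q[:, :k]                                        # "approximate invariant subspace"

x0 = recycled_initial_guess(A, b, U)
r0 = b - A @ x0
print(np.linalg.norm(r0) < np.linalg.norm(b))       # True: deflated residual is smaller
```

In a real recycling solver the projected residual `r0` is then handed to the short-recurrence iteration, and `U` is updated from the Krylov data of the previous system in the sequence.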
Augmented Block-Arnoldi Recycling CFD Solvers
One of the limitations of recycled GCRO methods is the large amount of
computation required to orthogonalize the basis vectors of the newly generated
Krylov subspace for the approximate solution when combined with those of the
recycle subspace. Recent advancements in low synchronization Gram-Schmidt and
generalized minimal residual algorithms, Swirydowicz et
al.~\cite{2020-swirydowicz-nlawa}, Carson et al. \cite{Carson2022}, and Lund
\cite{Lund2022}, can be incorporated, thereby mitigating the loss of
orthogonality of the basis vectors. An augmented Arnoldi formulation of
recycling leads to a matrix decomposition and the associated algorithm can also
be viewed as a {\it block} Krylov method. Generalizations of both classical and
modified block Gram-Schmidt algorithms have been proposed, Carson et
al.~\cite{Carson2022}. Here, an inverse compact modified Gram-Schmidt
algorithm is applied for the inter-block orthogonalization scheme with a block
lower triangular correction matrix at each iteration. When combined with a
weighted (oblique inner product) projection step, the inverse compact
scheme leads to significant reductions (over $10\times$ in certain cases) in
the number of solver iterations per linear system. The weight is also
interpreted in terms of the angle between restart residuals in LGMRES, as
defined by Baker et al.\cite{Baker2005}. In many cases, the recycle subspace
eigen-spectrum can substitute for a preconditioner.
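The inter-/intra-block structure of block Gram-Schmidt referred to above can be illustrated with a small sketch: project the new block against all previous blocks, then orthonormalize it internally. This is plain classical block Gram-Schmidt with a second pass (CGS2) in NumPy, invented for illustration; it is not the inverse compact or low-synchronization variant from the cited works.

```python
import numpy as np

def block_cgs2(Q_blocks, W):
    """One step of block classical Gram-Schmidt with reorthogonalization
    (CGS2): orthogonalize the new block W against the existing orthonormal
    blocks, then orthonormalize W internally with a thin QR."""
    for _ in range(2):                      # second pass restores orthogonality
        for Q in Q_blocks:
            W = W - Q @ (Q.T @ W)           # inter-block projection
    Qnew, _ = np.linalg.qr(W)               # intra-block orthonormalization
    return Qnew

rng = np.random.default_rng(1)
n, bs = 40, 4                               # problem size and block size
Q1, _ = np.linalg.qr(rng.standard_normal((n, bs)))
W = rng.standard_normal((n, bs))
Q2 = block_cgs2([Q1], W)
print(np.linalg.norm(Q1.T @ Q2))            # ~machine epsilon: blocks are orthogonal
```

The low-synchronization methods cited above reduce the number of global reductions per block from the several required here to one or two, which is what matters at scale.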
Recycling Krylov Subspaces for Efficient Partitioned Solution of Aerostructural Adjoint Systems
Robust and efficient solvers for coupled-adjoint linear systems are crucial
to successful aerostructural optimization. Monolithic and partitioned
strategies can be applied. The monolithic approach is expected to offer better
robustness and efficiency for strong fluid-structure interactions. However, it
comes at a high implementation cost, and its convergence may depend on
appropriate scaling and initialization strategies. On the other hand, the modularity of the
partitioned method enables a straightforward implementation while its
convergence may require relaxation. In addition, a partitioned solver requires
more iterations to reach the same level of convergence as the monolithic one.
The objective of this paper is to accelerate the fluid-structure
coupled-adjoint partitioned solver by considering techniques borrowed from
approximate invariant subspace recycling strategies adapted to sequences of
linear systems with varying right-hand sides. Indeed, in a partitioned
framework, the structural source term attached to the fluid block of equations
affects the right-hand side, which conveniently converges quickly to a
constant value. We also consider deflation of approximate eigenvectors in
conjunction with advanced inner-outer Krylov solvers for the fluid block
equations. We demonstrate the benefit of these techniques by computing the
coupled derivatives of an aeroelastic configuration of the ONERA-M6 fixed wing
in transonic flow. For this exercise the fluid grid was coupled to a structural
model specifically designed to exhibit a high flexibility. All computations are
performed using RANS flow modeling and a fully linearized one-equation
Spalart-Allmaras turbulence model. Numerical simulations show up to 39%
reduction in matrix-vector products for GCRO-DR and up to 19% for the nested
FGCRO-DR solver. (Comment: 42 pages, 21 figures.)
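The deflation of approximate eigenvectors mentioned above can be sketched with a small projector: components of the residual along the (approximate) invariant subspace are removed, so the Krylov solver only has to resolve the remaining spectrum. A minimal NumPy sketch follows; names, sizes, and the use of exact eigenvectors are invented for the example, and a production GCRO-DR solver applies this implicitly rather than as an explicit projector.

```python
import numpy as np

def deflation_projector(A, W):
    """Oblique projector r -> r - A W (W^T A W)^{-1} W^T r, which makes the
    projected residual orthogonal to the deflation subspace W."""
    AW = A @ W
    E = np.linalg.inv(W.T @ AW)             # small k-by-k Galerkin matrix
    return lambda r: r - AW @ (E @ (W.T @ r))

rng = np.random.default_rng(2)
n, k = 30, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1e-3, 1.0, n)) @ Q.T
W = Q[:, :k]                # exact eigenvectors here; only approximate in practice
P = deflation_projector(A, W)
r = rng.standard_normal(n)
Pr = P(r)
print(np.linalg.norm(W.T @ Pr) < 1e-8)      # True: projected residual is W-orthogonal
```

Choosing `W` from the eigenvectors associated with the smallest-magnitude eigenvalues is what removes the slowly converging components across the sequence of right-hand sides.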
Iterative Solution Methods for Reduced-Order Models of Parameterized Partial Differential Equations
This dissertation considers efficient computational algorithms for solving parameterized discrete partial differential equations (PDEs) using techniques of reduced-order modeling. Parameterized equations of this type arise in numerous mathematical models. In some settings, e.g. sensitivity analysis, design optimization, and uncertainty quantification, it is necessary to compute discrete solutions of the PDEs at many parameter values. Accuracy considerations often lead to algebraic systems with many unknowns whose solution via traditional methods can be expensive. Reduced-order models use a reduced space to approximate the parameterized PDE, where the reduced space is of a significantly smaller dimension than that of the discrete PDE. Solving an approximation of the problem on the reduced space leads to reduction in cost, often with little loss of accuracy.
In the reduced basis method, an offline step finds an approximation of the solution space and an online step utilizes this approximation to solve a smaller reduced problem, which provides an accurate estimate of the solution. Traditionally, the reduced problem is solved using direct methods. However, the size of the reduced system needed to produce solutions of a given accuracy depends on the characteristics of the problem, and it may happen that the size is significantly smaller than that of the original discrete problem but large enough to make direct solution costly. In this scenario, it is more effective to use iterative methods to solve the reduced problem. To demonstrate this we construct preconditioners for the reduced-order models or construct well-conditioned reduced-order models. We demonstrate that by using iterative methods, reduced-order models of larger dimension can be effective.
There are several reasons that iterative methods are well suited to reduced-order modeling. In particular, we take advantage of the similarity of the realizations of parameterized systems, either by reusing preconditioners or by recycling Krylov vectors. These two approaches are shown to be effective when the underlying PDE is linear. For nonlinear problems, we utilize the discrete empirical interpolation method (DEIM) to cheaply evaluate the nonlinear components of the reduced model. The method identifies points in the PDE discretization necessary for representing the nonlinear component of the reduced model accurately. This approach incurs online computational costs that are independent of the spatial dimension of the discretized PDE. When this method is used to assemble the reduced model cheaply, iterative methods are shown to further improve efficiency in the online step.
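The DEIM point selection described above is a short greedy loop: each new interpolation index is placed where the current interpolant's residual is largest. The sketch below is a standard formulation of that greedy step in NumPy; the basis `V` and all sizes are invented for the example, and in practice `V` would come from a POD of nonlinear-term snapshots.

```python
import numpy as np

def deim_points(V):
    """Greedy DEIM index selection: given a basis V (columns = modes of the
    nonlinear snapshots), pick one interpolation index per mode at the
    location of the largest residual of the current interpolant."""
    n, m = V.shape
    p = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, m):
        # Interpolate column j on the indices chosen so far ...
        c = np.linalg.solve(V[p, :j], V[p, j])
        r = V[:, j] - V[:, :j] @ c          # ... and take its residual,
        p.append(int(np.argmax(np.abs(r))))  # which vanishes at chosen indices
    return np.array(p)

rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.standard_normal((100, 6)))  # stand-in for POD modes
p = deim_points(V)
print(len(set(p.tolist())) == 6)            # True: indices are distinct
```

Because the residual is exactly zero at previously chosen indices, each step selects a new point, and evaluating the nonlinearity only at these few indices is what makes the online cost independent of the full spatial dimension.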
Finally, when the traditional offline/online approach is ineffective for a given problem, reduced-order models can be used to accelerate the solution of the full model. We follow the solution model of Krylov subspace recycling methods for sequences of linear systems where the coefficient matrices vary. A Krylov subspace recycling method contains a reduced-order model and an iterative method that searches the space orthogonal to the reduced space. We once again use iterative solution techniques for the solution of the reduced models that arise in this context. In this case, the iterative methods converge quickly when the reduced basis is constructed to be naturally well conditioned.