
    Recycling BiCGSTAB with an Application to Parametric Model Order Reduction

    Krylov subspace recycling is a technique for accelerating the convergence of sequences of linear systems. Based on this technique, the recycling BiCG algorithm was developed recently. Here, we generalize and extend this recycling theory to BiCGSTAB. Recycling BiCG focuses on efficiently solving sequences of dual linear systems, while the focus here is on efficiently solving sequences of single linear systems (assuming non-symmetric matrices for both recycling BiCG and recycling BiCGSTAB). Compared with other methods for solving sequences of single linear systems with non-symmetric matrices (e.g., recycling variants of GMRES), BiCG-based recycling algorithms such as recycling BiCGSTAB have the advantage of a short-term recurrence; hence, they do not suffer from storage issues and are also cheaper with respect to orthogonalizations. We modify the BiCGSTAB algorithm to use a recycle space, which is built from left and right approximate invariant subspaces. Applying our algorithm to a parametric model order reduction example gives good results: about 40% savings in the number of matrix-vector products and about 35% savings in runtime.
    Comment: 18 pages, 5 figures. Extended version of Max Planck Institute report MPIMD/13-21.
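
    As a rough illustration of the recycling idea (not the paper's recycling BiCGSTAB algorithm), the sketch below uses an approximate invariant subspace U carried over from an earlier system in the sequence to form a Galerkin-corrected initial guess before a standard BiCGSTAB solve. The matrix, right-hand side, and the stand-in recycle space U are all made up for illustration.

```python
# Hedged sketch: warm-start a standard BiCGSTAB solve with a Galerkin
# correction over a recycled subspace U.  U is a random stand-in here; in a
# real recycling method it would approximate an invariant subspace of the
# previous matrix in the sequence.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Made-up non-symmetric, sparse test matrix and right-hand side.
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Stand-in for a recycle space built from a previous solve.
U, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, 5)))

# Galerkin correction over the recycle space: x0 = U (U^T A U)^{-1} U^T b.
AU = A @ U
x0 = U @ np.linalg.solve(U.T @ AU, U.T @ b)

x, info = spla.bicgstab(A, b, x0=x0)
print("converged" if info == 0 else f"bicgstab returned info = {info}")
```

    In the paper's method the recycle space is built from left and right approximate invariant subspaces and is used inside the BiCGSTAB iteration itself, not only for the initial guess.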

    Restarted Hessenberg method for solving shifted nonsymmetric linear systems

    It is known that the restarted full orthogonalization method (FOM) outperforms the restarted generalized minimum residual (GMRES) method in several circumstances for solving shifted linear systems when the shifts are handled simultaneously, and many variants of both have been proposed to enhance their performance. We show that another restarted method, the restarted Hessenberg method [M. Heyouni, Méthode de Hessenberg Généralisée et Applications, Ph.D. Thesis, Université des Sciences et Technologies de Lille, France, 1996], based on the Hessenberg procedure, can be employed effectively and can accelerate the convergence rate with respect to the number of restarts. Theoretical analysis shows that the residuals of the restarted shifted Hessenberg method remain collinear with each other. Extensive numerical experiments, including recent popular applications to time-fractional differential equations, exhibit cases where the proposed algorithm needs considerably less CPU time to converge than the earlier established restarted shifted FOM, the weighted restarted shifted FOM, and other popular shifted iterative solvers based on short-term vector recurrences.
    Comment: 19 pages, 7 tables. Some corrections updating the references.
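
    The structural fact behind shifted Krylov solvers of this kind is shift invariance, K_m(A, b) = K_m(A + σI, b): one basis serves all shifts, and the FOM-type residuals for different shifts are collinear with the last basis vector. The sketch below illustrates this with a standard Arnoldi basis rather than the cheaper Hessenberg process used in the paper; the test matrix and shifts are made up.

```python
# Hedged sketch of shift invariance: build ONE Krylov basis and reuse it for
# several shifted systems (A + sigma*I) x = b via FOM-type projected solves.
# This is an illustration with the Arnoldi process, not the restarted
# Hessenberg method proposed in the paper.
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: returns V (n x (m+1)) and Hbar ((m+1) x m)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, m = 200, 40
A = np.diag(np.linspace(1.0, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

V, Hbar = arnoldi(A, b, m)
rhs = np.zeros(m)
rhs[0] = np.linalg.norm(b)          # beta * e_1

for sigma in [0.0, 0.5, 2.0]:
    # Same basis for every shift: only the small projected matrix changes.
    y = np.linalg.solve(Hbar[:m, :] + sigma * np.eye(m), rhs)
    x = V[:, :m] @ y
    res = np.linalg.norm(b - (A + sigma * np.eye(n)) @ x)
    print(f"sigma = {sigma:3.1f}   residual norm = {res:.2e}")
```

    Each shifted residual here equals -h_{m+1,m} (e_m^T y) v_{m+1}, i.e. a scalar multiple of the same vector, which is the collinearity property the abstract refers to.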

    Sparse grid based Chebyshev HOPGD for parameterized linear systems

    We consider approximating solutions to parameterized linear systems of the form $A(\mu_1,\mu_2) x(\mu_1,\mu_2) = b$, where $(\mu_1, \mu_2) \in \mathbb{R}^2$. Here the matrix $A(\mu_1,\mu_2) \in \mathbb{R}^{n \times n}$ is nonsingular, large, and sparse and depends nonlinearly on the parameters $\mu_1$ and $\mu_2$. Specifically, the system arises from a discretization of a partial differential equation, and $x(\mu_1,\mu_2) \in \mathbb{R}^n$, $b \in \mathbb{R}^n$. This work combines companion linearization with the preconditioned bi-conjugate gradient (BiCG) Krylov subspace method and a decomposition of a tensor matrix of precomputed solutions, called snapshots. As a result, a reduced order model of $x(\mu_1,\mu_2)$ is constructed, and this model can be evaluated cheaply for many values of the parameters. The decomposition is performed efficiently using the sparse grid based higher-order proper generalized decomposition (HOPGD), and the snapshots are generated as one-variable functions of $\mu_1$ or of $\mu_2$. Tensor decompositions performed on a set of snapshots can fail to reach a certain level of accuracy, and it is not possible to know a priori if the decomposition will be successful. This method offers a way to generate a new set of solutions on the same parameter space at little additional cost. An interpolation of the model is used to produce approximations on the entire parameter space, and this method can be used to solve a parameter estimation problem. Numerical examples of a parameterized Helmholtz equation show the competitiveness of our approach. The simulations are reproducible, and the software is available online.
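
    A hedged sketch of the offline/online workflow described above: precompute snapshots on a coarse parameter grid, compress them into a reduced basis, and then evaluate the reduced model by interpolation with no further linear solves. For simplicity it uses a plain SVD of the snapshot tensor and linear interpolation of the reduced coefficients instead of the sparse grid based HOPGD, and the parameter-dependent matrix A(μ1, μ2), the grid, and the truncation tolerance are invented for illustration.

```python
# Hedged sketch: snapshots -> SVD-based reduced basis -> interpolated reduced
# coefficients.  Stand-in for the HOPGD-based model in the paper; A(mu1, mu2)
# below is a made-up parameter dependence, not the Helmholtz example.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

n = 300
K = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
D = np.diag(np.linspace(0.0, 1.0, n))
b = np.ones(n)

def solve(mu1, mu2):
    # Hypothetical nonlinear dependence: A(mu1, mu2) = K + mu1^2 I + sin(mu2) D.
    return np.linalg.solve(K + mu1**2 * np.eye(n) + np.sin(mu2) * D, b)

# Offline stage: snapshots on a coarse 8 x 8 parameter grid.
mu1s = np.linspace(0.1, 1.0, 8)
mu2s = np.linspace(0.1, 1.0, 8)
S = np.array([[solve(m1, m2) for m2 in mu2s] for m1 in mu1s])   # shape (8, 8, n)

# Reduced basis from the unfolded snapshot tensor, truncated by singular values.
U, s, _ = np.linalg.svd(S.reshape(-1, n).T, full_matrices=False)
r = int(np.sum(s / s[0] > 1e-8))
basis = U[:, :r]                 # n x r
coeffs = S @ basis               # (8, 8, r) reduced coefficients

# Online stage: interpolate the coefficients; no linear solve is needed.
interp = RegularGridInterpolator((mu1s, mu2s), coeffs)
mu = np.array([[0.37, 0.62]])
x_rom = basis @ interp(mu)[0]
x_ref = solve(0.37, 0.62)
print("relative error:", np.linalg.norm(x_rom - x_ref) / np.linalg.norm(x_ref))
```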