62 research outputs found

    Exponential integrators: tensor structured problems and applications

    The solution of stiff systems of Ordinary Differential Equations (ODEs), which typically arise after spatial discretization of many important evolutionary Partial Differential Equations (PDEs), is a topic of wide interest in numerical analysis. A prominent way to numerically integrate such systems is to use exponential integrators. In general, these schemes do not require the solution of (non)linear systems, but rather the action of the matrix exponential and of some specific exponential-like functions (known in the literature as phi-functions). In this PhD thesis we present efficient tensor-based tools to approximate such actions, from both a theoretical and a practical point of view, when the problem has an underlying Kronecker sum structure. Moreover, we investigate the application of exponential integrators to compute numerical solutions of important equations in various fields, such as plasma physics, mean-field optimal control, and computational chemistry. In each case, we provide several numerical examples and perform extensive simulations, exploiting modern hardware architectures such as multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs) where appropriate. Overall, the results show the effectiveness and superiority of the proposed approaches.
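
    The Kronecker sum structure mentioned above is what makes these matrix-exponential actions tractable: if $M = A \oplus B = A \otimes I + I \otimes B$, the two terms commute, so $\exp(tM) = \exp(tA) \otimes \exp(tB)$, and its action on a vector needs only two small matrix exponentials and a reshape. A minimal sketch in Python of this 2D case (the function and variable names are illustrative, not the thesis's actual tools; phi-functions would need analogous but more involved handling):

        import numpy as np
        from scipy.linalg import expm

        def expm_kronsum_action(A, B, v, t=1.0):
            """Apply exp(t * (A (+) B)) to v, where A (+) B = kron(A, I) + kron(I, B).

            kron(A, I) and kron(I, B) commute, so the large exponential factors
            as kron(exp(t*A), exp(t*B)); its action on v needs only the small
            exponentials exp(t*A), exp(t*B) and one reshape of v.
            """
            m, n = A.shape[0], B.shape[0]
            V = v.reshape(m, n)  # row-major: kron(A, I) acts on rows, kron(I, B) on columns
            return (expm(t * A) @ V @ expm(t * B).T).reshape(-1)

        # Sanity check against forming the full Kronecker sum explicitly.
        rng = np.random.default_rng(0)
        m, n = 6, 5
        A, B = rng.standard_normal((m, m)), rng.standard_normal((n, n))
        v = rng.standard_normal(m * n)
        M = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)
        assert np.allclose(expm(M) @ v, expm_kronsum_action(A, B, v))

    The full problem thus never requires forming or exponentiating the large matrix $M$, which is the source of the storage and runtime savings the abstract reports.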

    Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require $\mathcal{O}(p^{2d})$ storage and $\mathcal{O}(p^{3d})$ computational work, where $p$ is the degree of the basis polynomials and $d$ is the spatial dimension. Our SVD-based tensor-product preconditioner requires $\mathcal{O}(p^{d+1})$ storage, $\mathcal{O}(p^{d+1})$ work in two spatial dimensions, and $\mathcal{O}(p^{d+2})$ work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in $p$ per degree of freedom in 2D, and reduce the computational complexity from $\mathcal{O}(p^9)$ to $\mathcal{O}(p^5)$ in 3D. Numerical results are shown in 2D and 3D for the advection and Euler equations, using polynomials of degree up to $p=15$. For many test cases, the preconditioner yields iteration counts similar to those of the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees $p$.

    Comment: 40 pages, 15 figures
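
    The core algebraic step the abstract describes, approximating a block by Kronecker products via an SVD, is the classical nearest-Kronecker-product idea of Van Loan and Pitsianis. The following Python sketch shows that step under simplifying assumptions (illustrative names, not the paper's exact construction): rearrange the $mn \times mn$ block into an $m^2 \times n^2$ matrix whose rank-$r$ SVD truncation yields the Kronecker factors.

        import numpy as np

        def nearest_kronecker(J, m, n, r=1):
            """Approximate J (mn x mn) by sum_k A_k kron B_k, A_k m x m, B_k n x n.

            Van Loan--Pitsianis: each n x n block of J becomes one row of a
            rearranged m^2 x n^2 matrix R, under which kron(A, B) maps to the
            rank-1 matrix vec(A) vec(B)^T; the best rank-r approximation of R
            in the Frobenius norm therefore gives the best r-term Kronecker sum.
            """
            R = np.empty((m * m, n * n))
            for i in range(m):
                for j in range(m):
                    R[i * m + j] = J[i * n:(i + 1) * n, j * n:(j + 1) * n].ravel()
            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            return [(np.sqrt(s[k]) * U[:, k].reshape(m, m),
                     np.sqrt(s[k]) * Vt[k].reshape(n, n)) for k in range(r)]

        # Sanity check: an exact Kronecker product is recovered at rank 1.
        rng = np.random.default_rng(0)
        A0, B0 = rng.standard_normal((4, 4)), rng.standard_normal((3, 3))
        A1, B1 = nearest_kronecker(np.kron(A0, B0), 4, 3, r=1)[0]
        assert np.allclose(np.kron(A1, B1), np.kron(A0, B0))

    The payoff is in the preconditioner solve: a Kronecker factorization can be inverted factor-by-factor, $(A \otimes B)^{-1}v = (A^{-1} \otimes B^{-1})v$, applied via reshapes on the small one-dimensional matrices rather than by factoring the full block.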

    Improving Efficiency of Rational Krylov Subspace Methods

    This thesis studies two classes of numerical linear algebra problems: approximating the product of a function of a matrix with a vector, and solving the linear eigenvalue problem $Av=\lambda Bv$ for a small number of eigenvalues. These problems are solved by rational Krylov subspace methods (RKSM). We present several improvements in two directions: pole selection and the use of inexact methods.

    In Chapter 3, a flexible extended Krylov subspace method ($\mathcal{F}$-EKSM) is considered for numerical approximation of the action of a matrix function $f(A)$ on a vector $b$, where the function $f$ is of Markov type. $\mathcal{F}$-EKSM has the same framework as the extended Krylov subspace method (EKSM), but replaces the zero pole in EKSM with a properly chosen fixed nonzero pole. For symmetric positive definite matrices, the optimal fixed pole is derived for $\mathcal{F}$-EKSM to achieve the lowest possible upper bound on the asymptotic convergence factor, which is lower than that of EKSM. The analysis is based on properties of Faber polynomials of $A$ and $(I-A/s)^{-1}$. For large, sparse matrices that can be handled efficiently by LU factorizations, numerical experiments show that $\mathcal{F}$-EKSM and a variant of RKSM based on a small number of fixed poles outperform EKSM in both storage and runtime, and they usually have an advantage over adaptive RKSM in runtime.

    Chapter 4 concerns the theory and development of inexact RKSM for approximating the action of a matrix function $f(A)$ on a column vector $b$. At each step of RKSM, a shifted linear system must be solved to enlarge the subspace. For large-scale problems arising from discretizations of PDEs in 3D domains, such a linear system is usually solved approximately by an iterative method. The main question is how far the accuracy of these linear solves can be relaxed without negatively affecting the convergence of the approximation to $f(A)b$. Our insight into this issue comes from residual bounds on the rational Krylov subspace approximations to $f(A)b$, based on the decaying behavior of the entries in the first column of the matrix function of the block Rayleigh quotient of $A$ with respect to the rational Krylov subspace. The decay bounds on these entries, for both analytic functions and Markov functions, can be evaluated efficiently and accurately by appropriate quadrature rules. A heuristic based on these bounds is proposed to relax the tolerances of the linear solves arising at each step of RKSM. As the algorithm progresses toward convergence, the linear solves can be performed with increasingly lower accuracy and computational cost. Numerical experiments with large nonsymmetric matrices show the effectiveness of this tolerance relaxation strategy for the inexact linear solves of RKSM.

    In Chapter 5, inexact RKSM are studied for solving large-scale nonsymmetric eigenvalue problems. As in Chapter 4, each iteration (outer step) of RKSM requires the solution of a shifted linear system to enlarge the subspace, but solving these systems by direct methods is prohibitive at this problem scale. Errors are introduced at each outer step if the linear systems are solved approximately by iterative methods (inner steps), and these errors accumulate in the rational Krylov subspace. We derive an upper bound on the error that can be introduced at each outer step while maintaining the same convergence as exact RKSM for approximating an invariant subspace. Since this bound is inversely proportional to the current eigenresidual norm of the desired invariant subspace, the tolerance of the iterative linear solves can be relaxed as the outer iteration progresses. A restarted variant of the inexact RKSM is also proposed. Numerical experiments show the effectiveness of relaxing the inner tolerance to save computational cost.
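
    For context, the basic RKSM projection all three chapters build on can be stated compactly: expand the subspace with shifted solves, orthogonalize, and evaluate $f$ on the small Rayleigh quotient. A minimal dense Python sketch under illustrative names (the thesis's inexact variants would replace the direct solve below with an iterative solver whose tolerance is relaxed as the outer iteration converges):

        import numpy as np
        from scipy.linalg import expm

        def rksm_fAb(A, b, poles, f):
            """Rational Krylov approximation of f(A) b for a dense matrix A.

            Builds an orthonormal basis V of the rational Krylov subspace
            spanned by b and the shifted solves (A - s I)^{-1} w, then
            projects: f(A) b ~= V f(V^T A V) V^T b.  An infinite pole means a
            plain multiplication by A (the polynomial step of EKSM-type methods).
            """
            n = b.shape[0]
            V = np.zeros((n, len(poles) + 1))
            V[:, 0] = b / np.linalg.norm(b)
            for j, s in enumerate(poles):
                if np.isinf(s):
                    w = A @ V[:, j]
                else:
                    # In the inexact methods of Chapters 4-5, this direct solve
                    # becomes an iterative one with a progressively relaxed tolerance.
                    w = np.linalg.solve(A - s * np.eye(n), V[:, j])
                for i in range(j + 1):              # modified Gram-Schmidt
                    w -= (V[:, i] @ w) * V[:, i]
                V[:, j + 1] = w / np.linalg.norm(w)
            Am = V.T @ A @ V                        # block Rayleigh quotient
            return np.linalg.norm(b) * (V @ f(Am)[:, 0])  # V^T b = ||b|| e_1

        # Example: exp(A) b with alternating infinite and fixed nonzero poles,
        # mimicking the fixed-pole flavor of F-EKSM on a negative definite A.
        rng = np.random.default_rng(1)
        A = -np.diag(rng.uniform(1.0, 100.0, 300))
        b = rng.standard_normal(300)
        approx = rksm_fAb(A, b, [np.inf, 5.0] * 6, expm)
        print(np.linalg.norm(approx - expm(A) @ b))

    Every nonzero finite pole in the list above costs one shifted linear solve, which is why the pole selection of Chapter 3 and the inner-tolerance relaxation of Chapters 4 and 5 translate directly into runtime savings.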