10 research outputs found

    Preconditioned implicit time integration schemes for Maxwell’s equations on locally refined grids

    In this paper, we consider an efficient implementation of higher-order implicit time integration schemes for spatially discretized linear Maxwell's equations on locally refined meshes. In particular, we are interested in problems where only a few of the mesh elements are small while the majority of the elements are much larger. We suggest approximating the solution of the linear systems arising in each time step by a preconditioned Krylov subspace method, e.g., the quasi-minimal residual method of Freund and Nachtigal [13]. Motivated by the analysis of locally implicit methods by Hochbruck and Sturm [25], we show how to construct a preconditioner such that the number of iterations the Krylov subspace method requires to achieve a given accuracy is bounded independently of the diameter of the small mesh elements. We prove this behavior using Faber polynomials and complex approximation theory. The cost of applying the preconditioner consists of the solution of a small linear system whose dimension corresponds to the degrees of freedom within the fine part of the mesh (and its next coarse neighbors). If this dimension is small compared to the size of the full mesh, the preconditioner is very efficient. We conclude by verifying our theoretical results with numerical experiments for the fourth-order Gauß-Legendre Runge–Kutta method.
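A minimal sketch of this preconditioning idea on a hypothetical 1D model problem (not the paper's Maxwell discretization; all sizes and tolerances are illustrative): the implicit time step leads to a matrix I + τ²K in which the stiffness-like part K carries 1/h² factors that blow up on the tiny elements, and the preconditioner solves exactly only on the small fine block while applying cheap diagonal scaling elsewhere.

```python
# Toy sketch (hypothetical 1D model, not the paper's discretization):
# implicit-step matrix A = I + tau^2 * K, where K has 1/h^2 scaling that
# becomes huge on the few tiny elements.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_fine, n_coarse = 20, 100                  # few small elements, many large ones
n = n_fine + n_coarse
h = np.r_[np.full(n_fine, 1e-3), np.full(n_coarse, 1.0)]  # element diameters
main = 2.0 / h**2
off = -1.0 / np.maximum(h[:-1], h[1:])**2   # neighbors coupled via the larger element
K = sp.diags([off, main, off], [-1, 0, 1], format="csc")
tau = 0.1                                   # time step
A = (sp.eye(n, format="csc") + tau**2 * K).tocsc()
b = np.ones(n)

# Preconditioner: exact (sparse LU) solve on the small fine block only,
# cheap diagonal scaling on the large coarse part.
lu = spla.splu(A[:n_fine, :n_fine].tocsc())
d_coarse = A.diagonal()[n_fine:]

def apply_M(r):
    z = np.empty_like(r)
    z[:n_fine] = lu.solve(r[:n_fine])       # cost scales with the fine part only
    z[n_fine:] = r[n_fine:] / d_coarse
    return z

M = spla.LinearOperator((n, n), matvec=apply_M)
iters = []
x, info = spla.gmres(A, b, M=M, restart=n,  # effectively unrestarted GMRES
                     callback=lambda pr: iters.append(pr),
                     callback_type="pr_norm")
```

On this toy problem the preconditioned iteration count stays small even though the fine elements are a thousand times smaller than the coarse ones, mirroring the h-independence the paper proves; the LU factorization touches only the n_fine unknowns.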

    On Krylov subspace approximations to the matrix exponential operator


    First-Kind Galerkin Boundary Element Methods for the Hodge-Laplacian in Three Dimensions

    Boundary value problems for the Euclidean Hodge-Laplacian in three dimensions, ∆_HL := curl curl − grad div, lead to variational formulations set in subspaces of H(curl, Ω) ∩ H(div, Ω), Ω ⊂ ℝ³ a bounded Lipschitz domain. Via a representation formula and Calderón identities we derive corresponding first-kind boundary integral equations set in trace spaces of H¹(Ω), H(curl, Ω), and H(div, Ω). They give rise to saddle-point variational formulations and feature kernels whose dimensions are linked to fundamental topological invariants of Ω. Kernels of the same dimensions also arise in the linear systems generated by low-order conforming Galerkin boundary element (BE) discretization. Nevertheless, on their complements we can prove stability of the discretized problems. We prove that discretization does not affect the dimensions of the kernels and also illustrate this fact by numerical tests.

    The effect of non-optimal bases on the convergence of Krylov subspace methods

    There are many examples where non-orthogonality of a basis for Krylov subspace methods arises naturally. These methods usually require less storage or computational effort per iteration than methods using an orthonormal basis (optimal methods), but convergence may be delayed. Truncated Krylov subspace methods and other non-optimal methods have been shown to converge in many situations, often with only a small delay, but not in others. We explore the effect of using a non-optimal basis. We prove certain identities for the relative residual gap, i.e., the relative difference between the residuals of the optimal and non-optimal methods. These identities and related bounds provide insight into when the delay is small and convergence is achieved. Further understanding is gained by using a recently developed general theory of superlinear convergence. Our analysis confirms the observed fact that in exact arithmetic the orthogonality of the basis is not important; what matters is maintaining linear independence. Numerical examples illustrate our theoretical results.
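A toy illustration of the optimal-versus-non-optimal trade-off (heavily restarted GMRES stands in for a non-optimal method here; the matrix is an arbitrary well-conditioned example, not one from the paper):

```python
# Compare full GMRES (optimal: minimizes the residual over the whole Krylov
# space) with heavily restarted GMRES (non-optimal: discards basis vectors to
# save storage), counting inner iterations to reach the default tolerance.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n = 300
# Well-conditioned nonsymmetric matrix: identity plus a small random perturbation.
A = sp.eye(n, format="csr") + 0.2 * sp.random(n, n, density=0.05, random_state=1)
b = rng.standard_normal(n)

def inner_iterations(restart):
    count = []
    _, info = spla.gmres(A, b, restart=restart, maxiter=500,
                         callback=lambda pr: count.append(pr),
                         callback_type="pr_norm")
    return len(count), info

its_full, info_full = inner_iterations(restart=n)    # effectively unrestarted
its_trunc, info_trunc = inner_iterations(restart=5)  # short recurrence, less storage
# The optimal method never needs more total iterations than the non-optimal
# one; the restarted variant may converge with a delay.
```

In exact arithmetic the restarted iterate still lies in the same Krylov space after k total inner steps, so its residual can never undercut the optimal one; the question the paper studies is how large that gap (the delay) can become.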

    Iterative solution of linear systems with improved arithmetic and result verification [online]


    Inner-outer Iterative Methods for Eigenvalue Problems - Convergence and Preconditioning

    Many methods for computing eigenvalues of a large sparse matrix involve shift-invert transformations, which require the solution of a shifted linear system at each step. This thesis deals with shift-invert iterative techniques for solving eigenvalue problems where the arising linear systems are solved inexactly using a second iterative technique. This approach leads to an inner-outer type algorithm. We provide convergence results for the outer iterative eigenvalue computation as well as techniques for efficient inner solves. In particular, eigenvalue computations using inexact inverse iteration, the Jacobi-Davidson method without subspace expansion, and the shift-invert Arnoldi method as a subspace method are investigated in detail. A general convergence result for inexact inverse iteration for the non-Hermitian generalised eigenvalue problem is given, using only minimal assumptions. This convergence result is obtained in two different ways: on the one hand, we use an equivalence result between inexact inverse iteration applied to the generalised eigenproblem and a modified Newton's method; on the other hand, a splitting method is used which generalises the idea of orthogonal decomposition. Both approaches also include an analysis of the convergence theory of a version of the inexact Jacobi-Davidson method, where equivalences between Newton's method, inverse iteration and the Jacobi-Davidson method are exploited. To improve the efficiency of the inner iterative solves we introduce a new tuning strategy which can be applied to any standard preconditioner. We give a detailed analysis of this new preconditioning idea and show how the number of iterations for the inner iterative method, and hence the total number of iterations, can be reduced significantly by the application of this tuning strategy. The analysis of the tuned preconditioner is carried out for both Hermitian and non-Hermitian eigenproblems.
We show how the preconditioner can be implemented efficiently and illustrate its performance using various numerical examples. An equivalence result between the preconditioned simplified Jacobi-Davidson method and inexact inverse iteration with the tuned preconditioner is given. Finally, we discuss the shift-invert Arnoldi method in both its standard and restarted forms. First, existing relaxation strategies for the outer iterative solves are extended to the implicitly restarted Arnoldi method. Second, we apply the idea of tuning the preconditioner to the inner iterative solve. As for inexact inverse iteration, the tuned preconditioner for the inexact Arnoldi method is shown to provide significant savings in the number of inner solves. The theory in this thesis is supported by many numerical examples.
EThOS - Electronic Theses Online Service (United Kingdom)
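The inner-outer structure described above can be sketched in a few lines (a hypothetical toy problem, not the thesis's algorithm: the inner shift-invert solves are done inexactly by CG with a deliberately loose tolerance, and the tuning of the preconditioner is omitted):

```python
# Inexact inverse iteration sketch: the outer eigenvalue iteration only ever
# sees approximate solutions of the shifted systems (A - sigma*I) y = x.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags(np.arange(1.0, n + 1.0)).tocsc()  # toy SPD matrix, eigenvalues 1..n
sigma = 0.5                                    # shift near the smallest eigenvalue
S = (A - sigma * sp.eye(n)).tocsc()
x = np.ones(n) / np.sqrt(n)                    # starting vector

for _ in range(20):
    # Inner solve, deliberately inexact: CG stopped at a loose absolute
    # tolerance instead of a direct factorization (||x|| = 1 here, so this
    # is also a loose relative tolerance).
    y, _ = spla.cg(S, x, atol=1e-2)
    x = y / np.linalg.norm(y)                  # outer inverse-iteration step

lam = x @ (A @ x)  # Rayleigh quotient; approaches the smallest eigenvalue, 1
```

The thesis's tuning strategy would additionally modify whatever preconditioner the inner solver uses (a rank-one update is one way to make it act like A on the current eigenvector approximation); that refinement, and the relaxation of the inner tolerance as the outer iteration converges, are omitted from this sketch.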