
    Krylov's methods in function space for waveform relaxation.

    by Wai-Shing Luk. Thesis (Ph.D.), Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 104-113). Contents:
    Chapter 1: Introduction (1.1 Functional Extension of Iterative Methods; 1.2 Applications in Circuit Simulation; 1.3 Multigrid Acceleration; 1.4 Why Hilbert Space?; 1.5 Parallel Implementation; 1.6 Domain Decomposition; 1.7 Contributions of This Thesis; 1.8 Outlines of the Thesis)
    Chapter 2: Waveform Relaxation Methods (2.1 Basic Idea; 2.2 Linear Operators between Banach Spaces; 2.3 Waveform Relaxation Operators for ODE's; 2.4 Convergence Analysis: 2.4.1 Continuous-time, 2.4.2 Discrete-time; 2.5 Further References)
    Chapter 3: Waveform Krylov Subspace Methods (3.1 Overview of Krylov Subspace Methods; 3.2 Krylov Subspace Methods in Hilbert Space; 3.3 Waveform Krylov Subspace Methods; 3.4 Adjoint Operator for WBiCG and WQMR; 3.5 Numerical Experiments: 3.5.1 Test Circuits, 3.5.2 Unstructured Grid Problem)
    Chapter 4: Parallel Implementation Issues (4.1 DECmpp 12000/Sx Computer and HPF; 4.2 Data Mapping Strategy; 4.3 Sparse Matrix Format; 4.4 Graph Coloring for Unstructured Grid Problems)
    Chapter 5: The Use of Inexact ODE Solver in Waveform Methods (5.1 Inexact ODE Solver for Waveform Relaxation: 5.1.1 Convergence Analysis; 5.2 Inexact ODE Solver for Waveform Krylov Subspace Methods; 5.3 Experimental Results; 5.4 Concluding Remarks)
    Chapter 6: Domain Decomposition Technique (6.1 Introduction; 6.2 Overlapped Schwarz Methods; 6.3 Numerical Experiments: 6.3.1 Delay Circuit, 6.3.2 Unstructured Grid Problem)
    Chapter 7: Conclusions (7.1 Summary; 7.2 Future Works)
    Appendix A: Pseudo Codes for Waveform Krylov Subspace Methods
    Appendix B: Overview of Recursive Spectral Bisection Method
    Bibliography

    Time stepping free numerical solution of linear differential equations: Krylov subspace versus waveform relaxation

    The aim of this paper is twofold. First, we propose an efficient implementation of the continuous-time waveform relaxation method based on block Krylov subspaces. Second, we compare this new implementation against Krylov subspace methods combined with the shift-and-invert technique.
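
    For context, the following is a minimal sketch of the shift-and-invert Krylov baseline mentioned in this abstract, assuming a small dense test matrix and hand-picked shift and subspace dimension: the Krylov space is built with (I - gamma*A)^{-1} rather than A, and exp(t*A)b is approximated via a Rayleigh-Ritz projection of A onto that space. The block-Krylov waveform relaxation implementation proposed in the paper is not reproduced here; the function name and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def shift_invert_krylov_expm(A, b, t, m=20, gamma=0.1):
    """Approximate expm(t*A) @ b using a shift-and-invert Krylov subspace."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    lu = lu_factor(np.eye(n) - gamma * A)        # factor the shifted matrix once
    for j in range(m):
        w = lu_solve(lu, V[:, j])                # w = (I - gamma*A)^{-1} v_j
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                  # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Vm = V[:, :m]
    Am = Vm.T @ (A @ Vm)                         # Rayleigh-Ritz projection of A
    return beta * (Vm @ expm(t * Am)[:, 0])      # small dense exponential

# toy usage: 1D diffusion-like matrix (assumed test problem)
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)
u = shift_invert_krylov_expm(A, b, t=0.5)
print("error vs. dense expm:", np.linalg.norm(u - expm(0.5 * A) @ b))
```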

    Shifted Laplacian multigrid for the elastic Helmholtz equation

    The shifted Laplacian multigrid method is a well-known approach for preconditioning the indefinite linear system arising from the discretization of the acoustic Helmholtz equation, which models wave propagation in the frequency domain. In some cases, however, the acoustic equation is not sufficient for modeling the physics of the wave propagation, and one has to consider the elastic Helmholtz equation. Such a case arises in geophysical seismic imaging applications, where the Earth's subsurface is the elastic medium. The elastic Helmholtz equation is much harder to solve than its acoustic counterpart, partly because it is three times larger and partly because it models more complicated physics. Despite this, very few solvers are available for the elastic equation compared to the array of solvers available for the acoustic one. In this work we extend the shifted Laplacian approach to the elastic Helmholtz equation by combining the complex-shift idea with approaches for linear elasticity. We demonstrate the efficiency and properties of our solver using numerical experiments for problems with heterogeneous media in two and three dimensions.
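
    As a point of reference, here is a minimal 1D sketch of the complex-shifted Laplacian idea for the acoustic Helmholtz equation; the grid size, wavenumber, and shift (1 - 0.5i) are assumptions, and a sparse direct solve stands in for the multigrid cycle that would be used in practice. The paper above extends this construction to the elastic case, which is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 400, 40.0                        # grid points and wavenumber (assumptions)
h = 1.0 / (n + 1)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2

A = (lap - k**2 * sp.identity(n, format="csc")).astype(complex)   # Helmholtz operator
M = lap - (1.0 - 0.5j) * k**2 * sp.identity(n, format="csc")      # shifted Laplacian

M_lu = spla.splu(M.tocsc())             # direct solve as a stand-in for multigrid
prec = spla.LinearOperator(A.shape, matvec=M_lu.solve, dtype=complex)

b = np.ones(n, dtype=complex)           # simple right-hand side
u, info = spla.gmres(A, b, M=prec, restart=50)
print("GMRES info:", info)              # info == 0 means converged
```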

    Nonlinear Preconditioning: How to use a Nonlinear Schwarz Method to Precondition Newton's Method

    For linear problems, domain decomposition methods can be used directly as iterative solvers, but also as preconditioners for Krylov methods. In practice, Krylov acceleration is almost always used, since the Krylov method finds a much better residual polynomial than the stationary iteration and thus converges much faster. We show in this paper that, for nonlinear problems as well, domain decomposition methods can either be used directly as iterative solvers or as preconditioners for Newton's method. For the concrete case of the parallel Schwarz method, we obtain a preconditioner we call RASPEN (Restricted Additive Schwarz Preconditioned Exact Newton), which is similar to ASPIN (Additive Schwarz Preconditioned Inexact Newton) but with all components directly defined by the iterative method. This has the advantage that RASPEN already converges when used as an iterative solver, in contrast to ASPIN, and we thus get a substantially better preconditioner for Newton's method. The iterative construction also allows us to naturally define a coarse correction using the multigrid full approximation scheme, which leads to a convergent two-level nonlinear iterative domain decomposition method and a two-level RASPEN nonlinear preconditioner. We illustrate our findings with numerical results for the Forchheimer equation and a nonlinear diffusion problem.
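
    A schematic sketch of the nonlinear-preconditioning idea on a toy 1D problem (-u'' + u^3 = f with two overlapping subdomains): one parallel Schwarz sweep defines a fixed-point map G, and the outer solver is applied to the preconditioned residual F(u) = u - G(u) instead of the original discrete equations. The problem, the overlap, and the simple recombination are assumptions, not the paper's Forchheimer or nonlinear diffusion test cases, and the exact RASPEN operator is not reproduced.

```python
import numpy as np
from scipy.optimize import fsolve

n = 40                                  # interior grid points (assumed)
h = 1.0 / (n + 1)
f = np.ones(n)                          # right-hand side of -u'' + u^3 = f

def local_solve(bc_left, bc_right, idx, guess):
    """Nonlinear subdomain solve with Dirichlet data taken from the global iterate."""
    f_loc = f[idx]
    def res(v):
        vp = np.concatenate(([bc_left], v, [bc_right]))
        return -(vp[:-2] - 2 * vp[1:-1] + vp[2:]) / h**2 + vp[1:-1] ** 3 - f_loc
    return fsolve(res, guess)

idx1 = np.arange(0, 26)                 # two overlapping subdomains (overlap assumed)
idx2 = np.arange(14, n)

def G(u):
    """One parallel (Jacobi-type) Schwarz sweep applied to the current iterate u."""
    v1 = local_solve(0.0, u[idx1[-1] + 1], idx1, u[idx1])
    v2 = local_solve(u[idx2[0] - 1], 0.0, idx2, u[idx2])
    u_new = np.empty_like(u)
    u_new[:20] = v1[:20]                # recombine the local solutions on a
    u_new[20:] = v2[20 - idx2[0]:]      # non-overlapping partition (RAS-like)
    return u_new

def F(u):                               # nonlinearly preconditioned residual
    return u - G(u)

u = fsolve(F, np.zeros(n))              # outer Newton-type solve on F(u) = 0
print("max |F(u)| =", np.abs(F(u)).max())
```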

    Waveform Relaxation with asynchronous time-integration

    We consider Waveform Relaxation (WR) methods for the partitioned time-integration of surface-coupled multiphysics problems. WR allows independent time-discretizations on adaptive time-grids for each subproblem, while maintaining high time-integration orders. Classical WR methods such as Jacobi or Gauss-Seidel WR are typically either parallel or fast-converging, but not both. We present a novel parallel WR method that uses asynchronous communication techniques to obtain both properties. Classical WR methods exchange discrete functions after time-integration of a subproblem; we instead asynchronously exchange time-point solutions during time-integration and directly incorporate all new information into the interpolants. We show both continuous and time-discrete convergence in a framework that generalizes existing linear WR convergence theory, and we present an algorithm for choosing optimal relaxation in our new WR method. Convergence is demonstrated in two conjugate heat transfer examples, where the new method shows improved performance over classical WR methods. In one example we show a partitioned coupling of the compressible Euler equations with a nonlinear heat equation, with the subproblems implemented using the open-source libraries DUNE and FEniCS.
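
    For orientation, here is a minimal sketch of classical Jacobi WR on a 2x2 linear test system (coefficients, time window, and number of sweeps are assumptions): each sweep re-integrates every subproblem over the whole window using an interpolant of the other component's waveform from the previous sweep. The asynchronous exchange of time-point solutions during integration, which is the paper's contribution, is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

T = 2.0
t_eval = np.linspace(0.0, T, 201)
a11, a12, a21, a22 = -2.0, 1.0, 1.0, -3.0     # assumed coupling coefficients
x1_0, x2_0 = 1.0, 0.0                         # initial values
x1 = np.zeros_like(t_eval)                    # initial waveform guesses
x2 = np.zeros_like(t_eval)

for sweep in range(10):
    x1_old = interp1d(t_eval, x1, fill_value="extrapolate")
    x2_old = interp1d(t_eval, x2, fill_value="extrapolate")
    # both subproblems use only the previous sweep's waveform -> parallel (Jacobi) WR
    s1 = solve_ivp(lambda t, y: a11 * y + a12 * x2_old(t), (0, T), [x1_0],
                   t_eval=t_eval, rtol=1e-8)
    s2 = solve_ivp(lambda t, y: a21 * x1_old(t) + a22 * y, (0, T), [x2_0],
                   t_eval=t_eval, rtol=1e-8)
    x1, x2 = s1.y[0], s2.y[0]

# reference: solve the coupled system monolithically and compare
ref = solve_ivp(lambda t, y: [a11 * y[0] + a12 * y[1], a21 * y[0] + a22 * y[1]],
                (0, T), [x1_0, x2_0], t_eval=t_eval, rtol=1e-10)
print("max error after 10 sweeps:", np.max(np.abs(ref.y[0] - x1)))
```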

    A new ParaDiag time-parallel time integration method

    Time-parallel time integration has received a lot of attention in the high performance computing community over the past two decades. Indeed, parallel-in-time techniques have been shown to have the potential to remedy one of the main computational drawbacks of parallel-in-space solvers: it is well known that for large-scale evolution problems, space parallelization saturates long before all processing cores are effectively used on today's large-scale parallel computers. Among the many approaches for time-parallel time integration, ParaDiag schemes have proved to be a very effective approach. In this framework, the time-stepping matrix or an approximation thereof is diagonalized by Fourier techniques, so that computations taking place at different time steps can indeed be carried out in parallel. We propose here a new ParaDiag algorithm combining the Sherman-Morrison-Woodbury formula and Krylov techniques. A panel of diverse numerical examples illustrates the potential of our new solver. In particular, we show that it performs very well compared to different ParaDiag algorithms recently proposed in the literature.
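
    A schematic ParaDiag-style sketch (problem sizes, alpha, and the simple implicit-Euler all-at-once formulation are assumptions): an alpha-circulant approximation of the time-stepping matrix is diagonalized by scaled FFTs in time, giving decoupled spatial solves that serve as a preconditioner for GMRES on the all-at-once system. The Sherman-Morrison-Woodbury/Krylov combination proposed in the paper itself is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, L, T, alpha = 50, 32, 1.0, 0.01
dt = T / L
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx), format="csc") * (nx + 1) ** 2
I = sp.identity(nx, format="csc")

u0 = np.ones(nx)
rhs = np.zeros((L, nx))
rhs[0] = u0                                   # initial condition enters the first step

def all_at_once(u_flat):
    """Apply (B (x) I - dt I (x) A) for implicit Euler, B lower bidiagonal in time."""
    u = u_flat.reshape(L, nx)
    out = u - dt * (A @ u.T).T
    out[1:] -= u[:-1]
    return out.ravel()

# alpha-circulant approximation of B, diagonalized by scaled FFTs in time
gamma = alpha ** (np.arange(L) / L)           # diagonal scaling Gamma
mu = np.fft.fft(np.array([1.0, -alpha ** (1.0 / L)] + [0.0] * (L - 2)))
solvers = [spla.splu((m * I - dt * A).tocsc()) for m in mu]

def precond(r_flat):
    """Solve (C_alpha (x) I - dt I (x) A) z = r via FFT diagonalization in time."""
    r = r_flat.reshape(L, nx).astype(complex)
    w = np.fft.fft(gamma[:, None] * r, axis=0)
    z = np.array([solvers[l].solve(w[l]) for l in range(L)])
    z = np.fft.ifft(z, axis=0) / gamma[:, None]
    return z.ravel().real

op = spla.LinearOperator((L * nx, L * nx), matvec=all_at_once, dtype=np.float64)
M = spla.LinearOperator((L * nx, L * nx), matvec=precond, dtype=np.float64)
u, info = spla.gmres(op, rhs.ravel(), M=M, restart=30)
print("GMRES info:", info, " final-time mean:", u.reshape(L, nx)[-1].mean())
```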

    Anderson acceleration with approximate calculations: applications to scientific computing

    We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for efficient approximate calculations of the residual, reducing computational time and memory storage while maintaining convergence. Specifically, we propose a reduced variant of AA, which consists of projecting the least-squares problem used to compute the Anderson mixing onto a subspace of reduced dimension. The dimensionality of this subspace adapts dynamically at each iteration, as prescribed by computable heuristic quantities guided by the theoretical error bounds. Using this heuristic to monitor the error introduced by the approximate calculations, combined with a check on the monotonicity of the convergence, ensures convergence of the numerical scheme to within a prescribed tolerance on the residual. We numerically assess the performance of AA with approximate calculations on (i) linear deterministic fixed-point iterations arising from Richardson's scheme for solving linear systems with open-source benchmark matrices and various preconditioners, and (ii) nonlinear deterministic fixed-point iterations arising from nonlinear time-dependent Boltzmann equations.
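
    For reference, here is a minimal sketch of plain Anderson acceleration (full least-squares mixing with a fixed window m) for a fixed-point iteration x = g(x); the reduced, adaptively projected variant proposed in the paper is not reproduced, and the test problem is an assumption.

```python
import numpy as np

def anderson(g, x0, m=5, maxit=200, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x = g(x)."""
    x = x0.copy()
    X, G = [], []                           # histories of iterates and g-values
    for k in range(maxit):
        gx = g(x)
        f = gx - x                          # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(x); G.append(gx)
        X, G = X[-(m + 1):], G[-(m + 1):]   # keep at most m+1 past pairs
        if len(X) > 1:
            # least-squares mixing: columns are differences of successive residuals
            F = np.column_stack([(G[i + 1] - X[i + 1]) - (G[i] - X[i])
                                 for i in range(len(X) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(X) - 1)])
            coeff, *_ = np.linalg.lstsq(F, f, rcond=None)
            x = gx - dG @ coeff
        else:
            x = gx                          # plain fixed-point step at the start
    return x, maxit

# toy usage: damped Richardson fixed point for a linear system A x = b (assumed)
rng = np.random.default_rng(1)
A = np.eye(80) + 0.05 * rng.standard_normal((80, 80))
b = rng.standard_normal(80)
g = lambda x: x - 0.5 * (A @ x - b)
x, iters = anderson(g, np.zeros(80))
print("iterations:", iters, " residual:", np.linalg.norm(A @ x - b))
```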