
    Parallel-in-Time Multi-Level Integration of the Shallow-Water Equations on the Rotating Sphere

    The modeling of atmospheric processes in the context of weather and climate simulations is an important and computationally expensive challenge. The temporal integration of the underlying PDEs requires a very large number of time steps, even when the terms accounting for the propagation of fast atmospheric waves are treated implicitly. Therefore, the use of parallel-in-time integration schemes to reduce the time-to-solution is of increasing interest, particularly in the numerical weather forecasting field. We present a multi-level parallel-in-time integration method combining the Parallel Full Approximation Scheme in Space and Time (PFASST) with a spatial discretization based on Spherical Harmonics (SH). The iterative algorithm computes multiple time steps concurrently by interweaving parallel high-order fine corrections with serial corrections performed on a coarsened problem. To this end, we design a methodology that relies on the spectral basis of the SH to coarsen and interpolate the problem in space. The method is evaluated on the shallow-water equations on the sphere using a set of tests commonly used in the atmospheric flow community. We assess the convergence of PFASST-SH upon refinement in time. We also investigate the impact of the coarsening strategy on the accuracy of the scheme, and specifically on its ability to capture the high-frequency modes accumulating in the solution. Finally, we study the computational cost of PFASST-SH to demonstrate that our scheme resolves the main features of the solution multiple times faster than the serial schemes.
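    The spectral level transfers described in the abstract can be illustrated with a short sketch: coarsening a field expressed in a spherical-harmonic basis amounts to truncating its expansion at a lower maximum degree, and interpolation back to the fine level amounts to zero-padding the missing modes. The array layout, truncation degrees, and function names below are illustrative assumptions, not the PFASST-SH implementation itself.

```python
# Illustrative sketch only: spectral coarsening/interpolation in a spherical-
# harmonic (SH) basis via truncation and zero-padding. Layout and names are
# assumptions for illustration, not the PFASST-SH code.
import numpy as np

def coarsen(coeffs, l_coarse):
    """Restrict to the coarse level: drop all modes with degree or order above l_coarse."""
    # coeffs[l, m] holds the coefficient of degree l and order m (entries with m > l unused).
    return coeffs[: l_coarse + 1, : l_coarse + 1].copy()

def interpolate(coeffs_coarse, l_fine):
    """Prolong to the fine level: zero-pad the modes that were truncated away."""
    coeffs_fine = np.zeros((l_fine + 1, l_fine + 1), dtype=coeffs_coarse.dtype)
    lc = coeffs_coarse.shape[0] - 1
    coeffs_fine[: lc + 1, : lc + 1] = coeffs_coarse
    return coeffs_fine

# Example: a field resolved up to degree 31, coarsened to degree 15 and prolonged back.
rng = np.random.default_rng(0)
fine = np.triu(rng.standard_normal((32, 32))).T  # lower-triangular: only valid (l, m) entries
coarse = coarsen(fine, 15)
prolonged = interpolate(coarse, 31)
assert prolonged.shape == fine.shape
```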

    Lecture 06: The Impact of Computer Architectures on the Design of Algebraic Multigrid Methods

    Algebraic multigrid (AMG) is a popular iterative solver and preconditioner for large sparse linear systems. When designed well, it is algorithmically scalable, enabling it to solve increasingly large systems efficiently. While the method contains many highly parallel building blocks, the original algorithm also relied on several highly sequential components. A large amount of research has been performed over several decades to design new components that perform well on high-performance computers, and AMG has been shown to scale well to more than a million processes. However, with single-core speeds plateauing, future increases in computing performance must rely on more complicated, often heterogeneous computer architectures, which pose new challenges for efficient implementations of AMG. To meet these challenges and achieve fast, efficient performance, solvers need to exhibit extreme levels of parallelism and minimize data movement. In this talk, we will give an overview of how AMG has been shaped by the various architectures of high-performance computers to date and discuss our current efforts to continue to achieve good performance on emerging computer architectures.
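    As a concrete usage example, the snippet below builds a classical (Ruge-Stuben) AMG hierarchy for a sparse 2D Poisson system with the PyAMG library and solves it, reporting the average residual reduction per cycle. This is a generic illustration of AMG as a black-box solver, not the specific solver configurations or architectures discussed in the talk.

```python
# Generic illustration of AMG as a sparse linear solver using PyAMG;
# not the specific solver configurations discussed in the talk.
import numpy as np
import pyamg
from pyamg.gallery import poisson

A = poisson((200, 200), format="csr")   # 2D Poisson matrix with 40,000 unknowns
b = np.random.rand(A.shape[0])          # arbitrary right-hand side

ml = pyamg.ruge_stuben_solver(A)        # build the classical AMG hierarchy
print(ml)                               # summary: levels, operator/grid complexity

residuals = []
x = ml.solve(b, tol=1e-10, residuals=residuals)
factor = (residuals[-1] / residuals[0]) ** (1.0 / (len(residuals) - 1))
print(f"average residual reduction per cycle: {factor:.3f}")
```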