33 research outputs found

    Efficient Iterative Arbitrary High Order Methods: Adaptivity and Structure Preservation

    Since its introduction in the late 17th century, in the independent works of Sir Isaac Newton and Gottfried Wilhelm Leibniz, differential calculus has proven to be a very powerful tool for the description of natural phenomena and technological processes. Its development led to a deeper understanding of the general laws regulating the universe, from the microscopic to the macroscopic scale, and was a crucial step towards the formalization of many fundamental theories in several different fields. Countless models of natural and technological systems rely on differential formulations, either in the "simple" form of ordinary differential equations (ODEs) or in the more involved form of partial differential equations (PDEs). Extensive effort has been devoted, over the years, to the analytical study of such problems. Their huge variety does not allow for a single general theory, and many different mathematical frameworks have been designed to deal with specific families of problems. While such heterogeneity makes it difficult to state results holding for every differential problem, almost all of these problems share a common characteristic: analytical solutions cannot be found, apart from very simple exceptions. At the time this thesis was written, purely analytical approaches are not sufficient for a quantitative characterization of the solutions, especially in real applications, and the only practical way to overcome this limitation is numerical analysis. Scientific computing is indeed a valid tool for obtaining accurate approximations of the analytical solutions to differential problems. It is not free of critical aspects, however. Numerical methods must be designed carefully, as even consistent discretizations may lead to unstable schemes that are unusable in practice. Moreover, the need for very accurate approximations, typical of advanced applications such as the design of aerospace machines, leads to massive simulations requiring huge computational resources, in terms of both computational time and memory, accessible only to supercomputers. This is why the academic and industrial communities have lately put considerable effort into making such simulations more accessible. On the one hand, technological improvements at both the hardware and the software level have produced ever more powerful computers; on the other hand, several strategies have been developed to reduce the computational cost itself. In this context, two of the most successful strategies are:
    • high order methods, which are well known to require, on smooth problems, smaller computational times for a fixed level of accuracy;
    • structure-preserving techniques, which are designed to enforce, at the discrete level, properties of the analytical solution, e.g., the positivity of certain quantities.
    These two strategies constitute the main focus of this thesis.
    Concerning the first point, this work formalizes and investigates a paradigm for designing efficient adaptive arbitrary high order methods. The underlying idea was first introduced by Michael L. Minion in the early 2000s, in the context of ladder Deferred Correction (DeC) methods and multi-level spectral DeC methods, and is here generalized into a more abstract, flexible and rigorous framework that allows for applications in many different settings. We consider arbitrary high order iterative schemes, i.e., schemes characterized by an iteration process converging to a high order discretization of the analytical solution of the problem under investigation; within this family, attention is given to schemes for which one can prove that one order of accuracy is gained at each iteration. Such methods are modified by matching the accuracy achieved in a specific iteration with the discretization accuracy of that same iteration. The new modified methods have several advantages. They are in general cheaper than the original ones, as their early iterations involve low order discretizations, which use smaller structures and fewer computations. Moreover, they offer a natural way to perform p-adaptivity, i.e., adaptivity with respect to the order of accuracy, by stopping the iterations on the fly when appropriate criteria are met, in terms of convergence tolerances or of structure preservation. This is a valuable property for a numerical scheme. While the adoption of high order methods leads to smaller computational times for a fixed level of accuracy, the error of a method is not known a priori. In concrete applications, users are not interested in the order of the employed method, but rather in the final error being smaller than a prescribed tolerance. In principle, for a given discretization refinement, a low order method may be sufficient to reach a certain accuracy, with higher order methods being more expensive and providing an unnecessarily high level of precision. Moreover, it is well known that in the presence of non-smooth solutions characterized by shocks, high order methods suffer from instabilities which may lead to blow-ups of the numerical solution; in the best case, the order of accuracy drops to first order, regardless of the accuracy of the adopted discretization. This is why, in real applications, high order methods should always be coupled with adaptive strategies capable of estimating the error and selecting the order or the discretization refinement accordingly. Designing such strategies is far from trivial; the described approach, however, offers a natural way to do it. The framework is broadly investigated in the context of ODEs and hyperbolic PDEs, with applications to several schemes and stability analyses, showing that in many cases the efficient modifications have no impact on the stability. In particular, the idea is characterized for the abstract DeC formalism introduced by Rémi Abgrall in 2017 and, consequently, applied to many schemes which can be cast in such formalism. Applications to arbitrary derivative (ADER) methods, Discontinuous Galerkin (DG) and Continuous Galerkin (CG) finite element methods (FEM) and Spectral Difference (SD) methods are reported to show the advantages, flexibility, robustness and potential of the introduced approach.
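    To make the adaptive mechanism concrete, the following is a minimal sketch (not code from the thesis) of the simplest explicit member of the DeC family for ODEs, with the on-the-fly p-adaptive stopping criterion described above: each correction sweep gains one order of accuracy, so the iterations can be terminated early once the update falls below a tolerance. Node placement, quadrature weights and the stopping test are illustrative assumptions.

        import numpy as np

        def dec_weights(nodes):
            # theta[m, j] = integral over [0, c_m] of the j-th Lagrange basis
            # function on the subtimestep nodes, so that dt * (theta @ f) is a
            # high order approximation of the integral of f up to each node.
            M = len(nodes)
            V = np.vander(nodes, M, increasing=True)      # V[i, k] = c_i**k
            theta = np.zeros((M, M))
            for j in range(M):
                e = np.zeros(M); e[j] = 1.0
                coef = np.linalg.solve(V, e)              # basis coefficients
                anti = coef / np.arange(1, M + 1)         # antiderivative / s
                theta[:, j] = nodes * np.polyval(anti[::-1], nodes)
            return theta

        def adaptive_dec_step(f, y0, t, dt, order, tol=1e-12):
            # One explicit DeC step for y' = f(t, y): iteration k raises the
            # accuracy by one order, so we stop as soon as the correction
            # stalls below tol instead of always performing all sweeps.
            nodes = np.linspace(0.0, 1.0, order)          # subtimestep nodes
            theta = dec_weights(nodes)
            y = np.tile(y0, (order, 1))                   # 0-th iterate
            for _ in range(order):
                rhs = np.array([f(t + c * dt, ym) for c, ym in zip(nodes, y)])
                y_new = y0 + dt * theta @ rhs
                done = np.max(np.abs(y_new - y)) < tol
                y = y_new
                if done:                                  # p-adaptivity
                    break
            return y[-1]                                  # state at t + dt

        # Example: one step of the (up to) 5th order scheme for y' = -y.
        print(adaptive_dec_step(lambda t, y: -y, np.array([1.0]), 0.0, 0.1, 5))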
    Concerning structure preservation, the main contributions address the preservation of the positivity of some physical quantities and the preservation, either exact or through superconvergence with respect to the order of accuracy of the underlying discretization, of particular analytical solutions, a property known as "well-balancing" or the "C-property", in the context of hyperbolic PDEs. Both aspects constitute open challenges for the community.
    • The violation, at the discrete level, of some physical constraints, e.g., the positivity of density and pressure in the Euler or Navier–Stokes equations or the positivity of the water height in the Shallow Water (SW) equations, results in an immediate failure of the simulation. The positivity of these variables can be preserved either through very high levels of refinement, resulting in very long computational times, or via suitable modifications of the adopted schemes. However, in the context of high order methods, such modifications impose strong restrictions on the time discretization, which may offset the advantages of adopting high order methods in the first place.
    • Physical systems admit, in general, nontrivial steady states and can spend very long times in a neighborhood of such states. In many applications we are therefore interested in studying the evolution of small perturbations of such stationary solutions, and we would like to avoid confusing the evolution of the perturbations with the noise due to the numerical discretization. This can be achieved either via mesh refinement, increasing the computational cost, or via suitable modifications of the numerical scheme ensuring that it preserves the analytical solution of interest exactly, without sacrificing the accuracy on other, general solutions. The latter option is very appealing; however, it is also very challenging, since the steady states are usually not available in closed form and, more generally, we rely on numerics precisely because we do not have the analytical solutions to the systems of PDEs that we are trying to solve.
    The novel adaptive framework described above is exploited to design a robust limiter which ensures the positivity of density and pressure in the predictor of ADER methods for the numerical solution of the Euler and Navier–Stokes equations. Further, an alternative modification of DeC methods for ODEs, based on the modified Patankar trick, is applied in combination with a WENO space discretization to construct what is, to the author's knowledge, the first arbitrary high order method for the numerical solution of the SW equations that is positivity-preserving with respect to the water height without time step restrictions; pre-existing unconditionally positive schemes reached at most order 3. The robustness of the approach has been tested on tough benchmarks, involving shocks and wet–dry areas, with Courant–Friedrichs–Lewy (CFL) values up to 0.9. Finally, several arbitrary high order well-balanced (WB) continuous interior penalty (CIP) stabilizations for CG/RD schemes for the numerical solution of the SW equations are presented. All of them are specifically designed to exactly preserve the lake at rest steady state; moreover, some of them, based on the notions of space residual and global flux, address the preservation of general steady states whose analytical expression is not known in closed form. In this context, such stabilizations have shown excellent performance, with strong superconvergence and a notable capability of capturing the constant momentum. Although the one-dimensional SW equations have been considered as the reference setting, many of the presented stabilizations can be extended to other systems, e.g., the Euler equations with gravity, and to a multidimensional unstructured framework. All the methods are supported by theoretical analysis and validated with proofs and numerical results.
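    To illustrate the positivity mechanism behind the modified Patankar trick mentioned above, here is a minimal sketch of its first-order building block, the modified Patankar-Euler scheme for a production-destruction ODE system. The thesis builds arbitrary high order modified Patankar-DeC schemes coupled with WENO on top of this idea; the scheme below, the test system and the function names are illustrative assumptions only.

        import numpy as np

        def mp_euler_step(prod, dest, y, dt):
            # Modified Patankar-Euler for y_i' = sum_j (p_ij(y) - d_ij(y)),
            # with nonnegative p, d, zero diagonals and d_ij = p_ji
            # (conservation). Weighting destruction terms by y_i^{n+1}/y_i^n
            # and production terms by y_j^{n+1}/y_j^n yields a linear system
            # M y^{n+1} = y^n whose solution is positive for any dt > 0.
            P, D = prod(y), dest(y)
            n = len(y)
            M = np.eye(n)
            for i in range(n):
                M[i, i] += dt * D[i].sum() / y[i]     # implicit destruction
                M[i] -= dt * P[i] / y                 # implicit production
            return np.linalg.solve(M, y)              # positive update

        # Hypothetical linear exchange system: y1' = y2 - 2 y1, y2' = 2 y1 - y2.
        prod = lambda y: np.array([[0.0, y[1]], [2.0 * y[0], 0.0]])
        dest = lambda y: prod(y).T                    # d_ij = p_ji
        y = np.array([0.9, 0.1])
        for _ in range(20):
            y = mp_euler_step(prod, dest, y, dt=10.0) # huge step, still > 0
        print(y, y.sum())                             # positive, total mass 1

    Even with a time step far beyond any explicit stability limit, the iterates stay positive and conservative, which is the property lifted to arbitrary order in the thesis.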

    SpECTRE: A Task-based Discontinuous Galerkin Code for Relativistic Astrophysics

    We introduce a new relativistic astrophysics code, SpECTRE, that combines a discontinuous Galerkin method with a task-based parallelism model. SpECTRE's goal is to achieve more accurate solutions for challenging relativistic astrophysics problems such as core-collapse supernovae and binary neutron star mergers. The robustness of the discontinuous Galerkin method allows for the use of high-resolution shock capturing methods in regions where (relativistic) shocks are found, while exploiting high-order accuracy in smooth regions. A task-based parallelism model allows efficient use of the largest supercomputers for problems with a heterogeneous workload over disparate spatial and temporal scales. We argue that the locality and algorithmic structure of discontinuous Galerkin methods will exhibit good scalability within a task-based parallelism framework. We demonstrate the code on a wide variety of challenging benchmark problems in (non-)relativistic (magneto-)hydrodynamics. We demonstrate the code's scalability, including its strong scaling on the NCSA Blue Waters supercomputer up to the machine's full capacity of 22,380 nodes using 671,400 threads.
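    The locality argument can be illustrated with a toy sketch: in a DG method, updating one element requires only that element's own data plus face data from its direct neighbours, so each element update can be scheduled as an independent task. This is only a schematic Python illustration, not SpECTRE code (SpECTRE is C++ built on the Charm++ task-based runtime), and the stencil below is a hypothetical stand-in for the DG volume and surface terms.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def update_element(e, u):
            # Hypothetical DG-like update on a periodic 1-D mesh: element e
            # touches only itself and its two face neighbours (pure locality).
            n = len(u)
            return u[e] + 0.5 * (u[(e - 1) % n] - 2.0 * u[e] + u[(e + 1) % n])

        u = np.random.rand(64)                 # one value per element
        with ThreadPoolExecutor() as pool:     # each element update is a task
            u_new = np.array(list(pool.map(lambda e: update_element(e, u),
                                           range(len(u)))))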

    An Arbitrary High Order Well-Balanced ADER-DG Numerical Scheme for the Multilayer Shallow-Water Model with Variable Density

    In this work, we present a novel numerical discretization of a variable pressure multilayer shallow water model. The model can be written as a hyperbolic PDE system and allows the simulation of density-driven gravity currents in a shallow water framework. The proposed discretization consists of an unlimited arbitrary high order accurate (ADER) Discontinuous Galerkin (DG) method, which is then limited with the MOOD paradigm using an a posteriori subcell finite volume limiter. The resulting numerical scheme is arbitrary high order accurate in space and time for smooth solutions and does not destroy the natural subcell resolution inherent in DG methods in the presence of strong gradients or discontinuities. A numerical strategy to preserve non-trivial stationary solutions is also discussed. The final method is very accurate in smooth regions even on coarse or very coarse meshes, as shown in the numerical simulations presented here. Finally, a comparison with a laboratory test, where empirical data are available, is also performed.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Funding for open access charge: Universidad de Málaga / CBU
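    The a posteriori MOOD limiting loop described above can be sketched compactly; the following Python fragment is illustrative only, with dg_update, fv_subcell_update and the admissibility criteria as hypothetical stand-ins for the scheme-specific operators.

        import numpy as np

        def mood_step(cells, dg_update, fv_subcell_update, admissible):
            # Run the unlimited high order ADER-DG update everywhere, then
            # detect 'troubled' cells whose candidate violates the
            # admissibility criteria and recompute only those with a robust
            # subcell finite volume scheme, keeping the DG solution elsewhere.
            candidates = [dg_update(c) for c in cells]
            return [cand if admissible(cand, cell) else fv_subcell_update(cell)
                    for cand, cell in zip(candidates, cells)]

        def admissible(cand, cell, h_min=1e-12):
            # Example checks for a shallow-water-like state (h, hu): values
            # must be finite and the water height positive; a relaxed discrete
            # maximum principle could be added here as well.
            return bool(np.all(np.isfinite(cand)) and cand[0] > h_min)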

    Communication-Avoiding Algorithms for a High-Performance Hyperbolic PDE Engine

    The study of waves has always been an important subject of research. Earthquakes, for example, have a direct impact on the daily lives of millions of people, while gravitational waves reveal insight into the composition and history of the Universe. These physical phenomena, despite being tackled traditionally by different fields of physics, have in common that they are modelled the same way mathematically: as a system of hyperbolic partial differential equations (PDEs). The ExaHyPE project ("An Exascale Hyperbolic PDE Engine") translates this similarity into a software engine that can be quickly adapted to simulate a wide range of hyperbolic partial differential equations. ExaHyPE's key idea is that the user only specifies the physics while the engine takes care of the parallelisation and the interplay of the underlying numerical methods. Consequently, a first simulation code for a new hyperbolic PDE can often be realised within a few hours, a task that traditionally can take weeks, months, or even years for researchers starting from scratch. My main contribution to ExaHyPE is the development of the core infrastructure. This comprises the development and implementation of ExaHyPE's solvers and adaptive mesh refinement procedures, its MPI+X parallelisation, as well as high-level aspects of ExaHyPE's application-tailored code generation, which allows ExaHyPE to be adapted to model many different hyperbolic PDE systems. Like any high-performance computing code, ExaHyPE has to tackle the challenges of the coming exascale computing era, notably network communication latencies and the growing memory wall. In this thesis, I propose memory-efficient realisations of ExaHyPE's solvers that avoid data movement, together with a novel task-based MPI+X parallelisation concept that allows network communication to be hidden behind computation in dynamically adaptive simulations.
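    The communication-hiding idea can be illustrated with the classic non-blocking halo exchange pattern: post the sends and receives first, update the interior (which needs no remote data) while the messages are in flight, and only then finish the boundary. This is a generic mpi4py sketch under simplifying assumptions (1-D periodic domain decomposition, one-cell halos, hypothetical compute_interior/compute_boundary kernels), not ExaHyPE's actual C++ task-based realisation.

        import numpy as np
        from mpi4py import MPI

        def overlapped_step(u, comm, compute_interior, compute_boundary):
            # Non-blocking halo exchange overlapped with interior work.
            rank, size = comm.Get_rank(), comm.Get_size()
            left, right = (rank - 1) % size, (rank + 1) % size
            halo_l, halo_r = np.empty(1), np.empty(1)
            reqs = [comm.Isend(u[:1].copy(), dest=left, tag=0),
                    comm.Isend(u[-1:].copy(), dest=right, tag=1),
                    comm.Irecv(halo_l, source=left, tag=1),
                    comm.Irecv(halo_r, source=right, tag=0)]
            compute_interior(u)            # overlap: hides network latency
            MPI.Request.Waitall(reqs)      # halo values are now available
            compute_boundary(u, halo_l, halo_r)

    Run with, e.g., "mpiexec -n 4 python script.py"; the only contract is that compute_interior does not need the halo values.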