
    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’, held March 1–6, 2020, at Schloss Dagstuhl and attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources and costs. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of Petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
    Peer Reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jezequel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortiz, Francesco Rizzi, Ulrich Rude, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thonnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).
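    As a rough illustration of the scale argument above, the sketch below (Python, not from the article) reproduces the 20 MW / 48 h energy and operation-count estimate and uses a Young/Daly-style checkpoint-interval model to show how the useful fraction of machine time collapses once the mean time between failures approaches the checkpoint and recovery costs; the checkpoint time, recovery time, and energy price are hypothetical placeholder values.

```python
# Back-of-the-envelope sketch of the exascale resilience argument.
# Checkpoint/recovery costs and the energy price are hypothetical assumptions.
import math

# Energy and operation count for the scenario quoted in the abstract:
# a 48-hour run on a 20 MW exascale (10^18 flop/s) system.
power_mw = 20.0
runtime_h = 48.0
flops_per_s = 1e18

energy_kwh = power_mw * 1e3 * runtime_h          # 20 MW * 48 h = 960,000 kWh, about one million kWh
energy_cost_eur = energy_kwh * 0.10              # assuming ~0.10 EUR per kWh
total_flops = flops_per_s * runtime_h * 3600.0   # ~1.7e23 floating-point operations

print(f"energy ~ {energy_kwh:,.0f} kWh (~{energy_cost_eur:,.0f} EUR)")
print(f"work   ~ {total_flops:.2e} floating-point operations")

def useful_fraction(mtbf_s, checkpoint_s, recovery_s):
    """Approximate fraction of wall-clock time spent on useful computation
    when checkpointing at the Young first-order optimal interval."""
    interval = math.sqrt(2.0 * checkpoint_s * mtbf_s)   # optimal checkpoint interval
    checkpoint_overhead = checkpoint_s / interval        # time lost writing checkpoints
    # Expected recovery plus rework per failure, amortized over one MTBF.
    failure_overhead = (recovery_s + interval / 2.0) / mtbf_s
    return max(0.0, 1.0 - checkpoint_overhead - failure_overhead)

checkpoint_s = 600.0   # hypothetical: 10 min to write tens of PB of state
recovery_s = 900.0     # hypothetical: 15 min to restart from the last checkpoint
for mtbf_h in (24.0, 4.0, 0.5):
    frac = useful_fraction(mtbf_h * 3600.0, checkpoint_s, recovery_s)
    print(f"MTBF {mtbf_h:5.1f} h -> useful fraction ~ {frac:.2f}")
```

    With these placeholder costs the useful fraction drops from roughly 0.9 at a 24 h MTBF to essentially zero at a 30 min MTBF, which is the regime in which the abstract argues that naive checkpoint/restart no longer makes progress.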

    A massively parallel combination technique for the solution of high-dimensional PDEs

    The solution of high-dimensional problems, especially high-dimensional partial differential equations (PDEs) that require the joint discretization of more than the usual three spatial dimensions and time, is one of the grand challenges in high performance computing (HPC). Due to the exponential growth in the number of unknowns (the so-called curse of dimensionality), it is in many cases not feasible to resolve the simulation domain as finely as the physical problem requires. Although the upcoming generation of exascale HPC systems theoretically provides the computational power to handle simulations that are out of reach today, this is expected to be achievable only with new numerical algorithms that efficiently exploit the massive parallelism of these systems. The sparse grid combination technique is a numerical scheme in which the problem (e.g., a high-dimensional PDE) is solved on different coarse and anisotropic computational grids (so-called component grids), which are then combined to approximate the solution at a much higher target resolution than any of the individual component grids provides. This way, the total number of unknowns is drastically reduced compared to solving the problem directly on a regular grid with the target resolution, and the curse of dimensionality is thus mitigated. The combination technique is a promising approach for solving high-dimensional problems on future exascale systems. It offers two levels of parallelism: the component grids can be computed in parallel, independently and asynchronously of each other, and the computation of each component grid can be parallelized as well. This reduces the demand for global communication and synchronization, which is expected to be one of the limiting factors preventing classical discretization techniques from scaling on exascale systems. Furthermore, the combination technique enables novel approaches to deal with the increasing fault rates expected on these systems: with the fault-tolerant combination technique it is possible to recover from failures without time-consuming checkpoint-restart mechanisms. In this work, new algorithms and data structures are presented that enable a massively parallel and fault-tolerant combination technique for time-dependent PDEs on large-scale HPC systems. The scalability of these algorithms is demonstrated on up to 180,225 processor cores of the supercomputer Hazel Hen. Furthermore, the parallel combination technique is applied to gyrokinetic simulations in GENE, a software package for the simulation of plasma microturbulence in fusion devices.
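    To make the notion of component grids more concrete, the following minimal sketch (Python; not the implementation described in this work) enumerates the anisotropic level vectors and alternating binomial coefficients of the classical combination technique for a small 2D example. The function name and the level convention (each level l_i >= 1) are illustrative assumptions.

```python
# Sketch of the classical sparse grid combination scheme: for a target level n in
# d dimensions, anisotropic component grids with |l|_1 in a narrow band are
# combined with alternating binomial coefficients.
from itertools import product
from math import comb

def combination_scheme(dim, n):
    """Return {level_vector: coefficient} for the classical combination technique."""
    scheme = {}
    for q in range(dim):                      # the d "diagonals" of component grids
        coeff = (-1) ** q * comb(dim - 1, q)
        target_sum = n - q                    # level vectors with |l|_1 = n - q
        for level in product(range(1, target_sum + 1), repeat=dim):
            if sum(level) == target_sum:
                scheme[level] = scheme.get(level, 0) + coeff
    return scheme

scheme = combination_scheme(dim=2, n=4)
for level, coeff in sorted(scheme.items()):
    # A component grid with level vector (l1, ..., ld) has on the order of 2**l_i
    # points per dimension, i.e. it is far coarser than the full target grid (n, ..., n).
    print(f"component grid level {level}: coefficient {coeff:+d}")

# Consistency check: the coefficients of a combination scheme sum to 1, so
# combining the interpolated component solutions reproduces constants exactly.
assert sum(scheme.values()) == 1
```

    For dim=2 and n=4 this yields the three grids with l1+l2=4 with coefficient +1 and the two grids with l1+l2=3 with coefficient -1, illustrating how the technique trades one fine regular grid for a handful of coarse anisotropic ones that can be solved independently and in parallel.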