
    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’ held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds, achieved at the cost of an enormous amount of resources and energy. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of Petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements?
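    The energy figures quoted above are easy to verify with back-of-envelope arithmetic. The sketch below reproduces them; the electricity price of 0.10 EUR/kWh is an assumption chosen to match the abstract's "about 100k Euro" estimate, not a figure from the seminar report:

```python
# Back-of-envelope check of the exascale energy figures quoted above.
power_mw = 20            # predicted exascale system power draw (from the abstract)
runtime_h = 48           # duration of the computation (from the abstract)
energy_kwh = power_mw * 1_000 * runtime_h   # MW -> kW, times hours

# Assumed electricity price; ~0.10 EUR/kWh reproduces the ~100k Euro estimate.
price_eur_per_kwh = 0.10
cost_eur = energy_kwh * price_eur_per_kwh

print(energy_kwh)        # 960000, i.e. about a million kWh
print(round(cost_eur))   # 96000, i.e. roughly 100k Euro
```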
    While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically. Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jézéquel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortí, Francesco Rizzi, Ulrich Rüde, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thönnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).

    Advances in Energy System Optimization

    The papers presented in this open access book address diverse challenges in decarbonizing energy systems, ranging from operational to investment planning problems, from market economics to technical and environmental considerations, from distribution grids to transmission grids, and from theoretical considerations to data provision concerns and applied case studies. While most papers have a clear methodological focus, they address policy-relevant questions at the same time. The target audience therefore includes academics and experts in industry as well as policy makers who are interested in state-of-the-art quantitative modelling of policy-relevant problems in energy systems. The 2nd International Symposium on Energy System Optimization (ISESO 2018) was held at the Karlsruhe Institute of Technology (KIT) under the symposium theme “Bridging the Gap Between Mathematical Modelling and Policy Support” on October 10–11, 2018. ISESO 2018 was organized by KIT, the Heidelberg Institute for Theoretical Studies (HITS), Heidelberg University, the German Aerospace Center and the University of Stuttgart.

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer’s series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA’s first funding phase, and provides an overview of SPPEXA’s contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    Efficient fault tolerance for selected scientific computing algorithms on heterogeneous and approximate computer architectures

    Scientific computing and simulation technology play an essential role in solving central challenges in science and engineering. The high computational power of heterogeneous computer architectures makes it possible to accelerate applications in these domains, which are often dominated by compute-intensive mathematical tasks. Scientific, economic and political decision processes increasingly rely on such applications and therefore create a strong demand for correct and trustworthy results. However, continued semiconductor technology scaling increasingly poses serious threats to the reliability and efficiency of upcoming devices. Different reliability threats can cause crashes or erroneous results without indication. Software-based fault tolerance techniques can protect algorithmic tasks by adding appropriate operations to detect and correct errors at runtime. Major challenges arise from the runtime overhead of such operations and from rounding errors in floating-point arithmetic that can cause false positives. The end of Dennard scaling poses central challenges to further increasing compute efficiency between semiconductor technology generations. Approximate computing exploits the inherent error resilience of different applications to achieve efficiency gains with respect to, for instance, power, energy, and execution time. However, scientific applications often impose strict accuracy requirements that demand careful use of approximation techniques. This thesis provides fault tolerance and approximate computing methods that enable the reliable and efficient execution of linear algebra operations and Conjugate Gradient solvers on heterogeneous and approximate computer architectures. The presented fault tolerance techniques detect and correct errors at runtime with low runtime overhead and high error coverage. At the same time, these fault tolerance techniques are exploited to enable the execution of the Conjugate Gradient solvers on approximate hardware by monitoring the underlying error resilience and adjusting the approximation error accordingly. In addition, parameter evaluation and estimation methods are presented that determine the computational efficiency of application executions on approximate hardware. An extensive experimental evaluation demonstrates the efficiency and efficacy of the presented methods with respect to the runtime overhead of error detection and correction, the error coverage, and the energy reduction achieved in executing the Conjugate Gradient solvers on approximate hardware.
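    The checksum idea behind many software-based fault tolerance schemes for linear algebra can be sketched briefly. The following is a minimal illustration of checksum-protected matrix-vector multiplication, not the thesis's actual implementation; the rounding tolerance (the source of the false positives mentioned above) is a simplistic assumed placeholder:

```python
def matvec(A, x):
    # Plain dense matrix-vector product on lists of lists.
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def checksummed_matvec(A, x):
    # Append a checksum row (the column sums of A) before the product;
    # in exact arithmetic sum(result) equals the extra checksum entry.
    checksum_row = [sum(col) for col in zip(*A)]
    y = matvec(A + [checksum_row], x)
    return y[:-1], y[-1]

def verify(result, check, rel_tol=1e-12):
    # An exact comparison would trip over floating-point rounding, hence
    # the (assumed, simplistic) relative tolerance.
    return abs(sum(result) - check) <= rel_tol * max(1.0, abs(check))

A = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
result, check = checksummed_matvec(A, x)
print(result, verify(result, check))   # [3.0, 7.0] True

result[0] += 1.0                       # inject a silent error
print(verify(result, check))           # False: the mismatch is detected
```

    Recovery (e.g. recomputation or correction) would build on such a detection step; this sketch shows detection only.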

    Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.


    Proceedings, MSVSCC 2016

    Proceedings of the 10th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 14, 2016 at VMASC in Suffolk, Virginia.

    Modeling And Improving Oxygen Carrier Performance In Chemical Looping Combustion Systems

    Countries across the world have differing expectations for carbon dioxide (CO2) capture, and their willingness to commit to international agreements continually changes. At present, the CO2 capture market is weak, as industries are reluctant to take on the costs and risks associated with implementing capture technologies. Globally, and in the United States of America (U.S.A.) in particular, the perception is that emerging energy technologies with carbon capture are too expensive or inefficient to attract investors without government backing and subsidies. Coal usage has accordingly declined. Expanding the coal value chain beyond electricity generation could attract new investment and improve confidence in novel carbon capture technologies. Chemical looping combustion is a technology that can utilize coal and benefit both the electricity and valuable chemicals markets. This flexibility makes chemical looping combustion a promising route to the time flexibility so urgently needed within the U.S.A. electricity generation sector. Fostering the development and scalability of chemical looping combustion related technologies, especially using coal, rather than focusing purely on the expected cost reduction and usefulness of chemical looping combustion as a CO2 capture technology, can ensure stability within the electricity generation and coal industry of the U.S.A. Chemical looping combustion is an induced fuel combustion process that uses recyclable redox materials as oxygen carriers to transfer oxygen selectively from an air stream to a fuel reactor, thus eliminating the requirement for end-of-pipe CO2 gas separation processes. To date, no oxygen carriers have been identified or developed that exhibit adequate long-term performance. There is also a lack of experience related to the design and operation of full-scale chemical looping combustion systems.
Oxygen carriers serve as oxygen sorbents that release or adsorb oxygen, depending on the temperature, pressure and gas composition within the chemical looping combustion system. Oxygen carrier performance is mainly characterized by its affinity to react under both oxidizing and reducing conditions and its resistance to attrition. Based on the research opportunities, two primary hypotheses have been developed: i) A laboratory-scale evaluation system, operating under high temperature and reacting conditions, can be used to assess oxygen carrier performance. The experimental results can be used to develop correlations for determining oxygen carrier lifetime in scaled-up processes. ii) A spouted fluid bed reactor can improve carbon conversion efficiencies as compared to a bubbling fluidized bed reactor. Computational fluid dynamic simulations can be used to model the movement of oxygen carriers in such a spouted fluid bed reactor to gain a better understanding of the transport phenomena involved deep within the reactor. To prove or disprove the research hypotheses, the research scope was broken down into three main efforts: i) Evaluate several materials being considered by the chemical looping combustion development community to ascertain whether a single test procedure is adequate for oxygen carrier performance characterization ii) Further develop the oxygen carrier performance evaluation methodology (based on jet attrition testing) to include a second attrition source (cyclonic attrition) critical in chemical looping combustion systems involving circulation of oxygen carriers iii) Assess whether a spouted fluid bed can be used for chemical looping combustion and if it is scalable using a modular approach based on experimental and computational fluid dynamic tools. This effort will target the development of a computational modeling tool for the design of a multi-zone spouted fluid bed. 
    Parts i) and ii) of the research scope pertained to testing different oxygen carriers in a jet attrition unit and a cyclonic attrition unit. An attrition unit can be defined as a device used to obtain information about the ability of a material to resist particle size reduction. The ASTM D5757 test method is typically used to determine the relative attrition characteristics of fluid catalytic cracking (FCC) catalysts under ambient conditions. In contrast to the ASTM D5757 test method, the jet and cyclonic attrition units were set up to expose the oxygen carriers to various operating conditions that could typically be encountered in actual chemical looping combustion systems. The operating principle of the jet- and cyclonic-induced attrition systems provides a vast improvement over previous methods that neglect chemical and thermal stresses. The cyclonic attrition unit ultimately represents a more favorable test method for assessing the attrition of oxygen carriers compared to the jet attrition unit. The cyclonic attrition test method merely speeds up the particle impact frequency compared to large-scale cyclones; however, the particle impact velocity within the cyclonic attrition unit is similar to that of large-scale cyclones (9.0–27 m/s). The cyclonic attrition unit can therefore provide relevant attrition data on an oxygen carrier within 9 hours, using as little as 70 grams of material. Two attrition models (cyclonic and jet) were identified that could be used to investigate attrition rates at operational chemical looping combustion conditions. The models were based on the concept of efficiency within a comminution process and related particle attrition to the kinetic energy used to produce fines. The cyclonic attrition model provided the best fit for the attrition data, with coefficients of determination ≄ 0.94.
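    The model-fitting step can be illustrated schematically. The sketch below fits a proportional attrition model (fines produced proportional to kinetic energy expended) by least squares and reports the coefficient of determination; the data points are invented purely for illustration and are not the study's measurements:

```python
def r_squared(y, y_pred):
    # Coefficient of determination of a fitted model.
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_pred))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical measurements: fines produced (g) vs. kinetic energy (J).
energy = [10.0, 20.0, 40.0, 80.0]
fines = [0.9, 2.1, 3.8, 8.3]

# Proportional model fines ~= k * energy, least squares through the origin.
k = sum(e * f for e, f in zip(energy, fines)) / sum(e * e for e in energy)
predicted = [k * e for e in energy]
print(round(r_squared(fines, predicted), 3))
```

    An R² near 1 indicates, as in the study's ≄ 0.94 results, that the kinetic-energy model explains most of the variation in fines production.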
Part iii) of the research scope related to exploring the use of a spouted fluid bed as a reactor configuration for chemical looping combustion. The spouted fluid bed was identified as a suitable configuration to improve fuel conversion and operational flexibility over the typically employed bubbling fluidized bed designs. This part of the study had two objectives: i) to assess the viability of a single-spouted fluid bed as an efficient chemical looping combustion reactor, and ii) to assess if computational fluid dynamic based simulations can be employed to show the hydrodynamic behavior of both a single- and multi-spouted fluid bed reactor. A modeling and experimental approach were followed to accomplish the objectives. Firstly, Multiphase Flow with Interphase eXchanges (MFiX) software was used to establish a spouted fluid bed reactor design using the two-fluid model. An experimental setup was built to supplement the model. The experimental setup was modified for testing under high temperature, reacting conditions (1073 - 1273 K). The setup was operated in either a spouted fluid bed or a bubbling bed regime, to compare the performance attributes of each using a mixture of carbon monoxide and hydrogen as fuel. For the single-spouted fluid bed investigation, the cold flow model results provided key information for rapid experimental design and operating envelope determination. The single-spouted fluid bed modeling and experimental results illustrated the potential of the configuration to improve gas/solid contact, lower energy requirements and increase operational robustness in comparison to a bubbling fluidized bed reactor. The cold flow models proved adequate in depicting the intermittent spouting regime as well as providing valuable information pertaining to material circulation rate. 
    The modeling and experimental work on the single-spouted fluid bed reactor were used as the starting point to investigate the scalability of the system into a multi-spouted fluid bed reactor. MFiX software was again used to design a multi-spouted fluid bed and compare the hydrodynamic aspects of the system to those of a bubbling fluidized bed. A reactor comprising nine spout/draft tubes, arranged in a 3x3 setup, was modeled in 2-D using the two-fluid model. The model incorporated both inlet and outlet regions to study the bulk movement of solids within the reactor design. The focus of this work was on capturing the hydrodynamic trends associated with a multi-spouted fluid bed. The modeling results indicated that the solids in a multi-spout system have a slightly narrower residence time distribution than those in a bubbling fluidized bed. The narrower residence time distribution could potentially improve fuel conversion in chemical looping combustion systems. Ultimately, a baseline model was configured that can be used to investigate alternative layouts of modular spouted fluid bed reactors for various applications.

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss–Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
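    The role such companion formulas play, estimating the error of a Gaussian rule by pairing it with a rule of higher degree, can be illustrated with ordinary Gauss-Legendre rules. The sketch below uses a 3-point rule as the companion to a 2-point rule; it does not construct the paper's optimal averaged formulas, which are more involved:

```python
import math

# 2- and 3-point Gauss-Legendre rules on [-1, 1]: (nodes, weights).
G2 = ([-1 / math.sqrt(3), 1 / math.sqrt(3)], [1.0, 1.0])
G3 = ([-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)], [5 / 9, 8 / 9, 5 / 9])

def quad(rule, f):
    # Apply a quadrature rule to a function f.
    nodes, weights = rule
    return sum(w * f(x) for x, w in zip(nodes, weights))

f = math.exp
exact = math.e - 1 / math.e            # integral of exp over [-1, 1]

approx = quad(G2, f)
estimate = abs(quad(G3, f) - approx)   # error estimate for the 2-point rule
actual = abs(exact - approx)
print(f"{estimate:.6f} vs {actual:.6f}")  # the estimate tracks the true error
```

    The averaged formulas of the abstract serve the same purpose as the companion rule here, but exist in cases where Gauss-Kronrod extensions (the classical companions) do not.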

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.