
    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of demand on the 2025 timescale is at least two orders of magnitude greater than what is available today, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To make the best use of ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) the ability to map workflows onto HPC resources, c) ASCR facilities that can accommodate workflows run by collaborations with thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision.
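    As a rough, hypothetical back-of-envelope sketch (our own arithmetic, not from the report), the scale claim above implies a steep sustained growth rate: a hundredfold increase in demand over the ten years from the 2015 review to the 2025 timescale corresponds to roughly 58% growth per year.

    ```python
    # Hypothetical back-of-envelope sketch (not from the report): the constant
    # annual growth rate implied by a given overall increase in computing
    # demand between the 2015 review and the 2025 timescale it targets.

    def implied_annual_growth(factor: float, years: int) -> float:
        """Constant annual growth rate that yields `factor` over `years`."""
        return factor ** (1.0 / years) - 1.0

    # "at least two orders of magnitude, and in some cases greater"
    for factor in (100.0, 1000.0):
        rate = implied_annual_growth(factor, years=10)
        print(f"{factor:6.0f}x over 10 years -> {rate:.1%} per year")
    # 100x -> 58.5% per year; 1000x -> 99.5% per year
    ```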

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society, and the CSE community is at the core of this transformation. However, a combination of disruptive developments (the architectural complexity of extreme-scale computing, the data revolution engulfing the planet, and the specialization required to follow applications to new frontiers) is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS

    GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this through parallelization on all levels, combined with a constant focus on absolute performance. Release 4.6 of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading acceleration, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation, in particular a very fine-grained task parallelism. We also discuss the software management, code peer review, and continuous integration testing required for a project of this complexity. Comment: EASC 2014 conference proceedings.
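    As a rough illustration of the neighbor-searching idea the abstract alludes to, the sketch below shows a minimal cell-list search; this is our own simplified example, not GROMACS's actual cluster-pair algorithm, and all names in it are invented for illustration.

    ```python
    import numpy as np

    # Illustrative cell-list neighbor search (our own minimal sketch, not the
    # cluster-pair scheme GROMACS actually uses). Particles are binned into
    # cubic cells with edge >= cutoff, so candidate pairs come only from the
    # 27 surrounding cells, turning the naive O(N^2) search into roughly O(N).

    def neighbor_pairs(positions, box, cutoff):
        ncell = int(box // cutoff)          # cells per dimension
        assert ncell >= 3, "box must span at least 3 cells per dimension"
        edge = box / ncell
        cells = {}                          # cell index -> list of particle indices
        for i, p in enumerate(positions):
            idx = tuple((p // edge).astype(int) % ncell)
            cells.setdefault(idx, []).append(i)

        pairs = set()
        offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                                for dy in (-1, 0, 1)
                                for dz in (-1, 0, 1)]
        for (cx, cy, cz), members in cells.items():
            for dx, dy, dz in offsets:
                nb = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                for i in members:
                    for j in cells.get(nb, ()):
                        if i < j:
                            d = positions[i] - positions[j]
                            d -= box * np.round(d / box)   # minimum-image convention
                            if d @ d < cutoff * cutoff:
                                pairs.add((i, j))
        return pairs

    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 5.0, size=(200, 3))
    print(len(neighbor_pairs(pts, box=5.0, cutoff=1.0)), "pairs within cutoff")
    ```

    Production codes such as GROMACS go well beyond this, clustering particles for SIMD-friendly memory layouts and rebuilding the list only every few steps with a buffered cutoff.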

    Nuclear Theory and Science of the Facility for Rare Isotope Beams

    The Facility for Rare Isotope Beams (FRIB) will be a world-leading laboratory for the study of nuclear structure, reactions, and astrophysics. Experiments with intense beams of rare isotopes produced at FRIB will guide us toward a comprehensive description of nuclei, elucidate the origin of the elements in the cosmos, help provide an understanding of matter in neutron stars, and establish the scientific foundation for innovative applications of nuclear science to society. FRIB will be essential for gaining access to key regions of the nuclear chart, where the measured nuclear properties will challenge established concepts and highlight shortcomings and needed modifications to current theory. Conversely, nuclear theory will play a critical role in providing the intellectual framework for the science at FRIB, and will provide invaluable guidance to FRIB's experimental programs. This article gives an overview of the broad scope of the FRIB theory effort, which reaches beyond the traditional fields of nuclear structure, reactions, and nuclear astrophysics to explore exciting interdisciplinary boundaries with other areas. Keywords: Nuclear Structure and Reactions; Nuclear Astrophysics; Fundamental Interactions; High Performance Computing; Rare Isotopes; Radioactive Beams. Comment: 20 pages, 7 figures.

    GNSS transpolar earth reflectometry exploriNg system (G-TERN): mission concept

    The Global Navigation Satellite System (GNSS) Transpolar Earth Reflectometry exploriNg system (G-TERN) was proposed in response to ESA's Earth Explorer 9 revised call by a team of 33 multi-disciplinary scientists. The primary objective of the mission is to quantify, at high spatio-temporal resolution, crucial characteristics, processes, and interactions between sea ice and other Earth system components, in order to advance the understanding and prediction of climate change and its impacts on the environment and society. The objective is articulated through three key questions. 1) In a rapidly changing Arctic regime, and under the resilient Antarctic sea-ice trend, how will highly dynamic forcings and couplings between the various components of the ocean, atmosphere, and cryosphere modify or influence the processes governing the characteristics of the sea-ice cover (ice production, growth, deformation, and melt)? 2) What are the impacts of extreme events and feedback mechanisms on sea-ice evolution? 3) What are the effects of cryosphere behaviors, whether rapidly changing or resiliently stable, on the global oceanic and atmospheric circulation and on mid-latitude extreme events? To contribute to answering these questions, G-TERN will measure key parameters of the sea ice, the oceans, and the atmosphere with frequent and dense coverage over polar areas, becoming a “dynamic mapper” of ice conditions, ice production, and ice loss on multiple time and space scales, and of the surrounding environment. Over polar areas, G-TERN will measure sea-ice surface elevation (<10 cm precision), roughness, and polarimetric aspects at 30-km resolution, with full coverage every 3 days. G-TERN will implement the interferometric GNSS reflectometry concept from a single satellite in near-polar orbit, with the capability for 12 simultaneous observations. Unlike currently orbiting GNSS reflectometry missions, G-TERN uses the full available GNSS bandwidth to improve its ranging measurements. The lifetime would be 2025-2030, or optimally 2025-2035, covering key stages of the transition toward a nearly ice-free Arctic Ocean in summer. This paper describes the mission objectives, reviews the measurement techniques, summarizes the suggested implementation, and estimates the expected performance.
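    As a rough illustration of why processing the full GNSS bandwidth improves ranging (our own sketch, with an assumed ~20 MHz full-band figure that is not from the paper): the raw delay resolution of a ranging code is one chip length, the speed of light divided by the chip rate.

    ```python
    # Our own illustrative numbers, not from the paper. The raw delay
    # resolution of a GNSS ranging code is one chip length, c / chip_rate,
    # so widening the processed bandwidth from the ~1 MHz C/A code used by
    # current GNSS-R missions toward the full GNSS band shrinks the raw
    # range cell proportionally.

    C = 299_792_458.0  # speed of light, m/s

    def chip_length(chip_rate_hz: float) -> float:
        """Spatial length of one ranging-code chip, in meters."""
        return C / chip_rate_hz

    for label, rate in [("GPS C/A code (1.023 Mchip/s)", 1.023e6),
                        ("GPS P(Y) code (10.23 Mchip/s)", 10.23e6),
                        ("full-band processing (~20 MHz, assumed)", 20e6)]:
        print(f"{label:40s} -> ~{chip_length(rate):6.1f} m per chip")
    # ~293 m, ~29 m, ~15 m respectively; sub-decimeter elevation precision
    # such as the <10 cm figure quoted above then comes from interferometric
    # processing, waveform fitting, and averaging many looks, not from a
    # single raw range cell.
    ```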

    Exascale Deep Learning for Climate Analytics

    We extract pixel-level masks of extreme weather patterns using variants of the Tiramisu and DeepLabv3+ neural networks. We describe improvements to the software frameworks, input pipeline, and network training algorithms necessary to efficiently scale deep learning on the Piz Daint and Summit systems. The Tiramisu network scales to 5300 P100 GPUs with a sustained throughput of 21.0 PF/s and a parallel efficiency of 79.0%. DeepLabv3+ scales up to 27360 V100 GPUs with a sustained throughput of 325.8 PF/s and a parallel efficiency of 90.7% in single precision. By taking advantage of the FP16 Tensor Cores, a half-precision version of the DeepLabv3+ network achieves a peak throughput of 1.13 EF/s and a sustained throughput of 999.0 PF/s. Comment: 12 pages, 5 tables, 4 figures; Supercomputing Conference, November 11-16, 2018, Dallas, TX, USA.
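    As an illustrative back-of-the-envelope check (our own arithmetic, not the paper's methodology), the quoted parallel efficiencies relate the aggregate sustained throughput to an implied single-GPU baseline via efficiency = sustained / (n_gpus * per_gpu_baseline):

    ```python
    # Our own arithmetic from the figures quoted in the abstract: back out
    # the per-GPU throughput at scale and the single-GPU baseline implied
    # by each reported parallel efficiency.

    runs = [
        # (name, sustained throughput in TF/s, GPU count, parallel efficiency)
        ("Tiramisu on 5300 P100s", 21.0e3, 5300, 0.790),
        ("DeepLabv3+ on 27360 V100s", 325.8e3, 27360, 0.907),
    ]

    for name, sustained_tf, n_gpus, eff in runs:
        per_gpu = sustained_tf / n_gpus   # achieved TF/s per GPU at full scale
        baseline = per_gpu / eff          # implied single-GPU baseline TF/s
        print(f"{name}: {per_gpu:.2f} TF/s per GPU at scale, "
              f"implying a ~{baseline:.2f} TF/s single-GPU baseline")
    # ~3.96 TF/s (baseline ~5.0) and ~11.91 TF/s (baseline ~13.1) respectively
    ```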