
    86 PFLOPS Deep Potential Molecular Dynamics simulation of 100 million atoms with ab initio accuracy

    We present the GPU version of DeePMD-kit, which, upon training a deep neural network model using ab initio data, can drive extremely large-scale molecular dynamics (MD) simulation with ab initio accuracy. Our tests show that the GPU version is 7 times faster than the CPU version with the same power consumption. The code can scale up to the entire Summit supercomputer. For a copper system of 113,246,208 atoms, the code can perform one nanosecond of MD simulation per day, reaching a peak performance of 86 PFLOPS (43% of the peak). Such unprecedented ability to perform MD simulation with ab initio accuracy opens up the possibility of studying many important issues in materials and molecules, such as heterogeneous catalysis, electrochemical cells, irradiation damage, crack propagation, and biochemical reactions. Comment: 29 pages, 11 figures
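    The workflow this abstract describes — train a model on ab initio data, then let it supply energies and forces inside a classical MD loop — can be illustrated with a self-contained toy. Here a Lennard-Jones potential stands in for the trained network; this is an illustrative sketch, not the DeePMD-kit API:

    ```python
    import numpy as np

    def potential(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones energy and forces. In Deep Potential MD,
        this call is where a trained neural-network model is evaluated."""
        n = len(pos)
        f = np.zeros_like(pos)
        e = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                r = pos[i] - pos[j]
                d2 = r @ r
                inv6 = (sigma**2 / d2) ** 3
                e += 4 * eps * (inv6**2 - inv6)
                fij = 24 * eps * (2 * inv6**2 - inv6) / d2 * r
                f[i] += fij
                f[j] -= fij
        return e, f

    def velocity_verlet(pos, vel, mass, dt, steps):
        """Standard MD integrator; the potential is the only piece that
        changes when swapping in a machine-learned model."""
        _, f = potential(pos)
        for _ in range(steps):
            vel += 0.5 * dt * f / mass
            pos += dt * vel
            _, f = potential(pos)
            vel += 0.5 * dt * f / mass
        return pos, vel

    # Two particles at rest, slightly beyond the potential minimum,
    # so they drift toward each other under the attractive force.
    pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.001, steps=100)
    ```

    The point of the separation is that the integrator is indifferent to where forces come from; the paper's contribution is making the force evaluation both ab initio accurate and fast enough to run at the scale quoted above.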

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision

    Austrian High-Performance-Computing meeting (AHPC2020)

    This booklet is a collection of abstracts presented at the AHPC 2020 conference.

    CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    This report documents the results of a study to address the long-range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

    Knowledge is power: Quantum chemistry on novel computer architectures

    The first chapter of this thesis provides a background of fundamental quantum chemistry concepts. Chapter two contains an analysis of the performance and energy efficiency of various modern computer processor architectures while performing computational chemistry calculations. In chapter three, the processor architectural study is expanded to include parallel computational chemistry algorithms executed across multi-node computer clusters. Chapter four describes a novel computational implementation of the fundamental Hartree-Fock method which significantly reduces computer memory requirements. In chapter five, a case study of quantum chemistry two-electron integral code interoperability is described. The final chapters of this work discuss applications of quantum chemistry. In chapter six, an investigation of the esterification of acetic acid on acid-functionalized silica is presented. In chapter seven, the application of ab initio molecular dynamics to study the photoisomerization and photocyclization of stilbene is discussed. Final concluding remarks are given in chapter eight.


    Memory systems for high-performance computing: the capacity and reliability implications

    Memory systems are significant contributors to the overall power requirements, energy consumption, and operational cost of large high-performance computing (HPC) systems. Limitations of main memory systems in terms of latency, bandwidth, and capacity can significantly affect the performance of HPC applications and can have a strong negative impact on system scalability. In addition, errors in the main memory system can have a strong impact on the reliability, availability, and serviceability of large-scale clusters. This thesis studies capacity and reliability issues in modern memory systems for high-performance computing. The choice of main memory capacity is an important aspect of HPC memory system design. This choice becomes increasingly important now that 3D-stacked memories are entering the market. Compared with conventional DIMMs, 3D memory chiplets provide better performance and energy efficiency but lower memory capacities. Therefore, the adoption of 3D-stacked memories in the HPC domain depends on whether we can find use cases that require much less memory than is available now. We analyze the memory capacity requirements of important HPC benchmarks and applications. The study identifies the HPC applications and use cases with memory footprints that could be provided by 3D-stacked memory chiplets, making a first step towards the adoption of this novel technology in the HPC domain. For HPC domains where large memory capacities are required, we propose scaling-in of HPC applications to reduce energy consumption and the running time of a batch of jobs. We also propose upgrading the per-node memory capacity, which enables a greater degree of scaling-in and additional energy savings. The memory system is one of the main causes of hardware failures. In each generation, the DRAM chip density and the amount of memory in systems increase, while the DRAM technology process constantly shrinks.
    Therefore, we can expect DRAM failures to have a serious impact on the reliability of future systems. This thesis studies DRAM errors observed on a production HPC system over a period of two years. We clearly distinguish between two different approaches to DRAM error analysis: categorical analysis and the analysis of error rates. The first approach compares errors at the DIMM level and partitions the DIMMs into various categories, e.g. based on whether they did or did not experience an error. The second approach is to analyze error rates, i.e., to present the total number of errors relative to other statistics, typically the number of MB-hours or the duration of the observation period. We show that although DRAM error analysis may be performed with both approaches, they are not interchangeable and can lead to completely different conclusions. We further demonstrate the importance of providing statistical significance and presenting results that have practical value and real-life use. We show that various widely accepted approaches to DRAM error analysis may provide data that appear to support an interesting conclusion but are not statistically significant, meaning that they could merely be the result of chance. We hope the study of methods for DRAM error analysis presented in this thesis will become a standard for any future analysis of DRAM errors in the field.
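    The distinction this abstract draws — categorical DIMM-level analysis versus error-rate analysis — can be made concrete with synthetic data where the two views support opposite conclusions (the fleet composition and error counts below are invented for illustration):

    ```python
    # Synthetic fleet: (dimm_type, error_count) per DIMM over the study period.
    # Type "A" has many DIMMs with exactly one error each; type "B" has few
    # affected DIMMs, but one small group logs a large burst of errors.
    fleet = [("A", 1)] * 30 + [("A", 0)] * 70 + [("B", 0)] * 95 + [("B", 500)] * 5

    def categorical(fleet, dimm_type):
        """Categorical view: fraction of DIMMs of a type with at least one error."""
        counts = [e for t, e in fleet if t == dimm_type]
        return sum(e > 0 for e in counts) / len(counts)

    def error_rate(fleet, dimm_type, mb_hours_per_dimm=1.0):
        """Rate view: total errors per MB-hour across all DIMMs of a type."""
        counts = [e for t, e in fleet if t == dimm_type]
        return sum(counts) / (len(counts) * mb_hours_per_dimm)

    # Categorical view: type A looks worse (30% vs 5% of DIMMs affected).
    print(categorical(fleet, "A"), categorical(fleet, "B"))   # 0.3 0.05
    # Rate view: type B looks far worse (25 vs 0.3 errors per MB-hour).
    print(error_rate(fleet, "A"), error_rate(fleet, "B"))     # 0.3 25.0
    ```

    This is exactly the non-interchangeability the thesis argues: the same raw data ranks the two DIMM types in opposite order depending on which summary is reported, which is why the choice of analysis (and its statistical significance) matters.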