123 research outputs found

    Simulating the weak death of the neutron in a femtoscale universe with near-Exascale computing

    The fundamental particle theory called Quantum Chromodynamics (QCD) dictates everything about protons and neutrons, from their intrinsic properties to the interactions that bind them into atomic nuclei. Quantities that cannot be fully resolved through experiment, such as the neutron lifetime (whose precise value is important for the existence of the light atomic elements that make the sun shine and life possible), may be understood through numerical solutions to QCD. We directly solve QCD using Lattice Gauge Theory and calculate nuclear observables such as the neutron lifetime. We have developed an improved algorithm that exponentially decreases the time-to-solution and applied it on the new CORAL supercomputers, Sierra and Summit. We use run-time autotuning to distribute GPU resources, achieving 20% better performance at low node count. We also developed optimal application mapping through a job manager, which allows CPU and GPU jobs to be interleaved, yielding 15% of peak performance when deployed across large fractions of CORAL.
    Comment: 2018 Gordon Bell Finalist; 9 pages, 9 figures; v2 fixes 2 typos and appends an acknowledgement.
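    The abstract does not spell out how the run-time autotuning works; as a minimal sketch of the general idea only (time a few candidate GPU-resource configurations at start-up and keep the fastest), the following Python is a rough illustration in which the `kernel` callable and the candidate configurations are invented placeholders, not the paper's actual search space:

```python
import time

def autotune(kernel, candidate_configs, warmup=1, reps=3):
    """Time each candidate configuration and return the fastest one."""
    best_cfg, best_time = None, float("inf")
    for cfg in candidate_configs:
        for _ in range(warmup):                 # discard cold-start effects
            kernel(cfg)
        start = time.perf_counter()
        for _ in range(reps):
            kernel(cfg)
        elapsed = (time.perf_counter() - start) / reps
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg

# Illustrative use: choose a (made-up) resource split for one solve on a node.
best = autotune(lambda cfg: sum(i * i for i in range(100_000 // cfg["gpus"])),
                [{"gpus": 1}, {"gpus": 2}, {"gpus": 4}])
print("chosen configuration:", best)
```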

    Development of a system for forecasting supercomputer performance

    Master's graduation project. Research purpose: the aim of this graduation work is to build a system in Python for forecasting the performance of supercomputers. Forecasting is one of the most important topics of our time, and Python, with its wide range of libraries, is one of the most widely used languages for building forecasting systems. Such a system for forecasting supercomputer performance has been created here. Research result: the graduation work is devoted to forecasting the performance of modern supercomputers. Its main stages were: selection of modern supercomputers; review of forecasting methods; and use of the statsmodels library in Python.
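    As a hypothetical illustration of the statsmodels-based approach described above, the sketch below fits a simple ARIMA model to a short series of made-up peak-performance figures (the numbers, years, and model order are illustrative, not the project's actual data or method):

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Illustrative peak-performance figures (petaflop/s), one value per year.
perf = pd.Series(
    [17.6, 33.9, 33.9, 93.0, 93.0, 122.3, 148.6, 442.0],
    index=pd.period_range("2012", periods=8, freq="Y"),
)

model = ARIMA(perf, order=(1, 1, 1))   # a simple, untuned ARIMA(p, d, q) choice
fit = model.fit()
print(fit.forecast(steps=3))           # forecast the next three years
```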

    GPGPU Reliability Analysis: From Applications to Large Scale Systems

    Over the past decade, GPUs have become an integral part of mainstream high-performance computing (HPC) facilities. Since applications running on HPC systems are usually long-running, any error or failure can result in a significant loss of scientific productivity and system resources. Even worse, since HPC systems face severe resilience challenges as they progress towards exascale computing, it is imperative to develop a better understanding of the reliability of GPUs. This dissertation fills this gap by providing an understanding of the effects of soft errors on the entire system and on specific applications. To understand system-level reliability, a large-scale study of GPU soft errors in the field is conducted. The occurrences of GPU soft errors are linked to several temporal and spatial features, such as specific workloads, node location, temperature, and power consumption. Further, machine learning models are proposed to predict error occurrences on GPU nodes, so that the costly error protection mechanisms can be turned on or off proactively and dynamically based on the prediction results. To understand the effects of soft errors at the application level, an effective fault-injection framework is designed to characterize the reliability and resilience of GPGPU applications. The framework reduces the tremendous number of fault-injection locations to a manageable size while still preserving high accuracy, and it is validated with both single-bit and multi-bit fault models across various GPGPU benchmarks. Lastly, building on the proposed fault-injection framework, this dissertation develops a hierarchical approach to understanding the error resilience characteristics of GPGPU applications at the kernel, CTA, and warp levels. In addition, given that some application outputs corrupted by soft errors may still be acceptable, a use case shows how to enable low-overhead yet reliable GPU computing for GPGPU applications.
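    The fault-injection framework itself is not detailed in the abstract; the following is a minimal sketch of the basic operation such frameworks rely on, flipping a single bit of a value and checking whether the corruption is masked or surfaces as silent data corruption. The toy workload and function names are illustrative, not the dissertation's tool:

```python
import random
import struct

def flip_bit(value, bit=None):
    """Return `value` with one bit of its float32 representation flipped."""
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    bit = random.randrange(32) if bit is None else bit
    return struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))[0]

# Toy workload: a fault-free "golden" run versus a run with one injected flip.
vals = [float(i) for i in range(1024)]
golden = sum(v * v for v in vals)
vals[512] = flip_bit(vals[512])                 # inject a single-bit soft error
faulty = sum(v * v for v in vals)
print("masked" if faulty == golden else "silent data corruption", golden, faulty)
```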

    NASA Ames Environmental Sustainability Report 2011

    The 2011 Ames Environmental Sustainability Report is the second in a series of reports describing the steps NASA Ames Research Center has taken toward assuring environmental sustainability in NASA Ames programs, projects, and activities. The Report highlights Center contributions toward meeting the Agency-wide goals under the 2011 NASA Strategic Sustainability Performance Program

    Understanding and Guiding Compute Resource Management in a Multi-Programming-Model Context

    With the advent of multicore and manycore processors as building blocks of HPC supercomputers, many applications shift from relying solely on a distributed programming model (e.g., MPI) to mixing distributed and shared-memory models (e.g., MPI+OpenMP). This leads to better exploitation of shared-memory communications and reduces the overall memory footprint. However, this evolution has a large impact on the software stack, as application developers typically mix several programming models to scale over a large number of multicore nodes while coping with their hierarchical depth. One side effect of this programming approach is runtime stacking: mixing multiple models involves several runtime libraries being alive at the same time. Dealing with different runtime systems may lead to a large number of execution flows that do not efficiently exploit the underlying resources. We first present a study of runtime stacking. It introduces stacking configurations and categories to describe how stacking can appear in applications. We explore runtime-stacking configurations (spatial and temporal), focusing on thread/process placement on hardware resources from different runtime libraries, and we build this taxonomy from an analysis of state-of-the-art runtime stacking and programming models. We then propose algorithms to detect the misuse of compute resources when running a hybrid parallel application. We have implemented these algorithms inside a dynamic tool, called the Overseer. This tool monitors applications and reports resource usage to the user along the application timeline, focusing on overloading and underloading of compute resources. Finally, we propose a second, external tool called Overmind, which monitors thread/process management and (re)maps threads and processes to the underlying cores, taking into account the hardware topology and the application behavior. By capturing a global view of resource usage, the Overmind adapts the process/thread placement and aims at taking the best decision to enhance the use of each compute node inside a supercomputer. We demonstrate the relevance of our approach and show that our low-overhead implementation achieves good performance even with configurations that would otherwise have resulted in bad resource usage.
    Numerical simulation reproduces physical behaviours that can be observed in nature. It is used to model complex phenomena that are impossible to predict or replicate. To solve these problems in a reasonable time, we resort to High Performance Computing (HPC). HPC gathers all the techniques used to design and operate supercomputers, enormous machines whose goal is to compute ever faster, more precisely, and more efficiently. To reach these objectives, the machines are increasingly complex. The current trend is to increase the number of compute cores per processor, but also the number of processors per machine. Machines are also becoming more heterogeneous, with many different components to be used simultaneously to extract maximum performance. To cope with these difficulties, developers rely on programming models whose purpose is to simplify the use of all these resources. Some models, called distributed-memory models (such as MPI), abstract the sending of messages between compute nodes; others, called shared-memory models, simplify and optimize the use of the memory shared among compute cores. However, these evolutions and the growing complexity of supercomputers have a large impact on the software stack. It is now necessary to use several programming models at the same time within applications. This affects not only the development of simulation codes, since developers must handle several models at once, but also the execution of simulations. A side effect of this programming approach is runtime stacking: mixing several models implies that several libraries run at the same time, and managing several libraries can lead to a large number of execution flows using the underlying resources in a non-optimal way. The objective of this thesis is to study the stacking of programming models and to optimize the use of compute resources by these models during the execution of numerical simulations. We first characterized the different ways of building simulation codes that mix several models, and studied the interactions these models can have with one another during execution. From these observations we designed algorithms to detect non-optimal resource usage. Finally, we developed a tool that automatically steers the use of resources by the different programming models.
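    The abstract gives no internals for the Overseer; as a rough, hypothetical sketch of the kind of check it is described as performing, the snippet below compares the execution flows a hybrid MPI+OpenMP configuration would create against the cores actually available to the process (Linux-only, and the rank/thread counts are invented for the example):

```python
import os

def check_subscription(mpi_ranks_per_node, omp_threads_per_rank):
    """Flag over- or under-subscription of the cores this process may use."""
    cores = len(os.sched_getaffinity(0))        # Linux-only affinity query
    flows = mpi_ranks_per_node * omp_threads_per_rank
    if flows > cores:
        print(f"overloaded: {flows} execution flows on {cores} cores")
    elif flows < cores:
        print(f"underloaded: only {flows} execution flows for {cores} cores")
    else:
        print(f"balanced: {flows} execution flows on {cores} cores")

check_subscription(mpi_ranks_per_node=4, omp_threads_per_rank=8)
```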

    Volcanic Processes Monitoring and Hazard Assessment Using Integration of Remote Sensing and Ground-Based Techniques

    The monitoring of active volcanoes is a complex task based on multidisciplinary, integrated analyses that use ground-based, drone, and satellite monitoring devices. Over time, with the development of new technologies and increasing acquisition frequency, the use of remote sensing to accomplish this important task has grown enormously. This is especially true of the use of drones and satellites for classifying eruptive events and detecting the opening of new vents, the spreading of lava flows on the surface, ash plumes in the atmosphere, the fallout of tephra on the ground, the intrusion of new magma within the volcanic edifice, the deformation preceding impending eruptions, and many other phenomena. The main challenge in using remote sensing techniques is to develop automated and reliable systems that can assist decision makers in volcano monitoring, hazard assessment, and risk reduction. Integration with ground-based techniques is a valuable complement that makes the proposed methods more robust and reinforces the results obtained. This collection of papers focuses on several active volcanoes, including Stromboli, Etna, and Vulcano in Italy; the Long Valley caldera and Kilauea volcano in the USA; and Cotopaxi in Ecuador.