
    Failure avoidance techniques for HPC systems based on failure prediction

    An increasingly large percentage of the computing capacity in today's large high-performance computing systems is wasted due to failures and recoveries. Moreover, high-performance computing is expected to reach exascale within a decade, decreasing the mean time between failures to one day or even a few hours and making fault tolerance a major challenge for the HPC community. As a consequence, current research focuses on providing fault tolerance strategies that aim to minimize the effects of faults on applications. By far the most popular techniques in this field are rollback-recovery protocols. However, existing rollback-recovery techniques have severe scalability limitations, and without further optimization the use of current protocols is seriously in question for future exascale systems. One way of reducing the overhead induced by these strategies is to combine them with failure avoidance methods. Failure avoidance is based on a prediction model that detects fault occurrences ahead of time and allows preventive measures to be taken, such as migrating tasks or checkpointing the application before the failure. The same methodology can be generalized and applied to anomaly avoidance, where an anomaly can mean anything from a system failure to performance degradation at the application level. For this, monitoring systems require a reliable prediction system that indicates when and where failures will occur. Thus far, research in this field has used ideal predictors that have no implementation on real HPC systems. This thesis focuses on analyzing and characterizing anomaly patterns at both the application and system levels, and on offering solutions to prevent anomalies from affecting applications running in the system. Currently, there is no good characterization of normal behavior for system state data, or of how different components react to failures within HPC systems. For example, if a node experiences a network failure and is incapable of generating log messages, the failure is announced in the log files by the absence of generated messages. Conversely, some component failures may cause a flood of notifications: memory failures, for instance, can result in a single faulty component generating hundreds or thousands of messages in less than a day. It is important to capture the behavior of each event type and to understand what normal behavior is and how each failure type affects it. This idea is the building block of a novel way of characterizing the state of the system over time by analyzing the properties of each event described in different system metrics, considering its own trend and behavior. The method integrates signal-processing concepts with data-mining techniques in the context of analysis for large-scale systems. By modeling the normal and faulty behavior of each event and of the whole system, appropriate models and methods for descriptive and forecasting purposes are proposed. After obtaining an accurate overview of the whole system, the thesis analyzes how the prediction model impacts current fault tolerance techniques and, in the end, integrates it into a fault avoidance solution. This hybrid protocol optimizes the overhead that current fault tolerance strategies impose on applications and presents a viable solution for future large-scale systems.
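    To make the trade-off behind such a hybrid protocol concrete, the following is a minimal Python sketch (not the thesis's actual protocol) comparing the work lost per failure under purely periodic checkpointing with a predictor-assisted scheme; the MTBF, checkpoint cost, recall, and lead time are invented parameters for illustration.

```python
import math
import random

random.seed(0)

# Illustrative parameters (hypothetical, not from the thesis).
MTBF = 3600.0        # mean time between failures, seconds
CKPT_COST = 60.0     # time to write one checkpoint
RECALL = 0.7         # fraction of failures the predictor catches
LEAD_TIME = 120.0    # warning arrives this long before the failure

def lost_work_periodic(horizon=1e7):
    """Average work lost per failure with periodic checkpoints only."""
    interval = math.sqrt(2 * CKPT_COST * MTBF)   # Young's approximation
    lost, failures, t = 0.0, 0, 0.0
    while t < horizon:
        gap = random.expovariate(1.0 / MTBF)     # time to next failure
        t += gap
        failures += 1
        lost += gap % interval                   # work since last checkpoint
    return lost / failures

def lost_work_with_predictor(horizon=1e7):
    """Predicted failures trigger a proactive checkpoint just in time."""
    interval = math.sqrt(2 * CKPT_COST * MTBF)
    lost, failures, t = 0.0, 0, 0.0
    while t < horizon:
        gap = random.expovariate(1.0 / MTBF)
        t += gap
        failures += 1
        if random.random() < RECALL and gap > LEAD_TIME:
            lost += CKPT_COST        # only pay the proactive checkpoint
        else:
            lost += gap % interval   # missed prediction: roll back as usual
    return lost / failures

print(f"periodic only : {lost_work_periodic():7.1f} s lost per failure")
print(f"with predictor: {lost_work_with_predictor():7.1f} s lost per failure")
```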

    Making Speculative Scheduling Robust to Incomplete Data

    In this work, we study the robustness of speculative scheduling to data incompleteness. Speculative scheduling has made it possible to incorporate future types of applications into the design of HPC schedulers, specifically applications whose runtime is not perfectly known but can be modeled with probability distributions. Preliminary studies show the importance of speculative scheduling in dealing with stochastic applications when the application runtime model is completely known. In this work, we show how one can extract enough information, even from incomplete behavioral data for a given HPC application, for speculative scheduling to still perform well. Specifically, we show that for synthetic runtimes that follow usual probability distributions such as truncated normal or exponential, we can extract enough data from as few as 10 previous runs to be within 5% of the solution that has exact information. For real application traces, the performance with 10 data points varies with the application (within 20% of the full-knowledge solution), but converges fast (within 5% with 100 previous samples). Finally, a side effect of this study is to show the importance of the theoretical results obtained on continuous probability distributions for speculative scheduling. Indeed, we observe that the solutions for such distributions are more robust to incomplete data than the solutions for discrete distributions.
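    As a rough illustration of the incomplete-data setting (not the paper's actual algorithms), the sketch below fits an exponential runtime model from only 10 observed runs, builds a quantile-based reservation sequence from the fit, and compares its cost with the exact-information baseline; all parameters and the simplified cost model are assumptions.

```python
import math
import random

random.seed(1)

# Hypothetical setup: runtimes are exponential with an unknown mean;
# the scheduler sees only a few previous runs (here, 10).
TRUE_MEAN = 100.0

def fit_mean(samples):
    """MLE of the exponential mean is just the sample average."""
    return sum(samples) / len(samples)

def reservations(mean, qs=(0.5, 0.75, 0.9, 0.99, 0.999)):
    """A reservation sequence taken at runtime quantiles of the fit."""
    return [-mean * math.log(1.0 - q) for q in qs]

def avg_cost(res, trials=200_000):
    """Simplified cost model: pay every reservation until one is long
    enough for the run (a final oversize slot catches the rest)."""
    res = res + [100 * TRUE_MEAN]   # safety net so every run finishes
    total = 0.0
    for _ in range(trials):
        t = random.expovariate(1.0 / TRUE_MEAN)
        for r in res:
            total += r
            if t <= r:
                break
    return total / trials

history = [random.expovariate(1.0 / TRUE_MEAN) for _ in range(10)]
est = fit_mean(history)                     # mean fitted from 10 runs
oracle = avg_cost(reservations(TRUE_MEAN))  # exact-information baseline
fitted = avg_cost(reservations(est))
print(f"estimated mean from 10 runs: {est:6.1f} (true {TRUE_MEAN})")
print(f"cost with 10-run fit: {fitted:6.1f} vs oracle {oracle:6.1f}")
```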

    Scheduling the I/O of HPC Applications Under Congestion

    A significant percentage of the computing capacity of large-scale platforms is wasted because of interference incurred by multiple applications that access a shared parallel file system concurrently. One solution for handling I/O bursts in large-scale HPC systems is to absorb them at an intermediate storage layer consisting of burst buffers. However, our analysis of Argonne's Mira system shows that burst buffers cannot prevent congestion at all times. Consequently, I/O performance is dramatically degraded, showing in some cases a decrease in I/O throughput of 67%. In this paper, we analyze the effects of interference on application I/O bandwidth and propose several scheduling techniques to mitigate congestion. We show through extensive experiments that our global I/O scheduler is able to reduce the effects of congestion, even on systems where burst buffers are used, and can increase the overall system throughput by up to 56%. We also show that it outperforms current Mira I/O schedulers.
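    A toy model (not the paper's scheduler) of why serializing I/O bursts can beat uncontrolled sharing: the bandwidth, burst volume, and interference factor below are invented, and the point is only that a global scheduler wins whenever the interference penalty outweighs the wait.

```python
# Two applications each emit a burst of VOL bytes. Sharing the file
# system halves each one's bandwidth, and contention adds an extra
# slowdown. All numbers are hypothetical; this is not the Mira scheduler.

BW = 100.0          # peak file-system bandwidth, GB/s
VOL = 500.0         # burst volume per application, GB
INTERFERENCE = 0.7  # efficiency factor when two bursts overlap

def concurrent():
    """Both applications stream at once and interfere."""
    effective = (BW / 2) * INTERFERENCE       # per-application bandwidth
    return VOL / effective                    # both finish at this time

def scheduled():
    """A global scheduler serializes the bursts at full bandwidth."""
    first = VOL / BW
    second = first + VOL / BW                 # waits, then runs alone
    return first, second

both = concurrent()
t1, t2 = scheduled()
print(f"concurrent: both finish at {both:5.1f} s")
print(f"scheduled : first at {t1:5.1f} s, second at {t2:5.1f} s")
```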

    Profiles of upcoming HPC Applications and their Impact on Reservation Strategies

    With the expected convergence of HPC, Big Data, and AI, new applications with different profiles are coming to HPC infrastructures. We aim to better understand the features and needs of these applications in order to run them efficiently on HPC platforms. The approach followed is bottom-up: we thoroughly study an emerging application, Spatially Localized Atlas Network Tiles (SLANT, originating from the neuroscience community), to understand its behavior. Based on these observations, we derive a generic yet simple application model (namely, a linear sequence of stochastic jobs). We expect this model to be representative of a large set of upcoming applications from emerging fields that are starting to require the computational power of HPC clusters without fitting the typical behavior of large-scale traditional applications. In a second step, we show how one can use this generic model in a scheduling framework. Specifically, we consider the problem of making reservations (in both time and memory) for an execution on an HPC platform, based on the application's expected resource requirements. We derive solutions using the model provided by the first step of this work. We experimentally show the robustness of the model, even when very few data points or another application are used to generate it, and provide performance gains with respect to standard and more recent approaches used in the neuroscience community.
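    A minimal sketch of the quantile-based reservation idea, assuming an invented log of previous (runtime, memory) observations and a simplified billing model in which each failed attempt is charged its full slot; this is an illustration, not the paper's strategy.

```python
import random

random.seed(0)
# Pretend we logged (runtime in hours, memory in GB) for 50 past runs.
history = [(random.gauss(4.0, 1.0), random.gauss(32.0, 6.0))
           for _ in range(50)]

def quantile(values, q):
    s = sorted(values)
    return s[min(int(q * len(s)), len(s) - 1)]

# Escalating requests: try a modest slot first, grow only on failure.
LADDER = [(quantile([t for t, _ in history], q),
           quantile([m for _, m in history], q))
          for q in (0.6, 0.9, 0.99)]

def billed_hours(job_time, job_mem):
    """Walk up the ladder; a killed attempt still bills its whole slot."""
    billed = 0.0
    for t_res, m_res in LADDER:
        billed += t_res
        if job_time <= t_res and job_mem <= m_res:
            return billed                      # the job fit: done
    return billed + job_time                   # fallback oversize request

print("ladder:", [(round(t, 1), round(m, 1)) for t, m in LADDER])
print("billed for a 5.0 h / 40 GB run:", round(billed_hours(5.0, 40.0), 1))
```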

    A Study of Emerging HPC Applications and Their Impact on Scheduling Strategies

    With the expected convergence of HPC, Big Data, and AI, new applications with different profiles are coming to HPC infrastructures. We aim to better understand the features and needs of these applications in order to run them efficiently on HPC platforms. The approach followed is bottom-up: we thoroughly study an emerging application, Spatially Localized Atlas Network Tiles (SLANT, originating from the neuroscience community), to understand its behavior. Through detailed profiling, we expose its main characteristics and its needs in terms of computing resources. Based on these observations, we derive a generic yet simple application model (namely, a linear sequence of stochastic jobs). We expect this model to suit a large variety of upcoming applications that require the computational power of HPC clusters without fitting the typical behavior of large-scale traditional applications. In a second step, we show how one can use this generic model in a scheduling framework. Specifically, we consider the problem of designing reservation strategies (in both computing time and memory) for an execution on an HPC platform. We derive such solutions using the generic application model from the first step of this work. Finally, we experimentally show the robustness of the application model and of our scheduling strategies; in particular, we demonstrate that our solutions outperform the standard approaches of the neuroscience community, even with partial data or when extended to applications other than SLANT.

    Scheduling Parallel Tasks under Multiple Resources: List Scheduling vs. Pack Scheduling

    Scheduling in High-Performance Computing (HPC) has traditionally centered around computing resources (e.g., processors/cores). The ever-growing amount of data produced by modern scientific applications is starting to drive novel architectures and new computing frameworks to support more efficient data processing, transfer, and storage for future HPC systems. This trend towards data-driven computing demands that scheduling solutions also consider other resources (e.g., I/O, memory, cache) that can be shared among competing applications. In this paper, we study the problem of scheduling HPC applications while exploring the availability of multiple types of resources that could impact their performance. The goal is to minimize the overall execution time, or makespan, for a set of moldable tasks under multiple-resource constraints. Two scheduling paradigms, namely list scheduling and pack scheduling, are compared through both theoretical analyses and experimental evaluations. Theoretically, we prove, for several algorithms falling under the two scheduling paradigms, tight approximation ratios that increase linearly with the number of resource types. As the complexity of direct solutions grows exponentially with the number of resource types, we also design a strategy to solve the problem indirectly via a transformation to a single-resource-type problem, which can significantly reduce the algorithms' running times without compromising their approximation ratios. Experiments conducted on Intel Knights Landing with two resource types (processor cores and high-bandwidth memory) and simulations with more resource types confirm the benefit of the transformation strategy and show that pack-based scheduling, despite having a worse theoretical bound, offers a practically promising and easy-to-implement solution, especially when more resource types need to be managed.
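    For illustration, here is a minimal greedy list scheduler for rigid tasks under two resource types (cores and memory); the task set and capacities are invented, and unlike the paper's algorithms it does not handle moldable allocations or the single-resource transformation.

```python
import heapq

CAPACITY = {"cores": 64, "mem": 256}
# (name, duration, cores, mem) -- invented workload
TASKS = [("a", 10, 32, 128), ("b", 8, 32, 160), ("c", 6, 16, 64),
         ("d", 12, 48, 64), ("e", 4, 8, 32)]

def list_schedule(tasks):
    free = dict(CAPACITY)
    pending = list(tasks)              # priority order = list order
    running = []                       # heap of (end_time, cores, mem)
    t = 0.0
    while pending or running:
        # Start every pending task that currently fits, in list order.
        still_waiting = []
        for name, dur, c, m in pending:
            if c <= free["cores"] and m <= free["mem"]:
                free["cores"] -= c
                free["mem"] -= m
                heapq.heappush(running, (t + dur, c, m))
            else:
                still_waiting.append((name, dur, c, m))
        pending = still_waiting
        if running:                    # advance time to the next finish
            t, c, m = heapq.heappop(running)
            free["cores"] += c
            free["mem"] += m
        elif pending:                  # nothing runs and nothing fits
            raise ValueError("a task exceeds the platform capacity")
    return t

print("makespan:", list_schedule(TASKS))
```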

    Julia as a unifying end-to-end workflow language on the Frontier exascale system

    We evaluate Julia as a single language and ecosystem paradigm powered by LLVM to develop workflow components for high-performance computing. We run a Gray-Scott, 2-variable diffusion-reaction application using a memory-bound, 7-point stencil kernel on Frontier, the US Department of Energy's first exascale supercomputer. We evaluate the performance, scaling, and trade-offs of (i) the computational kernel on AMD's MI250x GPUs, (ii) weak scaling up to 4,096 MPI processes/GPUs or 512 nodes, (iii) parallel I/O writes using the ADIOS2 library bindings, and (iv) Jupyter Notebooks for interactive analysis. Results suggest that although Julia generates a reasonable LLVM-IR, a nearly 50% performance difference exists vs. native AMD HIP stencil codes when running on the GPUs. As expected, we observed near-zero overhead when using MPI and parallel I/O bindings for system-wide installed implementations. Consequently, Julia emerges as a compelling high-performance and high-productivity workflow composition language, as measured on the fastest supercomputer in the world.
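    The paper's kernel is written in Julia; as a language-neutral sketch of the same pattern, the NumPy code below performs a Gray-Scott update with a 7-point Laplacian on a periodic 3-D grid. The diffusion and feed/kill parameters are generic textbook values, not those used on Frontier.

```python
import numpy as np

# Generic textbook parameters (assumptions, not the Frontier config).
DU, DV, F, K, DT = 0.1, 0.05, 0.04, 0.06, 1.0

def laplacian7(a):
    """7-point stencil: the six face neighbours minus 6x the centre.
    np.roll wraps the boundaries, giving a periodic grid."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) +
            np.roll(a, 1, 2) + np.roll(a, -1, 2) - 6.0 * a)

def step(u, v):
    uvv = u * v * v                            # reaction term
    u = u + DT * (DU * laplacian7(u) - uvv + F * (1.0 - u))
    v = v + DT * (DV * laplacian7(v) + uvv - (F + K) * v)
    return u, v

n = 32
u, v = np.ones((n, n, n)), np.zeros((n, n, n))
v[12:20, 12:20, 12:20] = 0.5                   # seed a perturbation
for _ in range(50):
    u, v = step(u, v)
print("u in [%.3f, %.3f]" % (u.min(), u.max()))
```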

    On-the-fly scheduling vs. reservation-based scheduling for unpredictable workflows

    Scientific insights in the coming decade will clearly depend on the effective processing of large datasets generated by dynamic heterogeneous applications typical of workflows in large data centers or of emerging fields like neuroscience. In this paper, we show how these big-data workflows have a unique set of characteristics that pose challenges for leveraging HPC methodologies, particularly in scheduling. Our findings indicate that execution times for these workflows are highly unpredictable and are not correlated with the size of the dataset involved or the precise functions used in the analysis. We characterize this inherent variability and motivate the need for new scheduling approaches by quantifying significant gaps in achievable performance. Through simulations, we show how on-the-fly scheduling approaches can deliver benefits in both system-level and user-level performance measures. On average, we find improvements of up to 35% in system utilization and up to 45% in the average stretch of the applications, illustrating the potential of increasing performance through new scheduling approaches.
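    The user-level metric quoted above, stretch, is a job's total time in the system divided by its actual runtime. The toy comparison below (with invented arrival and start times) only illustrates how padded reservations inflate stretch relative to an on-the-fly policy.

```python
def avg_stretch(jobs):
    """jobs: list of (arrival, start, runtime); stretch = time in
    system / actual runtime, averaged over all jobs."""
    return sum((start - arr + run) / run
               for arr, start, run in jobs) / len(jobs)

# Same three jobs under a reservation-based plan (big safety margins
# delay later jobs) and an on-the-fly plan (start as soon as free).
reservation = [(0, 0, 10), (1, 30, 2), (2, 40, 5)]   # padded slots
on_the_fly  = [(0, 0, 10), (1, 10, 2), (2, 12, 5)]
print("reservation-based:", avg_stretch(reservation))
print("on-the-fly       :", avg_stretch(on_the_fly))
```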

    Battling Failures

    A large percentage of computing capacity in today's large high-performance computing systems is wasted due to failures and recoveries. The fear in our community is that future exascale systems will fail so frequently that no useful work will be possible. My research focuses on characterizing the events generated at the hardware, system, or application level by understanding the complex correlations between different system components. This information is used to predict failures and, as a consequence, to minimize or prevent their effects on running applications. The image represents an overview of the overall analysis process: monitoring applications and their performance, modeling the system and the way anomalies propagate between components, analyzing the current state, diagnosing errors, and predicting failures. The size and complexity of today's supercomputers are too large to manually inspect or visualize all the events that occur during an application's execution. With tools like this, which adapt and learn as the system experiences new events, applications can take preventive actions that will increase their efficiency and, as a consequence, allow them to complete their tasks even on future exascale machines. Credits: Images provided by the National Center for Supercomputing Applications Visualization Laboratory.