18 research outputs found

    A First Step Towards Automatically Building Network Representations

    To fully harness Grids, users or middleware must have some knowledge of the topology of the platform's interconnection network. As such knowledge is usually not available, one must use tools that automatically build a topological network model from measurements. In this article, we define a methodology to assess the quality of these network model building tools, and we apply it to representatives of the main classes of model builders and to two new algorithms. We show that none of the main existing techniques builds models that accurately predict the running time of simple application kernels on actual platforms. However, some of the new algorithms we propose give excellent results in a wide range of situations.
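
    The assessment idea in this abstract can be summarised in a few lines of code. The Python sketch below is only an illustration of the general methodology (compare the running time a kernel is predicted to have on a reconstructed model with the time measured on the real platform); the callables `reconstruct`, `simulate`, and `measure` are hypothetical placeholders, not the authors' tools.

```python
# Illustrative sketch of the assessment methodology described above.
# The three callables are hypothetical stand-ins for a topology model
# builder, a simulator, and measurements on the real platform.
from typing import Callable, Iterable

def assess_model_builder(
    reconstruct: Callable[[dict], object],      # model builder under evaluation
    simulate: Callable[[str, object], float],   # predicted kernel time on the model
    measure: Callable[[str], float],            # observed kernel time on the platform
    measurements: dict,
    kernels: Iterable[str],
) -> float:
    """Mean relative error between predicted and observed running times."""
    model = reconstruct(measurements)
    errors = []
    for kernel in kernels:                      # e.g. broadcast, all-to-all, rings
        predicted = simulate(kernel, model)
        observed = measure(kernel)
        errors.append(abs(predicted - observed) / observed)
    return sum(errors) / len(errors)
```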

    Scheduling for Large Scale Distributed Computing Systems: Approaches and Performance Evaluation Issues

    Although our everyday life and society now depend heavily on communication and computation infrastructures, scientists and engineers have always been among the main consumers of computing power. This document provides a coherent overview of the research I have conducted over the last 15 years on the management and performance evaluation of large-scale distributed computing infrastructures (clusters, grids, desktop grids, volunteer computing platforms, ...) when used for scientific computing. In the first part of this document, I present how I have addressed scheduling problems arising on distributed platforms (such as computing grids), with a particular emphasis on heterogeneity and multi-user issues, hence in connection with game theory. Most of these problems are relaxed from a classical combinatorial optimization formulation into a continuous form, which makes it easy to account for key platform characteristics such as heterogeneity or complex topology while providing efficient, practical, and distributed solutions. The second part presents my main contributions to the SimGrid project, a simulation toolkit for building simulators of distributed applications (originally designed for evaluating scheduling algorithms). It comprises a unified presentation of how the questions of validation and scalability have been addressed in SimGrid, as well as thoughts on specific challenges related to methodological aspects and to the application of SimGrid to the HPC context.
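
    To make the continuous relaxation mentioned above concrete, here is a deliberately simple sketch that is not taken from the document: when a large divisible workload is spread over heterogeneous workers, the discrete task-assignment problem relaxes into choosing per-worker fractions, and all workers finish at the same time when each share is proportional to the worker's speed.

```python
# Toy continuous relaxation of a scheduling problem on a heterogeneous
# platform: choose the fraction of a divisible workload given to each worker
# (speeds s_i) so that every worker finishes at the same time.
def continuous_shares(speeds: list[float]) -> list[float]:
    """Workload fractions equalising completion times (communication ignored)."""
    total = sum(speeds)
    return [s / total for s in speeds]

# Three workers, the last one four times faster than the first:
print(continuous_shares([1.0, 2.0, 4.0]))  # [0.1428..., 0.2857..., 0.5714...]
```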

    Scheduling and Dynamic Management of Applications over Grids

    The work presented in this Thesis is about scheduling applications in computational Grids. We study how to better manage jobs in a grid middleware in order to improve the performance of the platform. Our solutions are designed to work at the middleware layer, leaving the underlying architecture unmodified. First, we propose a reallocation mechanism to dynamically tackle errors that occur during scheduling. Indeed, it is often necessary to provide a runtime estimate when submitting a job to a parallel computer so that the machine can compute a schedule. However, such estimates are inherently inaccurate, so scheduling decisions are based on incorrect data and are therefore wrong. The reallocation mechanism we propose tackles this problem by moving waiting jobs between several parallel machines in order to reduce the scheduling errors due to inaccurate runtime estimates. Our second interest in the Thesis is the scheduling of a climatology application on the Grid. To provide the best possible performance, we modeled the application as a Directed Acyclic Graph (DAG) and then proposed specific scheduling heuristics. To execute the application on the Grid, the middleware uses this knowledge of the application to find the best schedule.
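
    As a rough illustration of such a reallocation step (a hypothetical sketch, not the thesis middleware), the fragment below maps the waiting jobs onto whichever parallel machine currently offers the earliest estimated completion time, using the possibly inaccurate runtime estimates supplied at submission.

```python
# Hypothetical reallocation sketch: move waiting jobs to the machine with the
# earliest estimated completion time, based on user-supplied runtime estimates.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    estimated_runtime: float   # user-supplied estimate, possibly inaccurate

@dataclass
class Machine:
    name: str
    earliest_start: float      # when a new job could start, given the local queue

def reallocate(waiting_jobs: list[Job], machines: list[Machine]) -> dict[str, str]:
    """Return a job -> machine mapping minimising estimated completion times."""
    placement = {}
    for job in waiting_jobs:
        best = min(machines, key=lambda m: m.earliest_start + job.estimated_runtime)
        placement[job.name] = best.name
        best.earliest_start += job.estimated_runtime   # that queue grows accordingly
    return placement
```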

    Bridging a Gap Between Research and Production: Contributions to Scheduling and Simulation

    Large scale distributed computing infrastructures (e.g., data centers, grids, or clouds) are used by scientists from various domains to produce outstanding research results, such as the discovery of the Higgs boson in High Energy Physics. These infrastructures are also studied by Computer Scientists to produce their own set of scientific results. Ideally, a virtuous circle should exist between Domain and Computer Scientists: the former raising challenges that could be addressed by the latter. Unfortunately, on many occasions a gap exists that prevents such an ideal and fruitful collaboration. This habilitation covers research works conducted in the fields of scheduling and simulation that contribute to filling this gap. It discusses the necessary conditions to achieve this goal and details concrete initiatives in this endeavor.

    Communications collectives et ordonnancement en régime permanent sur plates-formes hétérogènes

    The results presented in this document concern scheduling for large-scale heterogeneous platforms. We mainly focus on collective communications taking place during the execution of distributed applications: broadcast, scatter, or reduction of data, for instance. We study the steady-state operation of these communications and aim at maximizing the throughput of a series of similar communications, the goal being also to obtain a schedule that is asymptotically optimal with respect to makespan minimization. We present a general framework to study these communications, which we use to assess the complexity of the problem for each communication primitive. For a particular model of communication, the bidirectional one-port model, we develop a practical solution method based on a linear program in rational numbers. These results are illustrated by experiments on the Grid5000 platform. The study of steady-state operations is extended to the scheduling of multiple applications on computing grids.
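
    The kind of linear program mentioned above can be illustrated on a toy instance (my own example, solved with the generic scipy solver rather than the framework of the thesis): under the bidirectional one-port model each network port may be busy at most one second per second, and the variables are the steady-state rates of the different series of communications.

```python
# Toy steady-state throughput maximisation under the bidirectional one-port
# model: two sources S1 and S2 repeatedly send messages to the same worker W.
from scipy.optimize import linprog

t1 = 0.01   # port occupation of one S1 -> W message (size / bandwidth), in s
t2 = 0.02   # port occupation of one S2 -> W message, in s

# Variables x = [rho1, rho2]: messages per second for each series.
# One-port constraints: each port is occupied at most 1 second per second.
A_ub = [
    [t1, 0.0],   # outgoing port of S1
    [0.0, t2],   # outgoing port of S2
    [t1, t2],    # incoming port of W, shared by both series
]
b_ub = [1.0, 1.0, 1.0]

# Maximise rho1 + rho2 (linprog minimises, hence the negated objective).
res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x)   # optimal steady-state rates
```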

    Design of robust scheduling methodologies for high performance computing

    Scientific applications are often large, complex, computationally intensive, and irregular. Loops are often an abundant source of parallelism in scientific applications. Due to the ever-increasing computational needs of scientific applications, high performance computing (HPC) systems have become larger and more complex, offering increased parallelism at multiple hardware levels. Load imbalance, caused by irregular computational load per task and unpredictable computing system characteristics (system variability), often degrades the performance of applications. Besides, perturbations, such as reduced computing power, network latency, reduced availability, or failures, can severely impact the performance of applications. System variability and perturbations are only expected to increase in future extreme-scale computing systems. Extrapolating the current failure rate to Exascale would result in a failure every 20 minutes; such a failure rate and such perturbations would render the computing systems unusable. This doctoral thesis improves the performance of computationally intensive scientific applications on HPC systems via robust load balancing. Robust scheduling ensures and maintains improved load-balanced execution under unpredictable application and system characteristics. A number of dynamic loop self-scheduling (DLS) techniques were introduced and successfully used in scientific applications between the 1980s and 2000s; as originally introduced, these DLS techniques are not fault tolerant. In this thesis, we identify three major research questions for achieving robust scheduling: (1) How to ensure that the DLS techniques employed in scientific applications today adhere to their original design goals and specifications? (2) How to select a DLS technique that will achieve improved performance under perturbations? (3) How to tolerate perturbations during execution and maintain a load-balanced execution on HPC systems? To answer the first question, we reproduced the original experiments that introduced the DLS techniques to verify their present implementation. Simulation is used to reproduce experiments on systems from the past, and realistic simulation yields analyses and conclusions similar to those drawn from the native results. To this end, we devised an approach for bridging the native and simulative executions of parallel applications on HPC systems. This simulation approach is used to reproduce scheduling experiments on past and present systems to verify the implementation of DLS techniques. Given the multiple levels of parallelism offered by present HPC systems, we analyzed the load imbalance in scientific applications from computer vision, astrophysics, and mathematical kernels at both the thread and process levels. This analysis revealed a significant interplay between thread-level and process-level load balancing: dynamic load balancing at the thread level propagates to the process level and vice versa, yet the best application performance is only achieved by two-level dynamic load balancing. Next, we examined the performance of applications under perturbations. We found that the most robust DLS technique does not deliver the best performance under all perturbations; the most efficient DLS technique changes with the application, the system, and the perturbations during execution. This highlights the algorithm selection problem in DLS. We leveraged realistic simulations to address this algorithm selection problem via a simulation-assisted approach (SimAS), which answers the second question. SimAS dynamically selects the DLS technique that improves performance depending on the application, the system, and the perturbations during execution. To answer the third question, we introduced a robust dynamic load balancing (rDLB) approach for the robust self-scheduling of scientific applications under failures. rDLB proactively reschedules already allocated tasks and requires no detection of perturbations; it tolerates up to P − 1 processor failures, where P is the number of processors allocated to the application, and boosts the flexibility of applications against non-fatal perturbations such as reduced availability of resources. This thesis is the first to provide insights into the interplay between thread-level and process-level dynamic load balancing in scientific applications. The verified DLS techniques, SimAS, and rDLB are integrated into an MPI-based dynamic load balancing library (DLS4LB), which supports thirteen DLS techniques, for robust dynamic load balancing of scientific applications on HPC systems. Using the methods devised in this thesis, we improved the performance of scientific applications by up to 21% via two-level dynamic load balancing. Under perturbations, we enhanced their performance by a factor of 7 and their flexibility by a factor of 30. This thesis opens up new horizons for understanding the interplay of load balancing between various levels of software parallelism and lays the groundwork for robust multilevel scheduling for the upcoming Exascale HPC systems and beyond.
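
    To make the notion of dynamic loop self-scheduling concrete, here is a toy sketch of one classic DLS technique, guided self-scheduling (GSS); it illustrates the chunk-size rule only and is not the DLS4LB library referred to above.

```python
# Guided self-scheduling (GSS), a classic DLS technique: idle workers grab the
# next chunk of loop iterations, and each chunk is ceil(remaining / P), so
# chunks shrink as the loop nears completion.
import math

def gss_chunks(total_iterations: int, num_workers: int) -> list[int]:
    """Sequence of chunk sizes handed out by guided self-scheduling."""
    remaining, chunks = total_iterations, []
    while remaining > 0:
        chunk = math.ceil(remaining / num_workers)
        chunks.append(chunk)
        remaining -= chunk
    return chunks

print(gss_chunks(100, 4))  # [25, 19, 14, 11, 8, 6, 5, 3, 3, 2, 1, 1, 1, 1]
```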

    MOTEUR: a data-intensive service-based workflow manager

    I3S laboratory Research Report (I3S/RR-2006-07-FR), Sophia Antipolis, France. MOTEUR is a service-based workflow manager designed to efficiently process data-intensive applications on grid infrastructures. It exploits several levels of parallelism and can group services to reduce the workflow execution time. In addition, MOTEUR uses a generic web service wrapper to ease the use of legacy or non-service-aware codes. In this report, we present MOTEUR and the optimization strategies it implements. We show how it defines precise data-flow semantics to express complex data-intensive applications in a compact framework. MOTEUR's Service-Oriented Architecture is detailed, demonstrating the flexibility of the adopted approach. Results are given on a real medical image processing application using two different grid infrastructures.
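
    As a hypothetical illustration (not MOTEUR's actual code) of two of the ideas above, the sketch below applies a workflow service to independent data items in parallel and fuses two consecutive services into a single invocation, so that intermediate results need not travel back through the workflow engine.

```python
# Toy sketch of data parallelism and service grouping in a service-based
# workflow; services and data items are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def group(first: Callable, second: Callable) -> Callable:
    """Fuse two consecutive workflow services into one invocation."""
    return lambda item: second(first(item))

def run_data_parallel(service: Callable, items: list, workers: int = 4) -> list:
    """Apply one (possibly grouped) service to every data item in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(service, items))

# Two toy services applied to four independent data items:
segment = lambda image: f"segmented({image})"
register = lambda seg: f"registered({seg})"
print(run_data_parallel(group(segment, register), ["img1", "img2", "img3", "img4"]))
```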

    Energy Demand Response for High-Performance Computing Systems

    The growing computational demand of scientific applications has greatly motivated the development of large-scale high-performance computing (HPC) systems in the past decade. To accommodate the increasing demand of applications, HPC systems have been going through dramatic architectural changes (e.g., the introduction of many-core and multi-core systems, the rapid growth of complex interconnection networks for efficient communication between thousands of nodes), as well as a significant increase in size (e.g., modern supercomputers consist of hundreds of thousands of nodes). With such changes in architecture and size, the energy consumption of these systems has increased significantly. With the advent of exascale supercomputers in the next few years, the power consumption of HPC systems will surely increase; some systems may even consume hundreds of megawatts of electricity. Demand response programs are designed to help energy service providers stabilize the power system by reducing the energy consumption of participating systems during periods of high power demand or temporary shortages in power supply. This dissertation focuses on developing energy-efficient demand-response models and algorithms to enable HPC systems' participation in demand response. In the first part, we present interconnection network models for performance prediction of large-scale HPC applications. They are based on interconnect topologies widely used in HPC systems: dragonfly, torus, and fat-tree. Our interconnect models are fully integrated with an implementation of the message-passing interface (MPI) that can mimic most of its functions with packet-level accuracy. Extensive experiments show that our integrated models provide good accuracy for predicting network behavior, while at the same time allowing for good parallel scaling performance. In the second part, we present an energy-efficient demand-response model to reduce HPC systems' energy consumption during demand response periods. We propose HPC job scheduling and resource provisioning schemes to enable HPC systems' participation in emergency demand response. In the final part, we propose an economic demand-response model that allows both the HPC operator and HPC users to jointly reduce the HPC system's energy cost. Our proposed model allows the participation of HPC systems in economic demand-response programs through a contract-based rewarding scheme that can incentivize HPC users to participate in demand response.
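
    As a toy illustration (my own example, not the dissertation's scheme) of emergency demand response at the job scheduler level, the sketch below only launches waiting jobs whose additional power draw keeps the system under a temporarily reduced power cap, deferring the others until the event ends.

```python
# Toy emergency demand-response admission rule: admit waiting jobs greedily
# (smallest power draw first) while the total stays under the reduced cap.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_kw: float    # estimated power draw while running

def select_jobs(waiting: list[Job], running_kw: float, cap_kw: float) -> list[Job]:
    """Jobs admitted during the demand-response event; the rest are deferred."""
    admitted, used = [], running_kw
    for job in sorted(waiting, key=lambda j: j.power_kw):
        if used + job.power_kw <= cap_kw:
            admitted.append(job)
            used += job.power_kw
    return admitted

queue = [Job("A", 120.0), Job("B", 40.0), Job("C", 300.0)]
print([j.name for j in select_jobs(queue, running_kw=500.0, cap_kw=700.0)])  # ['B', 'A']
```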