100 research outputs found

    Evaluation and Comparison of Robustness Metrics for Task Graph Scheduling on Heterogeneous Systems

    A schedule is robust if it is able to absorb variations in task lengths while maintaining a stable solution. This intuitive notion of robustness has induced several interpretations and distinct metrics, yet no prior work compares them. We first compare different methods for evaluating these metrics, and we then present a statistical study showing how they are correlated in the context of task graph scheduling.
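
    One metric commonly considered in such comparisons is the standard deviation of the makespan under random task-length variations. The following is a minimal illustrative sketch, not the paper's actual experimental setup: the toy task graph, the processor assignment, and the lognormal noise model are assumptions chosen for illustration.

        import random
        import statistics

        # Toy schedule: tasks in topological order, precedence constraints,
        # a processor assignment, and nominal durations (illustrative values).
        tasks = ["A", "B", "C", "D"]
        preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
        proc = {"A": 0, "B": 0, "C": 1, "D": 0}
        nominal = {"A": 2.0, "B": 3.0, "C": 4.0, "D": 1.0}

        def makespan(durations):
            finish, proc_free = {}, {}
            for t in tasks:  # 'tasks' is already a topological order
                ready = max((finish[p] for p in preds[t]), default=0.0)
                start = max(ready, proc_free.get(proc[t], 0.0))
                finish[t] = start + durations[t]
                proc_free[proc[t]] = finish[t]
            return max(finish.values())

        # Monte Carlo estimate of one candidate robustness metric:
        # the standard deviation of the makespan under task-length noise.
        samples = []
        for _ in range(10_000):
            perturbed = {t: d * random.lognormvariate(0.0, 0.2) for t, d in nominal.items()}
            samples.append(makespan(perturbed))

        print("mean makespan:", statistics.mean(samples))
        print("makespan standard deviation (robustness metric):", statistics.stdev(samples))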

    A Proactive Approach for Coping with Uncertain Resource Availabilities on Desktop Grids

    Uncertainties stemming from multiple sources affect distributed systems and jeopardize their efficient utilization. Desktop grids are especially concerned by this issue, as volunteers lending their resources may have irregular and unpredictable behaviors. Efficiently exploiting the power of such systems raises theoretical issues that have received little attention in the literature. In this paper, we assume that predictions exist on the intervals during which machines are available. When these predictions have a limited error, it is possible to schedule a set of jobs such that the effective total execution time will not exceed the predicted one. We formally prove that this is the case when scheduling jobs only in large intervals and when provisioning sufficient slack to absorb uncertainties. We present multiple heuristics with various efficiencies and costs that are empirically assessed through simulations.
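
    The slack-provisioning idea can be illustrated with a small sketch. The placement rule, the slack factor, and the error bound below are assumptions for illustration only, not the paper's actual heuristics: keep only predicted availability intervals that are large enough relative to the job length, and inflate the predicted duration by a slack factor so that a bounded prediction error cannot push the job past the end of its interval.

        # Illustrative proactive placement rule on a desktop grid.
        # 'intervals' are predicted availability windows (start, end) per machine;
        # 'error' bounds the relative prediction error; 'slack' inflates job lengths.
        def place_job(job_length, intervals, error=0.1, slack=1.25):
            padded = job_length * slack        # provision slack to absorb uncertainty
            min_size = padded / (1.0 - error)  # only consider sufficiently large intervals
            for machine, (start, end) in intervals.items():
                if end - start >= min_size:
                    return machine, start, start + padded
            return None                        # no safe placement found

        intervals = {"m1": (0.0, 5.0), "m2": (0.0, 30.0)}
        print(place_job(10.0, intervals))  # -> ('m2', 0.0, 12.5)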

    Scheduling Associative Reductions with Homogeneous Costs when Overlapping Communications and Computations

    Reduction is a core operation in parallel computing. Optimizing its cost has a high potential impact on application execution time, particularly in MPI and MapReduce computations. In this paper, we propose an optimal algorithm for scheduling associative reductions. We focus on the case where communications and computations can be overlapped to fully exploit the resources. Our algorithm greedily builds a spanning tree by starting from the sink and adding a parent at each iteration. Bounds on the completion time of optimal schedules are then characterized. To show the algorithm's extensibility, we adapt it to model variations in which either communication or computation resources are limited. Moreover, we study two specific spanning trees: while the binomial tree is optimal when there is either no transfer or no computation, the Fibonacci tree is optimal when the transfer cost is equal to the computation cost. Finally, approximation ratios of strategies derived from these trees are established.
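
    The two tree shapes can be illustrated by counting how many input values a single sink can reduce within t unit time steps. This is a sketch under simplified unit-cost assumptions suggested by the abstract, not the paper's formal model: with no transfer cost the count doubles each step (binomial-tree behaviour), while with transfer cost equal to computation cost it follows a Fibonacci-like recurrence.

        # Illustrative counts of how many values can be reduced within t steps.
        def reducible_no_transfer(t):
            # No transfer cost: every value holder can merge with another value
            # each step, so the count doubles every step (binomial tree).
            return 2 ** t

        def reducible_equal_costs(t):
            # Transfer cost == computation cost == 1, with communication
            # overlapped with computation: N(t) = N(t-1) + N(t-2), since a value
            # merged at time t must have been sent by time t-1 by a subtree that
            # finished its own reduction by time t-2 (Fibonacci tree).
            if t <= 1:
                return 1
            return reducible_equal_costs(t - 1) + reducible_equal_costs(t - 2)

        for t in range(6):
            print(t, reducible_no_transfer(t), reducible_equal_costs(t))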

    On the complexity of task graph scheduling with transient and fail-stop failures

    This paper deals with the complexity of task graph scheduling with transient and fail-stop failures. While computing the reliability of a given schedule is easy in the absence of task replication, the problem becomes much more difficult when task replication is used. Our main result is that this problem is #P'-Complete (hence at least as hard as NP-Complete problems), with both transient and fail-stop processor failures. We also study the complexity of a restricted class of schedules, where a task cannot be scheduled before all replicas of all its predecessors have completed their execution.
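
    The easy case mentioned above, a schedule without replication, can be made concrete: since every task must succeed, the schedule reliability is the product of the per-task success probabilities. A minimal sketch assuming transient failures arriving as a Poisson process of rate lam[p] on each processor p (the failure model and values are assumptions for illustration):

        import math

        # Reliability of a schedule without task replication: every task must
        # succeed, so reliability is the product of per-task success probabilities.
        # A task of duration d on processor p succeeds with probability exp(-lam[p]*d).
        def schedule_reliability(schedule, lam):
            rel = 1.0
            for task, (processor, duration) in schedule.items():
                rel *= math.exp(-lam[processor] * duration)
            return rel

        schedule = {"t1": ("p0", 2.0), "t2": ("p1", 3.0), "t3": ("p0", 1.0)}
        lam = {"p0": 0.01, "p1": 0.02}
        # exp(-(0.01 * (2 + 1) + 0.02 * 3)) = exp(-0.09)
        print(schedule_reliability(schedule, lam))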

    MO-Greedy: an extended beam-search approach for solving a multi-criteria scheduling problem on heterogeneous machines

    Optimization problems can often be tackled with respect to several objectives. In such cases, there can be several incomparable Pareto-optimal solutions. Computing or approximating such solutions is a major challenge in algorithm design. Here, we show how to use an extended beam-search technique to solve a multi-criteria scheduling problem on heterogeneous machines. This method, called MO-Greedy (for Multi-Objective Greedy), allows the design of a multi-objective algorithm when a single-objective greedy one is known. We show that we can generate, in a single execution, a Pareto front optimized with respect to the preferences specified by the decision maker. We compare our approach to other heuristics and to an approximation algorithm, and show that the obtained front is, on average, better with our method.
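
    The notion of a Pareto front returned by such multi-objective heuristics can be illustrated with a generic non-dominated filter (a sketch unrelated to MO-Greedy's internals): a solution is kept only if no other solution is at least as good on every objective and strictly better on at least one.

        # Generic Pareto filter for minimization objectives, e.g. (makespan, energy).
        def dominates(a, b):
            # a dominates b if a is no worse everywhere and strictly better somewhere
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(points):
            return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

        solutions = [(10, 5.0), (12, 4.0), (11, 6.0), (9, 7.0)]
        print(pareto_front(solutions))  # -> [(10, 5.0), (12, 4.0), (9, 7.0)]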

    A Scheduling Algorithm for Defeating Collusion

    By exploiting idle time on volunteer machines, desktop grids provide a way to execute large sets of tasks with negligible maintenance and low cost. Although desktop grids are attractive for cost-conscious projects, relying on external resources may compromise the correctness of application execution due to the well-known unreliability of nodes. In this paper, we consider the most challenging threat model: organized groups of cheaters that may collude to produce incorrect results. Building on a previously described online algorithm for detecting collusion and characterizing participant behaviors, we propose a scheduling algorithm that tackles collusion. Using several real-life traces, we show that our approach minimizes redundancy while maximizing the number of correctly certified results.

    Controlling and Assessing Correlations of Cost Matrices in Heterogeneous Scheduling

    This paper considers the problem of allocating independent tasks to unrelated machines so as to minimize the maximum completion time. Testing heuristics for this problem requires the generation of cost matrices that specify the execution time of each task on each machine. Numerous studies have shown that task and machine heterogeneity are among the properties that impact heuristic performance the most. This study focuses on orthogonal properties: the average correlations between each pair of rows and each pair of columns, which constitute a proximity measure with uniform instances. Cost matrices generated with a novel generation method show the effect of these correlations on the performance of several heuristics from the literature. In particular, EFT performance depends on whether the tasks are more correlated than the machines, and HLPT performs best when both correlations are close to one.
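
    The correlation properties studied here can be computed directly: average the Pearson correlation over all pairs of rows (task correlation) and all pairs of columns (machine correlation) of a cost matrix. A minimal sketch with numpy and illustrative costs; the exact definition used in the paper may differ slightly.

        import itertools
        import numpy as np

        def average_pairwise_correlation(vectors):
            # Mean Pearson correlation over all unordered pairs of vectors.
            pairs = itertools.combinations(range(len(vectors)), 2)
            return float(np.mean([np.corrcoef(vectors[i], vectors[j])[0, 1] for i, j in pairs]))

        # costs[i][j] = execution time of task i on machine j (illustrative values)
        costs = np.array([[2.0, 4.0, 8.0],
                          [3.0, 6.0, 12.0],
                          [1.0, 5.0, 7.0]])

        row_corr = average_pairwise_correlation(list(costs))    # correlation between tasks
        col_corr = average_pairwise_correlation(list(costs.T))  # correlation between machines
        print("row (task) correlation:", row_corr)
        print("column (machine) correlation:", col_corr)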

    Defining and Controlling the Heterogeneity of a Cluster: the Wrekavoc Tool

    The experimental validation and testing of solutions designed for heterogeneous environments is challenging. We introduce Wrekavoc as an accurate tool for this purpose: it runs unmodified applications on emulated multi-site heterogeneous platforms. Its principal technique consists in downgrading the performance characteristics of the platform in a prescribed way. These characteristics include the compute nodes themselves (CPU and memory) and the interconnection network, for which a controlled overlay network is built above the homogeneous cluster. In this article we describe the tool, its performance, its accuracy, and its scalability. Results show that Wrekavoc is a very versatile tool that is useful for performing high-quality experiments (in terms of reproducibility, realism, control, etc.).

    Online Scheduling of Sequential Task Graphs on Hybrid Platforms

    Modern computing platforms commonly include accelerators. We target the problem of scheduling applications modeled as task graphs on hybrid platforms made of two types of resources, such as CPUs and GPUs. We consider that task graphs are uncovered dynamically and that the scheduler has information only on the available tasks, i.e., tasks whose predecessors have all been completed. Each task can be processed by either a CPU or a GPU, and the corresponding processing times are known. Our study extends a previous 4√(m/k)-competitive online algorithm [3], where m is the number of CPUs and k the number of GPUs (m ≄ k). We prove that no online algorithm can have a competitive ratio smaller than √(m/k). We also study how adding flexibility on task processing, such as task migration or spoliation, or increasing the knowledge of the scheduler by providing it with information on the task graph, influences this lower bound. We provide a (2√(m/k)+1)-competitive algorithm as well as a tunable combination of a system-oriented heuristic and a competitive algorithm; this combination performs well in practice and has a competitive ratio in Θ(√(m/k)). We extend our results to more types of processors. Finally, simulations on different sets of task graphs illustrate how the instance properties impact the performance of the studied algorithms and show that our proposed tunable algorithm performs best among the online algorithms in almost all cases and even has performance close to that of an offline algorithm.
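
    A typical ingredient of √(m/k)-competitive strategies of this kind is a ratio rule. The sketch below is an illustration of that idea only, not the exact algorithm of the paper: a ready task goes to the GPU side when its CPU-to-GPU acceleration factor exceeds a threshold of order √(m/k), and to the CPU side otherwise.

        import math

        # Illustrative ratio rule for a hybrid platform with m CPUs and k GPUs (m >= k).
        # p_cpu and p_gpu are the processing times of a ready task on each resource type.
        def choose_resource(p_cpu, p_gpu, m, k):
            threshold = math.sqrt(m / k)
            # A large acceleration factor means the task is worth one of the scarce GPUs.
            return "GPU" if p_cpu / p_gpu >= threshold else "CPU"

        print(choose_resource(p_cpu=40.0, p_gpu=5.0, m=16, k=4))  # ratio 8   >= 2 -> GPU
        print(choose_resource(p_cpu=6.0, p_gpu=5.0, m=16, k=4))   # ratio 1.2 <  2 -> CPU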

    List and shelf schedules for independent parallel tasks to minimize the energy consumption with discrete or continuous speeds

    Scheduling independent tasks on a parallel platform is a widely studied problem, in particular when the goal is to minimize the total execution time, or makespan (the P||C_max problem in Graham's notation). Moreover, many applications do not consist of sequential tasks but rather of parallel tasks, either rigid, with a fixed degree of parallelism, or moldable, with a variable degree of parallelism (i.e., for which the number of processors can be chosen at execution time). Furthermore, since the energy consumption of data centers is a growing concern, from both an environmental and an economic point of view, minimizing the energy consumption of a schedule is a major challenge to be addressed. One can then decide, for each task, on how many processors it is executed and at which speed the processors are operated, with the goal of minimizing the total energy consumption. We further focus on co-schedules, where tasks are partitioned into shelves, and we prove that the problem of minimizing the energy consumption remains NP-complete when static energy is consumed during the whole duration of the application. We are, however, able to provide an optimal algorithm for the schedule within one shelf, i.e., for a set of tasks that start at the same time. Several approximation results are derived, for both discrete and continuous speed models, and extensive simulations are performed to show the performance of the proposed algorithms.
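
    The speed/energy trade-off driving these results can be illustrated with the usual power model (a sketch with assumed parameter values, not the paper's exact formulation): running a task of work w at speed s takes w/s time and consumes dynamic energy proportional to w * s^(alpha-1), while static power is paid for the whole duration.

        # Illustrative energy model for one shelf of tasks run in parallel.
        # Each task has work w and runs at speed s; alpha is typically around 3.
        def shelf_energy(work, speed, alpha=3.0, p_static=0.5):
            exec_times = [w / s for w, s in zip(work, speed)]
            shelf_length = max(exec_times)  # all tasks of a shelf start together
            dynamic = sum(w * s ** (alpha - 1) for w, s in zip(work, speed))
            static = p_static * shelf_length  # static power paid for the whole shelf
            return dynamic + static

        work = [10.0, 6.0, 8.0]
        print(shelf_energy(work, speed=[1.0, 1.0, 1.0]))  # slow speeds: long shelf, low dynamic energy
        print(shelf_energy(work, speed=[2.0, 2.0, 2.0]))  # fast speeds: short shelf, high dynamic energy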
    • 

    corecore