
    Scalably Scheduling Processes with Arbitrary Speedup Curves


    SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors

    We consider the classical problem of minimizing the total weighted flow-time for unrelated machines in the online \emph{non-clairvoyant} setting. In this problem, a set of jobs $J$ arrives over time to be scheduled on a set of $M$ machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job, but is not aware of the quantity $p_j$. We present the {\em first} online algorithm that is {\em scalable} ($(1+\epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive for any constant $\epsilon > 0$) for the total weighted flow-time objective. No non-trivial results were known for this setting, except for the most basic case of identical machines. Our result resolves a major open problem in online scheduling theory. Moreover, we show that no job needs more than a logarithmic number of migrations. We further extend our result to the objective of minimizing total weighted flow-time plus energy cost on unrelated machines, again obtaining a scalable algorithm. The key algorithmic idea is to let jobs migrate selfishly until they converge to an equilibrium. Towards this end, we define a game in which each job's utility is closely tied to the instantaneous increase in the objective that the job is responsible for, and each machine declares a policy that assigns priorities to jobs based on when they migrate to it, along with the execution speeds. This is similar in spirit to coordination mechanisms, which attempt to achieve near-optimal welfare in the presence of selfish agents (jobs). To the best of our knowledge, this is the first work that demonstrates the usefulness of ideas from coordination mechanisms and Nash equilibria for designing and analyzing online algorithms.
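    The selfish-migration idea above can be illustrated with a toy best-response loop. This is only a sketch of the general flavor, not the paper's SELFISHMIGRATE algorithm: here a job's cost on a machine is simply the machine's total load divided by the job's processing rate there, and jobs move one at a time until no job can lower its own cost (a pure equilibrium of this simplified game).

    ```python
    # Toy illustration (not the paper's actual algorithm): jobs migrate
    # selfishly, each moving to the machine that minimizes its own cost,
    # until no job can improve. Cost model (our assumption): a job's cost
    # on machine i is machine i's total load divided by the job's rate there.

    def selfish_migrate(rates, lengths, max_rounds=100):
        """rates[i][j]: rate of job j on machine i; lengths[j]: job length."""
        m, n = len(rates), len(lengths)
        assign = [0] * n                       # start with all jobs on machine 0

        def load(i):
            return sum(lengths[j] for j in range(n) if assign[j] == i)

        changed, rounds = True, 0
        while changed and rounds < max_rounds:  # best-response dynamics
            changed, rounds = False, rounds + 1
            for j in range(n):
                def cost(i):
                    extra = 0 if assign[j] == i else lengths[j]
                    return (load(i) + extra) / rates[i][j]
                best = min(range(m), key=cost)
                if cost(best) < cost(assign[j]) - 1e-12:
                    assign[j] = best            # job j migrates selfishly
                    changed = True
        return assign
    ```

    With two identical machines and two unit jobs, the dynamics spread the jobs across both machines, balancing the load.
    
    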

    Towards Optimality in Parallel Scheduling

    To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip rather than single-core performance. In turn, modern jobs are often designed to run on any number of cores. However, to effectively leverage these multi-core chips, one must address the question of how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is an obvious tradeoff: allocating more cores to an individual job reduces the job's runtime, but in turn decreases the efficiency of the overall system. We ask how the system should schedule jobs across cores so as to minimize the mean response time over a stream of incoming jobs. To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. EQUI requires jobs to change their level of parallelization while they run. Since this is not possible for all workloads, we consider a class of "fixed-width" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, it is possible to achieve EQUI's performance without requiring jobs to change their levels of parallelization by using the optimal fixed level of parallelization, k*. We also show how to analytically derive the optimal k* as a function of the system load, the speedup curve, and the job size distribution. In the case where jobs may follow different speedup curves, finding a good scheduling policy is even more challenging. We find that policies like EQUI, which performed well in the case of a single speedup curve, now perform poorly. We propose a very simple policy, GREEDY*, which performs near-optimally when compared to the numerically-derived optimal policy.
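    The tradeoff described above can be made concrete with a small numerical sketch. The speedup curve s(k) = k**0.5 is our choice for illustration (the paper's model is more general): with a sublinear curve, giving all cores to one job at a time finishes the first job sooner, but dividing cores evenly (EQUI-style) yields a lower mean response time.

    ```python
    # Minimal sketch of the cores-per-job tradeoff, assuming an
    # illustrative sublinear speedup curve s(k) = k**0.5.

    def speedup(k, p=0.5):
        return k ** p              # doubling cores less than doubles speed

    def runtime(size, cores):
        return size / speedup(cores)

    # Two unit-size jobs arriving together on a 16-core machine.
    # Serial: each job gets all 16 cores, one after the other.
    mean_serial = (runtime(1, 16) + 2 * runtime(1, 16)) / 2   # 0.375
    # EQUI: split cores evenly, 8 per job, run both in parallel.
    mean_equi = runtime(1, 8)                                  # ~0.354
    ```

    Here EQUI's even split wins on mean response time because the serial schedule wastes the second job's waiting time more than it saves in per-job runtime.
    
    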

    Non-clairvoyant scheduling: small simplifications and improvements to the analysis of the LAPSÎČ family of algorithms

    In 1999, Edmonds [Edmonds1999STOC] introduced a very general model of jobs that pass through different phases with different amounts of work and different degrees of parallelizability. The strength of Edmonds' model is that he showed that even if the scheduler knows strictly nothing about the characteristics of the jobs it is scheduling, and is only informed of their arrival upon arrival and of their completion upon completion, EQUI, which shares the processors equally among the active jobs, manages to be competitive with the optimal clairvoyant offline schedule, provided that EQUI has a little more than twice the resources of the optimum. This means that the EQUI scheduler sustains, without diverging, any load below 50%. We [RobertSchabanel2008SODA] later extended Edmonds' analysis to the case where jobs consist of a DAG of processes passing through arbitrary phases, and showed that the non-clairvoyant algorithm EQUI∘EQUI likewise sustains any load below 50% in this setting. In 2009, Edmonds and Pruhs [EdmondsPruhs2009SODA] proposed a new family of algorithms, LAPS_ÎČ, with 0
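    The EQUI policy described above is simple to state precisely. A minimal sketch, assuming processors are divisible (the function name and dict-based interface are our own illustration):

    ```python
    # EQUI: split the P processors evenly among the currently active jobs.
    # Assumes fractional (divisible) processor allocations.

    def equi_allocation(P, active_jobs):
        if not active_jobs:
            return {}
        share = P / len(active_jobs)
        return {job: share for job in active_jobs}
    ```

    The non-clairvoyance is visible in the signature: the allocation depends only on which jobs are currently alive, not on their remaining work or parallelizability.
    
    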

    Energy-Efficient Multiprocessor Scheduling for Flow Time and Makespan

    We consider energy-efficient scheduling on multiprocessors, where the speed of each processor can be individually scaled, and a processor consumes power $s^{\alpha}$ when running at speed $s$, for $\alpha > 1$. A scheduling algorithm needs to decide at any time both processor allocations and processor speeds for a set of parallel jobs with time-varying parallelism. The objective is to minimize the sum of the total energy consumption and a certain performance metric, which in this paper includes total flow time and makespan. For both objectives, we present instantaneous-parallelism clairvoyant (IP-clairvoyant) algorithms that are aware of the instantaneous parallelism of the jobs at any time but not of their future characteristics, such as remaining parallelism and work. For total flow time plus energy, we present an $O(1)$-competitive algorithm, which significantly improves upon the best known non-clairvoyant algorithm and is the first constant-competitive result on multiprocessor speed scaling for parallel jobs. In the case of makespan plus energy, which is considered for the first time in the literature, we present an $O(\ln^{1-1/\alpha} P)$-competitive algorithm, where $P$ is the total number of processors. We show that this algorithm is asymptotically optimal by providing a matching lower bound. In addition, we also study non-clairvoyant scheduling for total flow time plus energy, and present an algorithm that is $O(\ln P)$-competitive for jobs with arbitrary release times and $O(\ln^{1/\alpha} P)$-competitive for jobs with identical release times. Finally, we prove an $\Omega(\ln^{1/\alpha} P)$ lower bound on the competitive ratio of any non-clairvoyant algorithm, matching the upper bound of our algorithm for jobs with identical release times.
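    The flow-time-plus-energy tradeoff in the power model above has a clean single-job special case, which we sketch here as a sanity check (our illustration, not the paper's algorithm). Running work $w$ at a constant speed $s$ gives flow time $w/s$ and energy $s^{\alpha} \cdot (w/s) = w\,s^{\alpha-1}$; setting the derivative of the total to zero gives the optimal constant speed $s^* = (\alpha-1)^{-1/\alpha}$.

    ```python
    # Single job of work w run at constant speed s, power model s**alpha:
    #   flow time = w / s
    #   energy    = power * duration = s**alpha * (w / s) = w * s**(alpha - 1)
    # Minimizing cost(s) = w/s + w*s**(alpha-1) over s gives
    #   s* = (alpha - 1) ** (-1 / alpha).

    def cost(w, s, alpha):
        return w / s + w * s ** (alpha - 1)   # flow time + energy

    def optimal_speed(alpha):
        return (alpha - 1) ** (-1 / alpha)
    ```

    For the common cube-power model ($\alpha = 3$), this gives $s^* = 2^{-1/3} \approx 0.794$: running slower than speed 1 trades a little flow time for a larger energy saving.
    
    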