
    Malleable Scheduling Beyond Identical Machines

    In malleable job scheduling, jobs can be executed simultaneously on multiple machines with the processing time depending on the number of allocated machines. Jobs are required to be executed non-preemptively and in unison, in the sense that they occupy, during their execution, the same time interval over all the machines of the allocated set. In this work, we study generalizations of malleable job scheduling inspired by standard scheduling on unrelated machines. Specifically, we introduce a general model of malleable job scheduling, where each machine has a (possibly different) speed for each job, and the processing time of a job j on a set of allocated machines S depends on the total speed of S for j. For machines with unrelated speeds, we show that the optimal makespan cannot be approximated within a factor less than e/(e-1), unless P = NP. On the positive side, we present polynomial-time algorithms with approximation ratios 2e/(e-1) for machines with unrelated speeds, 3 for machines with uniform speeds, and 7/3 for restricted assignments on identical machines. Our algorithms are based on deterministic LP rounding and result in sparse schedules, in the sense that each machine shares at most one job with other machines. We also prove lower bounds on the integrality gap of 1+φ for unrelated speeds (φ is the golden ratio) and 2 for uniform speeds and restricted assignments. To indicate the generality of our approach, we show that it also yields constant-factor approximation algorithms (i) for minimizing the sum of weighted completion times and (ii) for a variant where the effective speed of a set of allocated machines is determined by the L_p norm of their speeds.
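
    A minimal sketch of the speed-dependent model described above, assuming the natural interpretation that the processing time of job j on machine set S is the job's size divided by the total speed of S for j; the job names, speed table, and schedule below are hypothetical, not taken from the paper.

        # Hypothetical instance: sizes p[j] and unrelated speeds s[i][j] (machine i, job j).
        p = {"a": 6.0, "b": 4.0, "c": 9.0}
        s = {0: {"a": 1.0, "b": 2.0, "c": 1.0},
             1: {"a": 3.0, "b": 1.0, "c": 1.0},
             2: {"a": 0.5, "b": 1.0, "c": 2.0}}

        def proc_time(j, machines):
            """Processing time of job j on machine set S: size over total speed (assumed model)."""
            return p[j] / sum(s[i][j] for i in machines)

        def makespan(schedule):
            """schedule: list of (job, machine set, start time); a job runs in unison on its set.
            Returns the makespan after checking that no machine runs two jobs at once."""
            busy = {}  # machine -> list of (start, end) intervals already claimed
            finish = 0.0
            for j, machines, start in schedule:
                end = start + proc_time(j, machines)
                for i in machines:
                    for (b0, b1) in busy.get(i, []):
                        assert end <= b0 or start >= b1, f"overlap on machine {i}"
                    busy.setdefault(i, []).append((start, end))
                finish = max(finish, end)
            return finish

        # Job "a" uses machines {0, 1} in unison; "b" and "c" each run on a single machine.
        print(makespan([("a", {0, 1}, 0.0), ("b", {2}, 0.0), ("c", {2}, 4.0)]))  # 8.5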

    Efficient Algorithms for Scheduling Moldable Tasks

    We study the problem of scheduling n independent moldable tasks on m processors that arises in large-scale parallel computations. When tasks are monotonic, the best known result is a (3/2+ε)-approximation algorithm for makespan minimization with a complexity linear in n and polynomial in log m and 1/ε, where ε is arbitrarily small. We propose a new perspective on the existing speedup models: the speedup of a task T_j is linear when the number p of assigned processors is small (up to a threshold δ_j), while it is monotonic when p ranges in [δ_j, k_j]; the bound k_j indicates an unacceptable overhead when parallelizing on too many processors. For a given integer δ ≥ 5, let u = ⌈√δ⌉ - 1. In this paper, we propose a (1/θ(δ))(1+ε)-approximation algorithm for makespan minimization with a complexity O(n log(n/ε) log m), where θ(δ) = ((u+1)/(u+2))(1 - k/m) (m ≫ k). As a by-product, we also propose a θ(δ)-approximation algorithm for throughput maximization with a common deadline, with a complexity O(n² log m).
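
    A small sketch of the two-regime speedup model described in the abstract, assuming t_j(p) = t_j(1)/p in the linear regime up to the threshold δ_j and a tabulated monotone regime for p up to k_j; the concrete numbers are made up for illustration.

        def exec_time(t1, delta, times_beyond, p):
            """Execution time on p processors under the assumed two-regime model: linear
            speedup up to the threshold delta, then tabulated times for p = delta+1 .. k
            (the monotone regime); p > k is rejected as an unacceptable overhead."""
            k = delta + len(times_beyond)
            if not 1 <= p <= k:
                raise ValueError(f"p must lie in [1, {k}]")
            return t1 / p if p <= delta else times_beyond[p - delta - 1]

        def is_monotonic(t1, delta, times_beyond):
            """Monotonic task: time non-increasing and work p * t(p) non-decreasing in p."""
            k = delta + len(times_beyond)
            times = [exec_time(t1, delta, times_beyond, p) for p in range(1, k + 1)]
            works = [p * t for p, t in enumerate(times, start=1)]
            return all(x >= y for x, y in zip(times, times[1:])) and \
                   all(x <= y for x, y in zip(works, works[1:]))

        # Illustrative task: t_j(1) = 60, linear speedup up to delta_j = 5, then times for p = 6..8 (k_j = 8).
        print(exec_time(60.0, 5, [10.5, 9.5, 9.0], 7))   # 9.5
        print(is_monotonic(60.0, 5, [10.5, 9.5, 9.0]))   # True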

    Scheduling Monotone Moldable Jobs in Linear Time

    A moldable job is a job that can be executed on an arbitrary number of processors, and whose processing time depends on the number of processors allotted to it. A moldable job is monotone if its work does not decrease as the number of allotted processors increases. We consider the problem of scheduling monotone moldable jobs to minimize the makespan. We argue that, for certain compact input encodings, a polynomial algorithm has a running time polynomial in n and log(m), where n is the number of jobs and m is the number of machines. We describe how the monotony of jobs can be used to counteract the increased problem complexity that arises from compact encodings, and give tight bounds on the approximability of the problem with compact encoding: it is NP-hard to solve optimally, but admits a PTAS. The main focus of this work is efficient approximation algorithms. We describe different techniques to exploit the monotony of the jobs for better running times, and present a (3/2+ε)-approximation algorithm whose running time is polynomial in log(m) and 1/ε, and only linear in the number n of jobs.
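
    One plausible illustration (not necessarily the paper's technique) of how monotony enables running times polynomial in log(m): because a monotone job's time is non-increasing in the allotment, the smallest number of processors meeting a target deadline can be found by binary search with O(log m) evaluations; the oracle interface below is an assumption.

        import math

        def min_processors(time_oracle, m, deadline):
            """Smallest p in [1, m] with time_oracle(p) <= deadline, or None if none exists.
            Monotony means the time is non-increasing in p, so binary search suffices
            instead of scanning all m possible allotments."""
            if time_oracle(m) > deadline:
                return None
            lo, hi = 1, m
            while lo < hi:
                mid = (lo + hi) // 2
                if time_oracle(mid) <= deadline:
                    hi = mid
                else:
                    lo = mid + 1
            return lo

        # Illustrative monotone job: time ceil(100 / p), on m = 1000 machines.
        print(min_processors(lambda p: math.ceil(100 / p), 1000, 7))  # 15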

    Split Scheduling with Uniform Setup Times

    We study a scheduling problem in which jobs may be split into parts, where the parts of a split job may be processed simultaneously on more than one machine. However, each part of a job requires a setup time on the machine where that part is processed, and during setup a machine cannot process or set up any other job. We concentrate on the basic case in which setup times are job-, machine-, and sequence-independent. Problems of this kind were encountered when modelling practical problems in planning disaster relief operations. Our main algorithmic result is a polynomial-time algorithm for minimising total completion time on two parallel identical machines. We argue why the same problem with three machines is not an easy extension of the two-machine case, leaving the complexity of this case as a tantalising open problem. We give a constant-factor approximation algorithm for the general case with any number of machines and a polynomial-time approximation scheme for a fixed number of machines. For the version with the objective of minimising weighted total completion time we prove NP-hardness. Finally, we conclude with an overview of the state of the art for other split scheduling problems with job-, machine-, and sequence-independent setup times.
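
    A short sketch that evaluates a candidate split schedule under the model above, assuming a common setup time s, that the parts on each machine run back to back, and that a job completes when its last part completes; the instance values are illustrative.

        def total_completion_time(machine_sequences, setup):
            """machine_sequences: per machine, a list of (job, amount) parts in processing order.
            Every part pays the setup time before its processing amount; a job completes
            when its last part completes. Returns the sum of job completion times."""
            completion = {}
            for parts in machine_sequences:
                t = 0.0
                for job, amount in parts:
                    t += setup + amount          # machine is blocked during setup and processing
                    completion[job] = max(completion.get(job, 0.0), t)
            return sum(completion.values())

        # Two identical machines, setup s = 1: job A (size 6) split evenly, job B (size 2) unsplit.
        print(total_completion_time([[("A", 3.0)], [("B", 2.0), ("A", 3.0)]], 1.0))  # 10.0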

    Efficient Parallel Scheduling of Malleable Tasks


    Power Strip Packing of Malleable Demands in Smart Grid

    We consider the problem of supplying electricity to a set N of customers in a smart-grid framework. Each customer requires a certain amount of electrical energy, which has to be supplied during the time interval [0,1]. We assume that each demand has to be supplied without interruption, with possible duration between ℓ and r, which are given system parameters (ℓ ≤ r). At each moment of time, the power of the grid is the sum of the consumption rates of all the demands being supplied at that moment. Our goal is to find an assignment that minimizes the power peak (the maximal power over [0,1]) while satisfying all the demands. To do this, we first find a lower bound on the optimal power peak. We show that the problem depends on whether or not the pair ℓ, r belongs to a "good" region G. If it does, then an optimal assignment almost perfectly fills the rectangle time × power = [0,1] × [0, A], with A being the sum of all the energy demands, thus achieving an optimal power peak A. Conversely, if ℓ, r do not belong to G, we identify a lower bound Ā > A on the optimal value of the power peak, introduce a simple linear-time algorithm that almost perfectly arranges all the demands in the rectangle [0, A/Ā] × [0, Ā], and show that it is asymptotically optimal.
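
    A compact sketch of how the power peak of a given assignment can be evaluated under this model: a demand with energy e, start t, and duration d ∈ [ℓ, r] draws constant power e/d, and a sweep over start/end events finds the peak; the numbers below are illustrative.

        def power_peak(demands):
            """demands: list of (energy, start, duration) with 0 <= start and start + duration <= 1.
            Each demand is served without interruption at constant rate energy/duration.
            Returns the maximal total power over [0, 1] via an event sweep."""
            events = []
            for energy, start, duration in demands:
                rate = energy / duration
                events.append((start, rate))               # demand switches on
                events.append((start + duration, -rate))   # demand switches off
            events.sort(key=lambda e: (e[0], e[1]))        # process switch-offs before switch-ons
            peak = power = 0.0
            for _, delta in events:
                power += delta
                peak = max(peak, power)
            return peak

        # Three demands, each with a duration in [l, r] = [0.4, 0.6].
        print(power_peak([(2.0, 0.0, 0.5), (1.5, 0.5, 0.5), (1.0, 0.3, 0.4)]))  # 6.5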

    Machine Scheduling with Resource Dependent Processing Times

    We consider several parallel machine scheduling settings with the objective to minimize the schedule makespan. The most general of these settings is unrelated parallel machine scheduling. We assume that, in addition to its machine dependence, the processing time of any job depends on the usage of a scarce renewable resource. A given amount of that resource, e.g. workers, can be distributed over the jobs in process at any time, and the more of that resource is allocated to a job, the smaller its processing time becomes. This model generalizes classical machine scheduling problems, adding a time-resource tradeoff. It is also a natural variant of a generalized assignment problem studied previously by Shmoys and Tardos. On the basis of integer programming formulations for relaxations of the respective problems, we use LP rounding techniques to allocate resources to jobs and to assign jobs to machines. Combined with Graham's list scheduling, we thus prove the existence of constant-factor approximation algorithms. Our performance guarantee is 6.83 for the most general case of unrelated parallel machine scheduling. We improve this bound for two special cases, namely to 5.83 whenever the jobs are assigned to machines beforehand, and to 5+ε, for any ε>0, whenever the processing times do not depend on the machine. Moreover, we discuss the tightness of the relaxations and derive inapproximability results.
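
    Since the algorithms combine LP rounding with Graham's list scheduling, here is a minimal sketch of the classic list-scheduling subroutine on identical machines; the LP-rounding and resource-allocation steps are not reproduced, and the job list is illustrative.

        import heapq

        def list_schedule(processing_times, m):
            """Graham's list scheduling: take the jobs in the given order and always assign
            the next job to the machine that becomes available first. Returns the makespan.
            Classic result: this is a (2 - 1/m)-approximation on identical machines."""
            loads = [0.0] * m                 # current finishing time of each machine
            heapq.heapify(loads)
            for p in processing_times:
                earliest = heapq.heappop(loads)
                heapq.heappush(loads, earliest + p)
            return max(loads)

        print(list_schedule([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0], m=3))  # 12.0 (the optimum here is 9.0)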

    A (3/2+ɛ) approximation algorithm for scheduling malleable and non-malleable parallel tasks

    In this paper we study a scheduling problem with malleable and non-malleable parallel tasks on m processors. A non-malleable parallel task is one that runs in parallel on a specific given number of processors. The goal is to find a non-preemptive schedule on the m processors which minimizes the makespan, i.e. the latest task completion time. The previous best result is the list scheduling algorithm with an absolute approximation ratio of 2. On the other hand, there does not exist an approximation algorithm for scheduling non-malleable parallel tasks with ratio smaller than 1.5, unless P = NP. In this paper we show that a schedule with length (1.5+ε)·OPT can be computed for the scheduling problem in time O(n log n) + f(1/ε). Furthermore, we present a (1.5+ε)-approximation algorithm for scheduling malleable parallel tasks. Finally, we show how to extend our algorithms to the variant with additional release dates.
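
    A hedged sketch of one common form of the 2-approximate list-scheduling baseline mentioned in the abstract, for non-malleable tasks that each require a fixed number of processors: whenever processors free up, start the waiting tasks in list order that fit; the instance is illustrative.

        import heapq

        def rigid_list_schedule(tasks, m):
            """Greedy list scheduling for non-malleable parallel tasks given as (processors, time)
            pairs: start every not-yet-started task, in list order, that fits into the free
            processors; when nothing more fits, advance to the next task completion.
            Returns the makespan of the resulting schedule."""
            free, now, makespan = m, 0.0, 0.0
            running = []                      # min-heap of (finish_time, processors_released)
            pending = list(tasks)
            while pending:
                still_waiting = []
                for need, length in pending:
                    if need <= free:          # task fits: start it now
                        free -= need
                        heapq.heappush(running, (now + length, need))
                        makespan = max(makespan, now + length)
                    else:
                        still_waiting.append((need, length))
                pending = still_waiting
                if pending:                   # nothing else fits: wait for the next completion
                    now, released = heapq.heappop(running)
                    free += released
            return makespan

        # 5 processors; each task is (processors required, processing time).
        print(rigid_list_schedule([(3, 4.0), (2, 2.0), (4, 3.0), (1, 5.0)], m=5))  # 7.0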