
    Move-optimal schedules for parallel machines to minimize total weighted completion time

    We study the minimum total weighted completion time problem on identical machines, which is known to be strongly $\mathcal{NP}$-hard. We analyze a simple local search heuristic, moving jobs from one machine to another. The local optima can be shown to be approximately optimal with approximation ratio 1.5. In case all jobs have equal Smith ratios, the approximation ratio is at most 1.092.
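    A minimal sketch of this kind of move-based local search, assuming jobs are given as (processing time, weight) pairs, each machine sequences its jobs by Smith's rule, and a move is accepted only if it strictly improves the objective; the function names and details below are illustrative, not taken from the paper:

```python
def weighted_completion(machine_jobs):
    """Total weighted completion time of one machine, with its jobs
    sequenced by Smith's rule (non-increasing w/p)."""
    seq = sorted(machine_jobs, key=lambda job: job[0] / job[1])  # job = (p, w)
    t = total = 0.0
    for p, w in seq:
        t += p
        total += w * t
    return total


def move_local_search(jobs, m):
    """Assign (p, w) jobs to m identical machines, then move single jobs
    between machines as long as the objective strictly improves."""
    machines = [[] for _ in range(m)]
    for i, job in enumerate(jobs):                 # arbitrary initial assignment
        machines[i % m].append(job)
    improved = True
    while improved:
        improved = False
        for a in range(m):
            for job in list(machines[a]):          # snapshot; machines[a] may change
                cost_a = weighted_completion(machines[a])
                for b in range(m):
                    if b == a:
                        continue
                    before = cost_a + weighted_completion(machines[b])
                    machines[a].remove(job)
                    machines[b].append(job)
                    after = weighted_completion(machines[a]) + weighted_completion(machines[b])
                    if after < before - 1e-9:
                        improved = True
                        break                      # keep the improving move
                    machines[b].remove(job)        # otherwise undo it
                    machines[a].append(job)
    return machines
```

    The loop terminates because every accepted move strictly decreases the total weighted completion time, so the final assignment is move-optimal in the sense discussed above.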

    Lower bounds for Smith's rule in stochastic machine scheduling

    We consider the problem of minimizing the weighted sum of completion times in nonpreemptive parallel machine scheduling. In a landmark paper from 1986, Kawaguchi and Kyan [5] showed that scheduling the jobs according to the WSPT rule (also known as Smith's rule) has a performance guarantee of $\frac{1}{2}(1+\sqrt{2}) \approx 1.207$. They also gave an instance to show that this bound is tight. We consider the stochastic variant of this problem in which the processing times are exponentially distributed random variables. We show, somewhat counterintuitively, that the performance guarantee of the WSEPT rule, the stochastic analogue of WSPT, is not better than 1.229. This constitutes the first lower bound for WSEPT in this setting, and in particular, it shows that even with exponentially distributed processing times, stochastic scheduling has somewhat nastier worst-case examples than deterministic scheduling. In that respect, our analysis sheds new light on the fundamental differences between deterministic and stochastic scheduling.
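    For reference, a short sketch of WSPT list scheduling on identical parallel machines, the deterministic rule whose guarantee is quoted above (the code and names are illustrative, not taken from the paper):

```python
import heapq


def wspt_objective(jobs, m):
    """List-schedule (p, w) jobs on m identical machines in WSPT order
    (non-increasing w/p, i.e. Smith's rule) and return sum of w_j * C_j."""
    order = sorted(jobs, key=lambda job: job[0] / job[1])  # non-decreasing p/w
    loads = [0.0] * m                                      # machine available times
    heapq.heapify(loads)
    objective = 0.0
    for p, w in order:
        start = heapq.heappop(loads)                       # earliest free machine
        completion = start + p
        objective += w * completion
        heapq.heappush(loads, completion)
    return objective
```

    On a single machine (m = 1) this ordering is optimal; on parallel machines its worst-case ratio is the Kawaguchi-Kyan bound stated above.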

    Scheduling Distributed Clusters of Parallel Machines: Primal-Dual and LP-based Approximation Algorithms

    The Map-Reduce computing framework rose to prominence with datasets of such size that dozens of machines on a single cluster were needed for individual jobs. As datasets approach the exabyte scale, a single job may need distributed processing not only on multiple machines, but on multiple clusters. We consider a scheduling problem to minimize weighted average completion time of n jobs on m distributed clusters of parallel machines. In keeping with the scale of the problems motivating this work, we assume that (1) each job is divided into m "subjobs" and (2) distinct subjobs of a given job may be processed concurrently. When each cluster is a single machine, this is the NP-hard concurrent open shop problem. A clear limitation of such a model is that a serial processing assumption sidesteps the issue of how different tasks of a given subjob might be processed in parallel. Our algorithms explicitly model clusters as pools of resources and effectively overcome this issue. Under a variety of parameter settings, we develop two constant factor approximation algorithms for this problem. The first algorithm uses an LP relaxation tailored to this problem from prior work. This LP-based algorithm provides strong performance guarantees. Our second algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster. This mapping-based algorithm is combinatorial and extremely fast. These are the first constant factor approximations for this problem.
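    As a purely hypothetical illustration of the setting (not the paper's algorithm), the sketch below turns a fixed job permutation into a schedule: each cluster is treated as a pool of identical machines, the tasks of a subjob may run concurrently on different machines of its cluster, and a job completes when its last subjob finishes:

```python
import heapq


def permutation_schedule(order, subjob_tasks, cluster_sizes, weights):
    """Toy model: schedule jobs in the fixed permutation `order` on every
    cluster, greedily placing each task of a subjob on the earliest free
    machine of that cluster. Returns the total weighted completion time.

    subjob_tasks[i][j] : task lengths of job j's subjob on cluster i
    cluster_sizes[i]   : number of machines in cluster i
    weights[j]         : weight of job j
    """
    completion = {j: 0.0 for j in order}
    for i, size in enumerate(cluster_sizes):
        loads = [0.0] * size                  # machine availability in cluster i
        heapq.heapify(loads)
        for j in order:
            finish = 0.0
            for task in subjob_tasks[i][j]:
                start = heapq.heappop(loads)  # earliest free machine in cluster i
                end = start + task
                finish = max(finish, end)
                heapq.heappush(loads, end)
            completion[j] = max(completion[j], finish)  # last subjob decides C_j
    return sum(weights[j] * completion[j] for j in order)
```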

    The Expected Competitive Ratio for Weighted Completion Time Scheduling

    A set of n independent jobs is to be scheduled without preemption on m identical parallel machines. For each job j, a diffuse adversary chooses the distribution $F_j$ of the random processing time $P_j$ from a certain class of distributions $\mathcal{F}_j$. The scheduler is given the expectation $\mu_j = E[P_j]$, but the actual duration is not known in advance. A positive weight $w_j$ is associated with each job j and all jobs are ready for execution at time zero. The scheduler determines a list of the jobs, which is then scheduled in a non-preemptive manner. The objective is to minimise the total weighted completion time $\sum_j w_j C_j$. The performance of an algorithm is measured with respect to the expected competitive ratio $\max_{F \in \mathcal{F}} E[\sum_j w_j C_j / \mathrm{OPT}]$, where $C_j$ denotes the completion time of job j and OPT the offline optimum value. We show a general bound on the expected competitive ratio for list scheduling algorithms, which holds for a class of so-called new-better-than-used processing time distributions. This class includes, among others, the exponential distribution. As a special case, we consider the popular rule weighted shortest expected processing time first (WSEPT), in which jobs are processed in order of non-decreasing $\mu_j/w_j$ ratios. We show that it achieves $E[\mathrm{WSEPT}/\mathrm{OPT}] \le 3 - 1/m$ for exponentially distributed processing times.
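    A small illustrative sketch (not taken from the paper) of the WSEPT ordering and a Monte Carlo estimate of the expected objective of a fixed list under exponential processing times; note that it estimates $E[\sum_j w_j C_j]$, not the expected competitive ratio, which would also require the realized offline optimum in each sample:

```python
import heapq
import random


def wsept_order(mu, w):
    """Order jobs by non-decreasing mu_j / w_j (the WSEPT rule)."""
    return sorted(range(len(mu)), key=lambda j: mu[j] / w[j])


def expected_objective(order, mu, w, m, runs=10_000, seed=0):
    """Monte Carlo estimate of E[sum_j w_j C_j] when the fixed list `order`
    is scheduled non-preemptively on m identical machines and processing
    times are exponential with means mu_j."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        loads = [0.0] * m
        heapq.heapify(loads)
        obj = 0.0
        for j in order:
            p = rng.expovariate(1.0 / mu[j])  # exponential with mean mu_j
            start = heapq.heappop(loads)      # next list job starts on the first free machine
            c = start + p
            obj += w[j] * c
            heapq.heappush(loads, c)
        total += obj
    return total / runs
```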

    Scheduling with processing set restrictions: a survey


    Existence Theorems for Scheduling to Meet Two Objectives

    We will look at the existence of schedules which are simultaneously near-optimal for two criteria. First, we will present some techniques for proving existence theorems, in a very general setting, for bicriterion scheduling problems. We will then use these techniques to prove existence theorems for a large class of problems. We will consider the relationship between objective functions based on completion time, flow time, lateness and the number of on-time jobs. We will also present negative results, first for the problem of simultaneously minimizing the maximum flow time and average weighted flow time, and second for minimizing the maximum flow time and simultaneously maximizing the number of on-time jobs. In some cases we will also present lower bounds and algorithms that approach our bicriterion existence theorems. Finally, we will improve upon our general existence results in one more specific environment.

    Scheduling identical parallel machines to minimize total weighted completion time

    A branch and bound algorithm is proposed for the problem of scheduling jobs on identical parallel machines to minimize the total weighted completion time. Based upon a formulation which partitions the period of processing into unit time intervals, the lower bounding scheme is derived by performing a Lagrangean relaxation of the machine capacity constraints. A special feature is that the multipliers are obtained by a simple heuristic method which allows each lower bound to be computed in polynomial time. This bounding scheme, along with a new dominance rule, is incorporated into a branch and bound algorithm. Computational experience indicates that it is superior to known algorithms.
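    A standard time-indexed formulation consistent with this description (the notation is ours and omits the constraints that keep each job's unit intervals contiguous) uses binary variables $x_{jt}$ indicating that job $j$ occupies unit interval $t$:

$$
\min \sum_j w_j C_j \quad \text{s.t.} \quad \sum_t x_{jt} = p_j \;\; \forall j, \qquad \sum_j x_{jt} \le m \;\; \forall t, \qquad x_{jt} \in \{0,1\},
$$

    where $C_j$ is the last interval occupied by job $j$. Dualizing the machine capacity constraints with multipliers $\lambda_t \ge 0$ gives the Lagrangean lower bound

$$
L(\lambda) \;=\; \min_x \sum_j \Big( w_j C_j + \sum_t \lambda_t x_{jt} \Big) \;-\; m \sum_t \lambda_t \;\le\; \mathrm{OPT},
$$

    which separates over jobs; choosing the multipliers by a fast heuristic, as the abstract indicates, keeps each bound computable in polynomial time.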