
    On the Configuration-LP for Scheduling on Unrelated Machines

    One of the most important open problems in machine scheduling is the problem of scheduling a set of jobs on unrelated machines to minimize the makespan. The best known approximation algorithm for this problem guarantees an approximation factor of 2, while it is NP-hard to approximate with a ratio better than 3/2. Closing this gap has been open for over 20 years. The best known approximation factors are achieved by LP-based algorithms, and the strongest known linear programming formulation for the problem is the configuration-LP. We show that the configuration-LP has an integrality gap of 2 even for the special case of unrelated graph balancing, where each job can be assigned to at most two machines. In particular, our result implies that a large family of cuts does not help to diminish the integrality gap of the canonical assignment-LP. We also present cases of the problem which can be approximated with a factor better than 2; these cases provide valuable insights for constructing an NP-hardness reduction that improves the known lower bound. Very recently, Svensson [22] studied the restricted assignment case, where each job can only be assigned to a given set of machines on which it has the same processing time. He shows that in this setting the configuration-LP has an integrality gap of 33/17 ≈ 1.94. Hence, our results imply that the unrelated graph balancing case is significantly more complex than the restricted assignment case. Then we turn to another objective function: maximizing the minimum machine load. For the case that every job can be assigned to at most two machines, we give a purely combinatorial 2-approximation, which is best possible unless P = NP. This improves on the computationally costly LP-based (2+ε)-approximation algorithm by Chakrabarty et al. [7].
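    For reference, the configuration-LP discussed here (and in several of the abstracts below) can be stated, for a guessed makespan T, as the following feasibility system. A configuration for machine i is a job set C with total processing time at most T on i, i.e., C \in \mathcal{C}_i(T) = \{C \subseteq J : \sum_{j \in C} p_{ij} \le T\}, and y_{i,C} is the fractional indicator that machine i receives exactly the jobs in C:

        \sum_{C \in \mathcal{C}_i(T)} y_{i,C} = 1    for every machine i,
        \sum_{i} \sum_{C \in \mathcal{C}_i(T) : j \in C} y_{i,C} = 1    for every job j,
        y_{i,C} \ge 0.

    The smallest T for which this system is feasible is the configuration-LP lower bound; the integrality gap measures how far the optimal makespan can lie above it.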

    Compact LP Relaxations for Allocation Problems

    We consider the restricted versions of Scheduling on Unrelated Machines and the Santa Claus problem. In these problems we are given a set of jobs and a set of machines. Every job j has a size p_j and a set of allowed machines Gamma(j), i.e., it can only be assigned to those machines. In the first problem, the objective is to minimize the maximum load among all machines; in the latter problem, it is to maximize the minimum load. For these problems, the strongest LP relaxation known is the configuration LP. The configuration LP has an exponential number of variables and cannot be solved exactly unless P = NP. Our main result is a new LP relaxation for these problems. This LP has only O(n^3) variables and constraints. It is a further relaxation of the configuration LP, yet it obeys the best bounds known for the integrality gap of the configuration LP (11/6 and 4, respectively). For the configuration LP, these bounds were obtained using two local search algorithms. These algorithms, however, differ significantly in presentation. In this paper, we give a meta-algorithm based on the local search ideas. With an instantiation for each objective function, we prove the bounds for the new compact LP relaxation (and hence, in particular, for the configuration LP). This way, we bring out many analogies between the two proofs which were not apparent before.
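    For the Santa Claus (max-min) objective mentioned above, the configuration LP takes the analogous covering form (stated here for orientation, with a guessed target value T): a configuration for machine i is a set C of jobs with i \in Gamma(j) for all j \in C and \sum_{j \in C} p_j \ge T, and the LP requires

        \sum_{C} y_{i,C} = 1    for every machine i,
        \sum_{i} \sum_{C : j \in C} y_{i,C} \le 1    for every job j,
        y_{i,C} \ge 0,

    i.e., every machine picks one configuration of value at least T while no job is used more than once; the largest feasible T is the LP bound.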

    Estimating The Makespan of The Two-Valued Restricted Assignment Problem

    We consider a special case of the scheduling problem on unrelated machines, namely the Restricted Assignment Problem with two different processing times. We show that the configuration LP has an integrality gap of at most \frac{5}{3} + \frac{1}{156} \approx 1.6731 for this problem. This allows us to estimate the optimal makespan within a factor of \frac{5}{3} + \frac{1}{156}, improving upon the previously best known estimation algorithm with ratio \frac{11}{6} \approx 1.833 due to Chakrabarty, Khanna, and Li [CKL15].
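    For concreteness, the stated bound evaluates to

        \frac{5}{3} + \frac{1}{156} = \frac{260}{156} + \frac{1}{156} = \frac{261}{156} = \frac{87}{52} \approx 1.6731,

    which sits well below the previous barrier of \frac{11}{6} \approx 1.833.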

    Scheduling Kernels via Configuration LP

    Makespan minimization (on parallel identical or unrelated machines) is arguably the most natural and studied scheduling problem. A common approach in practical algorithm design is to reduce the size of a given instance by a fast preprocessing step while still being able to recover key information after this reduction. This notion is formally studied as kernelization (or simply, a kernel) - a polynomial-time procedure which yields an equivalent instance whose size is bounded in terms of some given parameter. It follows from known results that makespan minimization parameterized by the longest job processing time p_max has a kernelization yielding a reduced instance whose size is exponential in p_max. Can this be reduced to polynomial in p_max? We answer this affirmatively not only for makespan minimization, but also for the (more complicated) objective of minimizing the weighted sum of completion times, and also in the setting of unrelated machines when the number of machine kinds is a parameter. Our algorithm first solves the Configuration LP and, based on its solution, constructs a solution of an intermediate problem, called huge N-fold integer programming. This solution is further reduced in size by a series of steps, until its encoding length is polynomial in the parameters. Then, we show that huge N-fold IP is in NP, which implies that there is a polynomial reduction back to our scheduling problem, yielding a kernel. Our technique is highly novel in the context of kernelization, and our structural theorem about the Configuration LP is of independent interest. Moreover, we show a polynomial kernel for huge N-fold IP conditional on the so-called separation subproblem being solvable in polynomial time. Considering that integer programming does not admit polynomial kernels except for quite restricted cases, our "conditional kernel" provides new insight.
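    As background on the intermediate problem: an N-fold integer program asks to minimize c^T x over integer vectors x with l \le x \le u and Ax = b, where the constraint matrix has the block structure

        A = \begin{pmatrix} A_1 & A_1 & \cdots & A_1 \\ A_2 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_2 \end{pmatrix},

    i.e., a small set of global constraints (the A_1 row) links N bricks that otherwise only satisfy local constraints (the A_2 blocks). Roughly speaking, the "huge" variant encodes many identical brick types succinctly by multiplicities given in binary; the exact variant used in the paper may differ in details.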

    Energy Efficient Scheduling via Partial Shutdown

    Motivated by issues of saving energy in data centers, we define a collection of new problems referred to as "machine activation" problems. The central framework we introduce considers a collection of m machines (unrelated or related), with each machine i having an activation cost of a_i. There is also a collection of n jobs that need to be performed, and p_{i,j} is the processing time of job j on machine i. We assume that there is an activation cost budget of A -- we would like to select a subset S of the machines to activate with total cost a(S) \le A and find a schedule for the n jobs on the machines in S minimizing the makespan (or any other metric). For the general unrelated machine activation problem, our main results are that if there is a schedule with makespan T and activation cost A, then for any \epsilon > 0 we can obtain a schedule whose makespan and activation cost exceed T and A, respectively, by only constant factors depending on \epsilon. We also consider assignment costs for jobs as in the generalized assignment problem, and using our framework, provide algorithms that minimize the machine activation and the assignment cost simultaneously. In addition, we present a greedy algorithm which only works for the basic version and yields a makespan of 2T and an activation cost of A(1 + \ln n). For the uniformly related parallel machine scheduling problem, we develop a polynomial time approximation scheme that outputs a schedule with the property that the activation cost of the subset of machines is at most A and the makespan is at most (1+\epsilon)T for any \epsilon > 0.
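    A natural LP relaxation in this framework (an illustrative sketch, not necessarily the exact relaxation used by the authors) couples fractional activation variables y_i with assignment variables x_{i,j} for a target makespan T:

        minimize \sum_i a_i y_i
        subject to \sum_i x_{i,j} = 1    for every job j,
                   \sum_j p_{i,j} x_{i,j} \le T y_i    for every machine i,
                   x_{i,j} \le y_i    for all i, j,
                   0 \le x_{i,j}, y_i \le 1.

    Rounding such a relaxation must simultaneously control the activation cost \sum_i a_i y_i and the load of every opened machine, which is why the guarantees are naturally bicriteria in (makespan, budget).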

    Better Unrelated Machine Scheduling for Weighted Completion Time via Random Offsets from Non-Uniform Distributions

    In this paper we consider the classic scheduling problem of minimizing total weighted completion time on unrelated machines when jobs have release times, i.e., R | r_{ij} | \sum_j w_j C_j in the three-field notation. For this problem, a 2-approximation is known, based on a novel convex programming relaxation (J. ACM 2001, Skutella). It has been a long-standing open problem whether one can improve upon this 2-approximation (Open Problem 8 in J. of Sched. 1999 by Schuurman and Woeginger). We answer this question in the affirmative by giving a 1.8786-approximation. We achieve this via a surprisingly simple linear program, combined with a novel rounding algorithm and analysis. A key ingredient of our algorithm is the use of random offsets sampled from non-uniform distributions. We also consider the preemptive version of the problem, i.e., R | r_{ij}, pmtn | \sum_j w_j C_j. We again use the idea of sampling offsets from non-uniform distributions to give the first better-than-2 approximation for this problem. This improvement also requires the use of a configuration LP with variables for each job's complete schedule, along with a more careful analysis. For both the non-preemptive and preemptive versions, we break the approximation barrier of 2 for the first time.
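    In the three-field notation used above, R denotes unrelated machines, r_{ij} a release time of job j that may depend on the machine i, and pmtn the preemptive variant; the objective is

        \min \sum_j w_j C_j,

    where C_j is the completion time of job j in the schedule and w_j its weight.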

    Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations

    We study approximation algorithms for scheduling problems with the objective of minimizing total weighted completion time, under identical and related machine models with job precedence constraints. We give algorithms that improve upon many previous state-of-the-art results that had stood for 15 to 20 years. A major theme in these results is the use of time-indexed linear programming relaxations. These are natural relaxations for their respective problems, but surprisingly had not been studied in the literature. We also consider the scheduling problem of minimizing total weighted completion time on unrelated machines. The recent breakthrough result of [Bansal-Srinivasan-Svensson, STOC 2016] gave a (1.5 - c)-approximation for the problem, based on a lift-and-project SDP relaxation. Our main result is that a (1.5 - c)-approximation can also be achieved using a natural and considerably simpler time-indexed LP relaxation for the problem. We hope this relaxation can provide new insights into the problem.
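    To illustrate what is meant by a time-indexed relaxation, consider the simplest single-machine setting (a sketch only; the relaxations in the paper are adapted to identical and related machines with precedence constraints). With variables x_{jt} indicating that job j completes at time t (and x_{jt} = 0 for t < p_j), one can write

        \min \sum_j w_j \sum_t t \, x_{jt}
        subject to \sum_t x_{jt} = 1    for every job j,
                   \sum_j \sum_{s=t}^{t+p_j-1} x_{js} \le 1    for every time t,
                   x_{jt} \ge 0,

    where the second constraint says that at most one job is being processed at any point in time.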