
    Stochastic scheduling on unrelated machines

    Get PDF
    Two important characteristics encountered in many real-world scheduling problems are heterogeneous machines/processors and a certain degree of uncertainty about the actual sizes of jobs. The first characteristic entails machine-dependent processing times of jobs and is captured by the classical unrelated machine scheduling model. The second characteristic is adequately addressed by stochastic processing times of jobs as they are studied in classical stochastic scheduling models. While there is an extensive but separate literature for the two scheduling models, we study for the first time a combined model that takes both characteristics into account simultaneously. Here, the processing time of job $j$ on machine $i$ is governed by random variable $P_{ij}$, and its actual realization becomes known only upon job completion. With $w_j$ being the given weight of job $j$, we study the classical objective to minimize the expected total weighted completion time $E[\sum_j w_j C_j]$, where $C_j$ is the completion time of job $j$. By means of a novel time-indexed linear programming relaxation, we compute in polynomial time a scheduling policy with performance guarantee $(3+\Delta)/2+\epsilon$. Here, $\epsilon>0$ is arbitrarily small, and $\Delta$ is an upper bound on the squared coefficient of variation of the processing times. We show that the dependence of the performance guarantee on $\Delta$ is tight, as we obtain a $\Delta/2$ lower bound for the type of policies that we use. When jobs also have individual release dates $r_{ij}$, our bound is $(2+\Delta)+\epsilon$. Via $\Delta=0$, currently best known bounds for deterministic scheduling are contained as a special case.
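
    As a worked illustration of the quantities in this guarantee (not code from the paper), the sketch below estimates the squared coefficient of variation $\Delta$ from sampled processing times and evaluates the bounds $(3+\Delta)/2+\epsilon$ and $(2+\Delta)+\epsilon$; the distributions and the value of $\epsilon$ are invented for the example.

import random
import statistics

def squared_cv(samples):
    """Squared coefficient of variation: Var[P] / E[P]^2."""
    mean = statistics.fmean(samples)
    return statistics.pvariance(samples) / (mean ** 2)

# Hypothetical processing-time distributions P_ij for 2 machines x 3 jobs
# (exponential-like times, drawn here only to produce example numbers).
random.seed(0)
dists = {(i, j): [random.expovariate(1.0 / (5 + 2 * i + j)) for _ in range(10_000)]
         for i in range(2) for j in range(3)}

# Delta upper-bounds the squared coefficient of variation over all P_ij.
delta = max(squared_cv(samples) for samples in dists.values())

eps = 0.01  # arbitrary small constant for the illustration
print(f"Delta ~= {delta:.3f}")
print(f"guarantee without release dates: (3 + Delta)/2 + eps ~= {(3 + delta) / 2 + eps:.3f}")
print(f"guarantee with release dates:    (2 + Delta) + eps   ~= {(2 + delta) + eps:.3f}")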

    Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations

    Full text link
    We study approximation algorithms for scheduling problems with the objective of minimizing total weighted completion time, under identical and related machine models with job precedence constraints. We give algorithms that improve upon many previous 15-to-20-year-old state-of-the-art results. A major theme in these results is the use of time-indexed linear programming relaxations. These are natural relaxations for their respective problems, but surprisingly have not been studied in the literature. We also consider the scheduling problem of minimizing total weighted completion time on unrelated machines. The recent breakthrough result of [Bansal-Srinivasan-Svensson, STOC 2016] gave a $(1.5-c)$-approximation for the problem, based on a lift-and-project SDP relaxation. Our main result is that a $(1.5-c)$-approximation can also be achieved using a natural and considerably simpler time-indexed LP relaxation for the problem. We hope this relaxation can provide new insights into the problem.
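
    For concreteness, one standard completion-time-indexed formulation of the unrelated-machine problem reads as follows; the relaxations used in the papers above differ in details, so this is only a sketch. Here $x_{ijt}$ is interpreted (fractionally) as job $j$ completing on machine $i$ at time $t$, and $p_{ij}$ is its processing time there.

\begin{align*}
\min\;\; & \sum_{j} w_j \sum_{i,t} t \, x_{ijt} \\
\text{s.t.}\;\; & \sum_{i} \sum_{t \ge p_{ij}} x_{ijt} = 1 && \text{for every job } j, \\
& \sum_{j} \sum_{s=t}^{t + p_{ij} - 1} x_{ijs} \le 1 && \text{for every machine } i \text{ and time slot } t, \\
& x_{ijt} \ge 0 && \text{for all } i, j, t.
\end{align*}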

    SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors

    Full text link
    We consider the classical problem of minimizing the total weighted flow-time for unrelated machines in the online non-clairvoyant setting. In this problem, a set of jobs $J$ arrive over time to be scheduled on a set of $M$ machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job, but is not aware of the quantity $p_j$. We present the first online algorithm that is scalable ($(1+\epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive for any constant $\epsilon > 0$) for the total weighted flow-time objective. No non-trivial results were known for this setting, except for the most basic case of identical machines. Our result resolves a major open problem in online scheduling theory. Moreover, we also show that no job needs more than a logarithmic number of migrations. We further extend our result and give a scalable algorithm for the objective of minimizing total weighted flow-time plus energy cost on unrelated machines. The key algorithmic idea is to let jobs migrate selfishly until they converge to an equilibrium. Towards this end, we define a game in which each job's utility is closely tied to the instantaneous increase in the objective that the job is responsible for, and each machine declares a policy that assigns priorities to jobs based on when they migrate to it, together with the execution speeds. This has a spirit similar to coordination mechanisms that attempt to achieve near-optimum welfare in the presence of selfish agents (jobs). To the best of our knowledge, this is the first work that demonstrates the usefulness of ideas from coordination mechanisms and Nash equilibria for designing and analyzing online algorithms.
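
    The migration idea can be illustrated with a heavily simplified toy simulation; this is not the SELFISHMIGRATE algorithm itself, and every rate, weight, and rule below is invented for the illustration. Jobs repeatedly move to the machine where their weighted delay contribution looks smallest, and each machine shares its speed among its current jobs in proportion to their weights.

from collections import defaultdict

# Toy instance: ell[i][j] = rate at which machine i processes job j, job weights,
# and (hidden, non-clairvoyant) processing lengths the scheduler never reads directly.
ell = [[1.0, 0.5, 2.0], [2.0, 1.0, 0.5]]
weights = [3.0, 1.0, 2.0]
remaining = [4.0, 2.0, 3.0]   # p_j, revealed only when a job finishes
assignment = [0, 0, 0]        # start all jobs on machine 0
dt = 0.1

def cost_share(i, j, jobs_on_i):
    """Rough proxy for job j's instantaneous contribution to weighted flow time
    if it ran on machine i together with the jobs currently assigned there."""
    total_w = sum(weights[k] for k in jobs_on_i) + (0 if j in jobs_on_i else weights[j])
    return total_w / max(ell[i][j], 1e-9)

t = 0.0
while any(r > 1e-9 for r in remaining):
    # Selfish migration step: every unfinished job moves to its cheapest machine.
    jobs_by_machine = defaultdict(set)
    for j, i in enumerate(assignment):
        if remaining[j] > 1e-9:
            jobs_by_machine[i].add(j)
    for j in range(len(weights)):
        if remaining[j] > 1e-9:
            best = min(range(len(ell)), key=lambda i: cost_share(i, j, jobs_by_machine[i]))
            jobs_by_machine[assignment[j]].discard(j)
            jobs_by_machine[best].add(j)
            assignment[j] = best
    # Processing step: each machine splits its effort among its jobs by weight.
    for i, jobs in jobs_by_machine.items():
        total_w = sum(weights[j] for j in jobs)
        for j in jobs:
            remaining[j] -= dt * ell[i][j] * weights[j] / total_w
    t += dt

print(f"all jobs finished by (simulated) time {t:.1f}")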

    The Quality of Equilibria for Set Packing Games

    Get PDF
    We introduce set packing games as an abstraction of situations in which $n$ selfish players select subsets of a finite set of indivisible items, and analyze the quality of several equilibria for this class of games. Assuming that players are able to approximately play equilibrium strategies, we show that the total quality of the resulting equilibrium solutions is only moderately suboptimal. Our results are tight bounds on the price of anarchy for three equilibrium concepts, namely Nash equilibria, subgame perfect equilibria, and an equilibrium concept that we refer to as $k$-collusion Nash equilibrium.
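
    As a toy illustration of the kind of welfare gap such bounds quantify, the snippet below brute-forces the pure Nash equilibria of a tiny two-player set packing game and compares their welfare to the optimum; the priority-based allocation rule, the instance, and the unit item values are assumptions made for this example, not the paper's model.

from itertools import product

# Toy rules: each player picks one bundle of items (or passes); bundles are awarded
# in a fixed priority order, and a blocked bundle (overlapping taken items) earns 0.
bundles = {
    "alice": [frozenset(), frozenset({"x"}), frozenset({"x", "y"})],
    "bob":   [frozenset(), frozenset({"y", "z"})],
}
players = list(bundles)          # priority order: alice before bob

def payoffs(profile):
    taken, out = set(), {}
    for player, chosen in zip(players, profile):
        if chosen and not (chosen & taken):
            out[player] = float(len(chosen))   # unit value per item
            taken |= chosen
        else:
            out[player] = 0.0
    return out

def is_pure_nash(profile):
    current = payoffs(profile)
    for idx, player in enumerate(players):
        for alternative in bundles[player]:
            deviation = list(profile)
            deviation[idx] = alternative
            if payoffs(tuple(deviation))[player] > current[player]:
                return False
    return True

profiles = list(product(*(bundles[p] for p in players)))
optimum = max(sum(payoffs(prof).values()) for prof in profiles)
worst_nash = min(sum(payoffs(prof).values()) for prof in profiles if is_pure_nash(prof))
print(f"optimum welfare = {optimum}, worst pure Nash welfare = {worst_nash}, "
      f"ratio on this instance = {optimum / worst_nash:.2f}")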

    Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces

    Full text link
    Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum annealing machines produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology's usefulness for optimization and sampling tasks. Here, we present a way to partially embed both Monte Carlo policy iteration for finding an optimal policy on random observations and $n$ sub-optimal state-value functions for approximating an improved state-value function given a policy, for finite-horizon games with discrete state spaces, on a D-Wave 2000Q quantum processing unit (QPU). We explain how both problems can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that quantum-enhanced Monte Carlo policy evaluation allows for finding equivalent or better state-value functions for a given policy with the same number of episodes compared to a purely classical Monte Carlo algorithm. Additionally, we describe a quantum-classical policy learning algorithm. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of the QPU, and not to prove supremacy over every existing classical policy evaluation algorithm. Comment: 17 pages, 7 figures
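
    For reference, the purely classical baseline mentioned in the abstract can be sketched as plain first-visit Monte Carlo policy evaluation over sampled episodes; the tiny random-walk game and all parameters below are invented for the illustration, and the QUBO encoding used on the QPU is not reproduced here.

import random
from collections import defaultdict

def sample_episode(policy, n_states=5, horizon=10):
    """Roll out one finite-horizon episode of a toy random-walk game.
    States are 0..n_states-1; reaching the last state gives reward 1 and ends the episode."""
    state, episode = 0, []
    for _ in range(horizon):
        action = policy(state)                       # -1 = step left, +1 = step right
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        episode.append((state, reward))
        state = next_state
        if reward > 0:
            break
    return episode

def mc_policy_evaluation(policy, n_episodes=2000, gamma=0.9):
    """First-visit Monte Carlo estimate of the state-value function V^pi."""
    returns = defaultdict(list)
    for _ in range(n_episodes):
        episode = sample_episode(policy)
        g, first_visit_return = 0.0, {}
        # Walk the episode backwards; the last overwrite per state is its first visit.
        for state, reward in reversed(episode):
            g = reward + gamma * g
            first_visit_return[state] = g
        for state, g_state in first_visit_return.items():
            returns[state].append(g_state)
    return {s: sum(v) / len(v) for s, v in returns.items()}

def mostly_right(state):
    """A fixed stochastic policy: step right with probability 3/4."""
    return random.choice([1, 1, 1, -1])

random.seed(1)
values = mc_policy_evaluation(mostly_right)
print({s: round(v, 3) for s, v in sorted(values.items())})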