Stochastic scheduling on unrelated machines
Two important characteristics encountered in many real-world scheduling problems are heterogeneous machines/processors and a certain degree of uncertainty about the actual sizes of jobs. The first characteristic entails machine-dependent processing times of jobs and is captured by the classical unrelated machine scheduling model. The second characteristic is adequately addressed by stochastic processing times of jobs as they are studied in classical stochastic scheduling models. While there is an extensive but separate literature for the two scheduling models, we study for the first time a combined model that takes both characteristics into account simultaneously. Here, the processing time of job $j$ on machine $i$ is governed by the random variable $P_{ij}$, and its actual realization becomes known only upon job completion. With $w_j$ being the given weight of job $j$, we study the classical objective of minimizing the expected total weighted completion time $E[\sum_j w_j C_j]$, where $C_j$ is the completion time of job $j$. By means of a novel time-indexed linear programming relaxation, we compute in polynomial time a scheduling policy with performance guarantee $(3+\Delta)/2+\epsilon$. Here, $\epsilon > 0$ is arbitrarily small, and $\Delta$ is an upper bound on the squared coefficient of variation of the processing times. We show that the dependence of the performance guarantee on $\Delta$ is tight, as we obtain a lower bound of $\Omega(\Delta)$ for the type of policies that we use. When jobs also have individual release dates $r_j$, our bound is $2+\Delta+\epsilon$. Via $\Delta = 0$, the currently best known bounds for deterministic scheduling are contained as a special case.
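As a toy illustration of the objective in this combined model (not the paper's LP-based policy), the sketch below fixes a job-to-machine assignment, sequences each machine by the WSEPT rule (largest $w_j/E[P_{ij}]$ first), and estimates $E[\sum_j w_j C_j]$ by Monte Carlo. The instance data, the exponential processing-time distributions (for which $\Delta = 1$), and all names are assumptions made for the example.

```python
import random

# Toy sketch (not the paper's policy): estimate the expected total weighted
# completion time E[sum_j w_j C_j] of a fixed, non-anticipatory policy that
# assigns each job to a machine up front and sequences each machine's jobs
# in WSEPT order (largest w_j / E[P_ij] first). All data below are assumed.

random.seed(0)

weights = [3.0, 1.0, 2.0]                          # w_j
mean_proc = [[2.0, 4.0], [3.0, 1.0], [2.0, 2.0]]   # E[P_ij], jobs x machines
assignment = [0, 1, 0]                             # job j -> machine i

def draw_time(mean):
    # Exponential processing times: squared coefficient of variation is 1.
    return random.expovariate(1.0 / mean)

def simulate_once():
    total = 0.0
    for i in range(2):
        jobs = [j for j in range(3) if assignment[j] == i]
        # WSEPT: sequence by weight over expected processing time.
        jobs.sort(key=lambda j: weights[j] / mean_proc[j][i], reverse=True)
        t = 0.0
        for j in jobs:
            t += draw_time(mean_proc[j][i])  # realization revealed at completion
            total += weights[j] * t          # contributes w_j * C_j
    return total

est = sum(simulate_once() for _ in range(20000)) / 20000
print(round(est, 2))
```

By linearity of expectation the exact value for this instance is 15, so the estimate should land close to it.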
Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations
We study approximation algorithms for scheduling problems with the objective
of minimizing total weighted completion time, under identical and related
machine models with job precedence constraints. We give algorithms that improve
upon many previous 15-to-20-year-old state-of-the-art results. A major theme in
these results is the use of time-indexed linear programming relaxations. These
are natural relaxations for their respective problems, but surprisingly had not
been studied in the literature.
We also consider the scheduling problem of minimizing total weighted
completion time on unrelated machines. The recent breakthrough result of
[Bansal-Srinivasan-Svensson, STOC 2016] gave a $(3/2 - c)$-approximation for the
problem, for some small constant $c > 0$, based on a lift-and-project SDP
relaxation. Our main result is that a $(3/2 - c)$-approximation can also be
achieved using a natural and considerably simpler time-indexed LP relaxation for
the problem. We hope this relaxation can provide new insights into the problem.
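To make the central object concrete, here is a minimal single-machine sketch of a time-indexed relaxation for minimizing $\sum_j w_j C_j$ (the textbook "mean busy time" relaxation, not the multi-machine LPs of the paper); the instance is made up. Because the objective decomposes over unit time slots, this particular relaxation is solved exactly by filling slots greedily in Smith-ratio order, so no LP solver is needed.

```python
# Toy single-machine time-indexed relaxation for min sum_j w_j C_j.
# Variables x[j][t] would give the fraction of unit slot [t, t+1) spent on
# job j; the relaxation lower-bounds C_j by its mean busy time plus p_j/2:
#   C_j >= (1/p_j) * sum_t (t + 0.5) * x[j][t] + p_j / 2.
# For this relaxation, assigning slots greedily by Smith ratio w_j/p_j is
# an optimal fractional solution, so we compute its value directly.

p = [1, 2, 3]   # assumed integer processing times
w = [2, 1, 3]   # assumed weights

order = sorted(range(len(p)), key=lambda j: w[j] / p[j], reverse=True)

# Value of the time-indexed relaxation under the greedy slot assignment.
lp_bound = 0.0
t = 0
for j in order:
    mean_busy = sum(t + s + 0.5 for s in range(p[j])) / p[j]
    lp_bound += w[j] * (mean_busy + p[j] / 2)
    t += p[j]

# Non-preemptive WSPT schedule, optimal for 1 || sum w_j C_j by Smith's rule.
opt = 0.0
t = 0
for j in order:
    t += p[j]
    opt += w[j] * t

print(lp_bound, opt)
```

On a single machine the two values coincide, which is exactly why this relaxation is considered "natural"; the interesting behavior studied in the paper arises in the multi-machine settings.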
SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors
We consider the classical problem of minimizing the total weighted flow-time
for unrelated machines in the online \emph{non-clairvoyant} setting. In this
problem, a set of jobs arrive over time to be scheduled on a set of
machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is
processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online
scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job,
but is not aware of the quantity $p_j$. We present the \emph{first} online
algorithm that is \emph{scalable} ($(1+\epsilon)$-speed
$O(1/\epsilon^2)$-competitive for any constant $\epsilon > 0$) for the
total weighted flow-time objective. No non-trivial results were known for this
setting, except for the most basic case of identical machines. Our result
resolves a major open problem in online scheduling theory. Moreover, we also
show that no job needs more than a logarithmic number of migrations. We further
extend our result and give a scalable algorithm for the objective of minimizing
total weighted flow-time plus energy cost for the case of unrelated machines
and obtain a scalable algorithm. The key algorithmic idea is to let jobs
migrate selfishly until they converge to an equilibrium. Towards this end, we
define a game where each job's utility is closely tied to the
instantaneous increase in the objective that the job is responsible for, and
each machine declares a policy that assigns priorities to jobs based on when
they migrate to it and on the execution speeds. This has a spirit similar to
coordination mechanisms that attempt to achieve near optimum welfare in the
presence of selfish agents (jobs). To the best of our knowledge, this is the
first work that demonstrates the usefulness of ideas from coordination
mechanisms and Nash equilibria for designing and analyzing online algorithms.
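A heavily simplified stand-in for the selfish-migration idea (not the SelfishMigrate algorithm itself): jobs repeatedly best-respond by moving to the machine that minimizes a crude instantaneous cost, here the weighted load they would experience, until no job wants to move. The weights, speeds, and cost model are all assumptions for the sketch.

```python
# Toy best-response migration dynamics (a simplified stand-in, not the
# SelfishMigrate algorithm): each job moves to the machine where the load it
# would experience is smallest, until an equilibrium is reached.

weights = [4.0, 3.0, 2.0, 1.0]   # assumed job weights
speeds = [2.0, 1.0]              # assumed machine speeds
machine_of = [1, 1, 1, 1]        # start with every job on the slow machine

def load(i):
    # Total weight currently on machine i, divided by its speed.
    return sum(weights[j] for j in range(len(weights))
               if machine_of[j] == i) / speeds[i]

changed = True
while changed:
    changed = False
    for j in range(len(weights)):
        cur = machine_of[j]
        old_cost = load(cur)  # includes job j itself
        for i in range(len(speeds)):
            if i == cur:
                continue
            # Cost job j would experience after migrating to machine i.
            new_cost = load(i) + weights[j] / speeds[i]
            if new_cost < old_cost - 1e-9:
                machine_of[j] = i  # selfish improving move
                changed = True
                break

print(machine_of, [load(i) for i in range(2)])
```

For this instance the dynamics converge after two passes, with the two heaviest jobs on the fast machine; such weighted singleton load-balancing games are known to admit pure equilibria reachable by improving moves.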
The Quality of Equilibria for Set Packing Games
We introduce set packing games as an abstraction of situations in which
selfish players select subsets of a finite set of indivisible items, and
analyze the quality of several equilibria for this class of games. Assuming
that players are able to approximately play equilibrium strategies, we show
that the total quality of the resulting equilibrium solutions is only
moderately suboptimal. Our results are tight bounds on the price of anarchy for
three equilibrium concepts, namely Nash equilibria, subgame perfect equilibria,
and an equilibrium concept that we refer to as $k$-collusion Nash equilibrium.
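One way to see a price-of-anarchy bound concretely is brute force on a tiny instance. The sketch below uses an assumed toy abstraction of a set packing game (a player earns its subset's value only if it conflicts with no other chosen subset, which need not match the paper's exact model), enumerates all pure strategy profiles, and computes the pure price of anarchy.

```python
from itertools import product

# Assumed toy abstraction of a set packing game: each player picks one subset
# of items and earns its value only if the subset is disjoint from every other
# chosen subset. We enumerate all pure profiles, find the pure Nash equilibria,
# and compute the pure price of anarchy.

strategies = [
    [(frozenset("a"), 2.0), (frozenset("b"), 1.0)],   # player 0's options
    [(frozenset("a"), 1.0), (frozenset("b"), 2.0)],   # player 1's options
]

def payoffs(profile):
    out = []
    for k, (s, v) in enumerate(profile):
        clash = any(s & t for m, (t, _) in enumerate(profile) if m != k)
        out.append(0.0 if clash else v)
    return out

profiles = list(product(*strategies))
welfare = {pr: sum(payoffs(pr)) for pr in profiles}

def is_nash(profile):
    # No player can strictly improve by a unilateral deviation.
    for k in range(len(strategies)):
        base = payoffs(profile)[k]
        for alt in strategies[k]:
            dev = list(profile)
            dev[k] = alt
            if payoffs(tuple(dev))[k] > base + 1e-9:
                return False
    return True

nash = [pr for pr in profiles if is_nash(pr)]
opt = max(welfare.values())
poa = opt / min(welfare[pr] for pr in nash)
print(opt, poa)
```

Here two pure equilibria exist, with welfares 4 and 2, so the pure price of anarchy of this instance is 2: a bad equilibrium locks each player onto its less valuable subset.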
Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces
Quantum annealing algorithms belong to the class of metaheuristic tools,
applicable for solving binary optimization problems. Hardware implementations
of quantum annealing, such as the quantum annealing machines produced by D-Wave
Systems, have been subject to multiple analyses in research, with the aim of
characterizing the technology's usefulness for optimization and sampling tasks.
Here, we present a way to partially embed both Monte Carlo policy iteration for
finding an optimal policy on random observations and the combination of $n$
sub-optimal state-value functions for approximating an improved state-value
function, given a policy, for finite-horizon games with discrete state spaces on
a D-Wave 2000Q quantum processing unit (QPU). We explain how both problems can
be expressed as a quadratic unconstrained binary optimization (QUBO) problem,
and show that quantum-enhanced Monte Carlo policy evaluation allows for finding
equivalent or better state-value functions for a given policy with the same
number of episodes as a purely classical Monte Carlo algorithm.
Additionally, we describe a quantum-classical policy learning algorithm. Our
first and foremost aim is to explain how to represent and solve parts of these
problems with the help of the QPU, and not to prove supremacy over every
existing classical policy evaluation algorithm.
Comment: 17 pages, 7 figures
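For readers unfamiliar with the QUBO form mentioned above, here is a minimal classical example (no QPU involved, and unrelated to the paper's specific embedding): number partitioning written as a quadratic unconstrained binary objective and minimized by exhaustive enumeration. The instance is assumed.

```python
from itertools import product

# Minimal QUBO example, solved classically by brute force: number
# partitioning. Choose x_i in {0, 1} to split the numbers into two groups
# with sums as close as possible. The objective
#     (sum_i a_i * (2*x_i - 1))**2
# is a quadratic function of binary variables, i.e. exactly the kind of
# quadratic unconstrained binary optimization problem an annealer samples.

a = [3, 1, 2]  # assumed instance

def qubo_energy(x):
    diff = sum(ai * (2 * xi - 1) for ai, xi in zip(a, x))
    return diff * diff

best = min(product((0, 1), repeat=len(a)), key=qubo_energy)
print(best, qubo_energy(best))
```

A perfect partition ({3} versus {1, 2}) exists here, so the minimum energy is 0; on hardware, the same objective would be mapped to qubit biases and couplers instead of being enumerated.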
- …