SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors
We consider the classical problem of minimizing the total weighted flow-time
for unrelated machines in the online \emph{non-clairvoyant} setting. In this
problem, a set of jobs arrive over time to be scheduled on a set of
machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is
processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online
scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job,
but is not aware of the quantity $p_j$. We present the {\em first} online
algorithm that is {\em scalable} ((1+\eps)-speed O(1/\eps^2)-competitive for
any constant \eps > 0) for the
total weighted flow-time objective. No non-trivial results were known for this
setting, except for the most basic case of identical machines. Our result
resolves a major open problem in online scheduling theory. Moreover, we also
show that no job needs more than a logarithmic number of migrations. We further
extend our result to the objective of minimizing total weighted flow-time plus
energy cost on unrelated machines, obtaining a scalable algorithm for that
setting as well. The key algorithmic idea is to let jobs
migrate selfishly until they converge to an equilibrium. Towards this end, we
define a game in which each job's utility is closely tied to the
instantaneous increase in the objective that the job is responsible for, and
each machine declares a policy that assigns priorities to jobs based on when
they migrate to it and on their execution speeds. This has a spirit similar to
coordination mechanisms that attempt to achieve near optimum welfare in the
presence of selfish agents (jobs). To the best of our knowledge, this is the
first work that demonstrates the usefulness of ideas from coordination
mechanisms and Nash equilibria for designing and analyzing online algorithms.
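The selfish-migration idea can be sketched with a toy best-response simulation (illustrative only: the cost model and function below are simplified assumptions, not the paper's actual SELFISHMIGRATE policies). Each job repeatedly moves to the machine that minimizes its own instantaneous cost until no job wants to move.

```python
# Toy best-response dynamics in the spirit of selfish migration
# (illustrative; the paper's machine policies are more refined).
# rate[i][j] = processing speed of job j on machine i.
# A job's hypothetical cost = total weight on its machine / its rate there.

def best_response(weights, rate, max_rounds=100):
    n_machines, n_jobs = len(rate), len(weights)
    assign = [0] * n_jobs                  # start with all jobs on machine 0
    for _ in range(max_rounds):
        moved = False
        for j in range(n_jobs):
            def cost(i):
                # weight machine i would carry if job j were assigned to it
                load = sum(weights[k] for k in range(n_jobs)
                           if assign[k] == i or k == j)
                return load / rate[i][j]
            best = min(range(n_machines), key=cost)
            if cost(best) < cost(assign[j]):
                assign[j] = best
                moved = True
        if not moved:                      # equilibrium: no job wants to move
            break
    return assign

weights = [3.0, 1.0, 2.0]
rate = [[1.0, 1.0, 1.0],   # machine 0
        [2.0, 0.5, 1.0]]   # machine 1
print(best_response(weights, rate))  # [1, 0, 0]
```

In this tiny instance the heavy job migrates to the machine where it runs twice as fast, after which no job can lower its own cost by moving.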
Scheduling MapReduce Jobs under Multi-Round Precedences
We consider non-preemptive scheduling of MapReduce jobs with multiple tasks
in the practical scenario where each job requires several map-reduce rounds. We
seek to minimize the average weighted completion time and consider scheduling
on identical and unrelated parallel processors. For identical processors, we
present LP-based O(1)-approximation algorithms. For unrelated processors, the
approximation ratio naturally depends on the maximum number of rounds of any
job. Since the number of rounds per job in typical MapReduce algorithms is a
small constant, our scheduling algorithms achieve a small approximation ratio
in practice. For the single-round case, we substantially improve on previously
best known approximation guarantees for both identical and unrelated
processors. Moreover, we conduct an experimental analysis and compare the
performance of our algorithms against a fast heuristic and a lower bound on the
optimal solution, thus demonstrating their promising practical performance.
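The objective in this abstract, total weighted completion time $\sum_j w_j C_j$, is simple to evaluate for a concrete schedule; a minimal single-machine sketch with illustrative values (the abstract's setting additionally has parallel processors and multi-round precedences):

```python
# Total weighted completion time, sum over jobs of w_j * C_j, for one
# machine running jobs back-to-back in the given order.

def weighted_completion_time(jobs):
    """jobs: list of (processing_time, weight) in execution order."""
    t, total = 0.0, 0.0
    for p, w in jobs:
        t += p               # completion time C_j of this job
        total += w * t
    return total

# On a single machine, weighted-shortest-processing-time (WSPT) order,
# i.e. decreasing w/p, is the classical optimum:
print(weighted_completion_time([(1.0, 3.0), (2.0, 1.0)]))  # 3*1 + 1*3 = 6.0
print(weighted_completion_time([(2.0, 1.0), (1.0, 3.0)]))  # 1*2 + 3*3 = 11.0
```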
Energy-Efficient Multiprocessor Scheduling for Flow Time and Makespan
We consider energy-efficient scheduling on multiprocessors, where the speed
of each processor can be individually scaled, and a processor consumes power
$s^{\alpha}$ when running at speed $s$, for some constant $\alpha > 1$. A
scheduling algorithm
needs to decide at any time both processor allocations and processor speeds for
a set of parallel jobs with time-varying parallelism. The objective is to
minimize the sum of the total energy consumption and certain performance
metric, which in this paper includes total flow time and makespan. For both
objectives, we present instantaneous parallelism clairvoyant (IP-clairvoyant)
algorithms that are aware of the instantaneous parallelism of the jobs at any
time but not their future characteristics, such as remaining parallelism and
work. For total flow time plus energy, we present an $O(1)$-competitive
algorithm, which significantly improves upon the best known non-clairvoyant
algorithm and is the first constant competitive result on multiprocessor speed
scaling for parallel jobs. In the case of makespan plus energy, which is
considered for the first time in the literature, we present an
$O(\ln^{1/\alpha} P)$-competitive algorithm, where $P$ is the total number of
processors. We show that this algorithm is asymptotically optimal by providing
a matching lower bound. In addition, we also study non-clairvoyant scheduling
for total flow time plus energy, and present an algorithm that is competitive
for jobs with arbitrary release times and achieves an improved competitive
ratio for jobs with identical release times. Finally, we prove a lower bound
on the competitive ratio of any non-clairvoyant algorithm, matching the upper
bound of our algorithm for jobs with identical release times.
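The power model underlying this abstract, $P(s)=s^{\alpha}$ with $\alpha>1$, rewards running at constant speed: by convexity, spreading a fixed amount of work evenly over the available time minimizes energy. A small numeric check with illustrative values:

```python
# Energy under the power model P(s) = s**alpha with alpha > 1.
# Doing work W in time T at constant speed W/T uses T * (W/T)**alpha;
# by convexity, any other speed profile finishing the same work in the
# same time uses at least as much energy.

alpha = 3.0
W, T = 10.0, 4.0

def energy(segments, alpha):
    """segments: list of (speed, duration) pairs."""
    return sum(s**alpha * dt for s, dt in segments)

constant = [(W / T, T)]                  # one segment at speed 2.5
uneven   = [(4.0, 2.0), (1.0, 2.0)]      # same total work: 8 + 2 = 10

print(energy(constant, alpha))
print(energy(uneven, alpha))             # strictly larger
```

Here the uneven profile finishes identical work in identical time but more than doubles the energy spent.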
Profitable Scheduling on Multiple Speed-Scalable Processors
We present a new online algorithm for profit-oriented scheduling on multiple
speed-scalable processors. Moreover, we provide a tight analysis of the
algorithm's competitiveness. Our results generalize and improve upon work by
\textcite{Chan:2010}, which considers a single speed-scalable processor. Using
significantly different techniques, we can not only extend their model to
multiprocessors but also prove an enhanced and tight competitive ratio for our
algorithm.
In our scheduling problem, jobs arrive over time and are preemptable. They
have different workloads, values, and deadlines. The scheduler may decide not
to finish a job but instead to suffer a loss equaling the job's value. However,
to process a job's workload until its deadline the scheduler must invest a
certain amount of energy. The cost of a schedule is the sum of lost values and
invested energy. In order to finish a job the scheduler has to determine which
processors to use and set their speeds accordingly. A processor's energy
consumption is power \Power{s} integrated over time, where
\Power{s}=s^{\alpha} is the power consumption when running at speed $s$.
Since we consider the online variant of the problem, the scheduler has no
knowledge about future jobs. This problem was introduced by
\textcite{Chan:2010} for the case of a single processor. They presented an
online algorithm whose competitive ratio depends only on the power exponent
$\alpha$. We provide an online algorithm for the case of multiple processors
with an improved competitive ratio.
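The finish-or-drop tradeoff can be made concrete in a simplified offline, single-job setting (an illustrative sketch, not the paper's online algorithm): finishing work $w$ within an available window of length $T$ at constant speed costs $T\,(w/T)^{\alpha} = w^{\alpha}/T^{\alpha-1}$ energy, so it pays to finish only when that is at most the job's value.

```python
# Simplified offline, single-job version of the finish-or-drop tradeoff
# (illustrative only). With P(s) = s**alpha, finishing work w within a
# window of length T at constant speed w/T costs w**alpha / T**(alpha-1).

def best_action(w, T, value, alpha):
    finish_cost = w**alpha / T**(alpha - 1)
    if finish_cost <= value:
        return ("finish", finish_cost)   # invest the energy
    return ("drop", value)               # suffer the job's value as loss

print(best_action(w=2.0, T=4.0, value=5.0, alpha=3.0))  # ('finish', 0.5)
print(best_action(w=4.0, T=2.0, value=5.0, alpha=3.0))  # ('drop', 5.0)
```

Note the sharp dependence on the window length: halving $T$ multiplies the finishing cost by $2^{\alpha-1}$, which is what makes tight jobs worth dropping.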
Energy Efficient Scheduling and Routing via Randomized Rounding
We propose a unifying framework based on configuration linear programs and
randomized rounding, for different energy optimization problems in the dynamic
speed-scaling setting. We apply our framework to various scheduling and routing
problems in heterogeneous computing and networking environments. We first
consider the energy minimization problem of scheduling a set of jobs on a set
of parallel speed scalable processors in a fully heterogeneous setting. For
both the preemptive-non-migratory and the preemptive-migratory variants, our
approach allows us to obtain solutions of almost the same quality as for the
homogeneous environment. By exploiting the result for the
preemptive-non-migratory variant, we are able to improve the best known
approximation ratio for the single processor non-preemptive problem.
Furthermore, we show that our approach allows us to obtain a constant-factor
approximation algorithm for the power-aware preemptive job shop scheduling
problem. Finally, we consider the min-power routing problem where we are given
a network modeled by an undirected graph and a set of uniform demands that have
to be routed on integral routes from their sources to their destinations so
that the energy consumption is minimized. We improve the best known
approximation ratio for this problem.
Scheduling Algorithms for Procrastinators
This paper presents scheduling algorithms for procrastinators, where the
speed that a procrastinator executes a job increases as the due date
approaches. We give optimal off-line scheduling policies for linearly
increasing speed functions. We then explain the computational/numerical issues
involved in implementing this policy. We next explore the online setting,
showing that there exist adversaries that force any online scheduling policy to
miss due dates. This impossibility result motivates the problem of minimizing
the maximum interval stretch of any job; the interval stretch of a job is the
job's flow time divided by the job's due date minus release time. We show that
several common scheduling strategies, including the "hit-the-highest-nail"
strategy beloved by procrastinators, have arbitrarily large maximum interval
stretch. Then we give the "thrashing" scheduling policy and show that it is a
\Theta(1) approximation algorithm for the maximum interval stretch.
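The interval-stretch metric from this abstract is straightforward to compute; a minimal sketch (the function name and values are illustrative):

```python
# Interval stretch as defined in the abstract above: a job's flow time
# divided by (due date - release time).

def interval_stretch(release, completion, due):
    flow_time = completion - release
    return flow_time / (due - release)

# A job released at t=0 and due at t=10 that only finishes at t=25
# has interval stretch 25 / 10 = 2.5:
print(interval_stretch(release=0.0, completion=25.0, due=10.0))
```

A stretch of 1 means the job finished exactly on time; values above 1 quantify how badly the procrastinator overshot the due date relative to the slack originally available.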
Energy-efficient algorithms for non-preemptive speed-scaling
We improve complexity bounds for energy-efficient speed scheduling problems
for both the single processor and multi-processor cases. Energy conservation
has become a major concern, so revisiting traditional scheduling problems to
take into account the energy consumption has been part of the agenda of the
scheduling community for the past few years.
We consider the energy minimizing speed scaling problem introduced by Yao et
al. where we wish to schedule a set of jobs, each with a release date, deadline
and work volume, on a set of identical processors. The processors may change
speed as a function of time, and the energy they consume is the $\alpha$-th
power of their speed integrated over time. The objective is then to find a
feasible schedule which minimizes
the total energy used.
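For a single job in this model, convexity of $s^{\alpha}$ implies that running at the constant speed $w/(d-r)$ over the whole window $[r, d]$ is optimal; a minimal sketch with illustrative values (scheduling many overlapping jobs on multiple processors is the hard case the abstract addresses):

```python
# Minimal-energy schedule for ONE job in the Yao et al. model sketched
# above: by convexity of s**alpha, run the job of work volume w at the
# constant speed w / (d - r) across its whole window [r, d].

def optimal_speed(w, r, d):
    return w / (d - r)

def single_job_energy(w, r, d, alpha):
    s = optimal_speed(w, r, d)
    return (d - r) * s**alpha        # energy = duration * speed**alpha

print(optimal_speed(6.0, 0.0, 3.0))            # 2.0
print(single_job_energy(6.0, 0.0, 3.0, 2.0))   # 3 * 2**2 = 12.0
```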
We show that in the setting with an arbitrary number of processors where all
work volumes are equal, there is a $\tilde{B}_{\alpha}$-approximation
algorithm, where $\tilde{B}_{\alpha}$ is the generalized Bell number. This is
the first constant
factor algorithm for this problem. This algorithm extends to general unequal
processor-dependent work volumes, up to losing a factor depending on $r$ in
the approximation, where $r$ is the maximum ratio between two work volumes. We
then show this latter problem is APX-hard, even in the special case when all
release dates and deadlines are equal and $r$ is 4.
In the single processor case, we introduce a new linear programming
formulation of speed scaling and prove a bound on its integrality gap that
depends only on $\alpha$. As a corollary, we obtain an approximation algorithm
for the single processor case, improving on the previous best known bound.