2,377 research outputs found
SELFISHMIGRATE: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors
We consider the classical problem of minimizing the total weighted flow-time
for unrelated machines in the online \emph{non-clairvoyant} setting. In this
problem, a set of jobs arrive over time to be scheduled on a set of
machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is
processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online
scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job,
but is not aware of the quantity $p_j$. We present the {\em first} online
algorithm that is {\em scalable} ($(1+\epsilon)$-speed
$O(1/\epsilon^2)$-competitive for any constant $\epsilon > 0$) for the
total weighted flow-time objective. No non-trivial results were known for this
setting, except for the most basic case of identical machines. Our result
resolves a major open problem in online scheduling theory. Moreover, we also
show that no job needs more than a logarithmic number of migrations. We further
extend our result and give a scalable algorithm for the objective of minimizing
total weighted flow-time plus energy cost for the case of unrelated machines.
The key algorithmic idea is to let jobs migrate selfishly until they converge
to an equilibrium. Towards this end, we define a game where each job's utility
is closely tied to the instantaneous increase in the objective that the job is
responsible for, and each machine declares a policy that assigns priorities to
jobs based on when they migrate to it and on the execution speeds. This is
similar in spirit to coordination mechanisms, which attempt to achieve near-optimal
welfare in the presence of selfish agents (jobs). To the best of our knowledge,
this is the first work that demonstrates the usefulness of ideas from
coordination mechanisms and Nash equilibria for designing and analyzing online
algorithms
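The equilibrium idea above can be illustrated with a toy best-response dynamic. This is a simplification for intuition only, not the SELFISHMIGRATE algorithm itself: the cost share below is a hypothetical proxy for a job's instantaneous weighted delay, and the machines are related (single speed each) rather than unrelated.

```python
def selfish_migrate(weights, speeds, max_rounds=1000):
    """weights[j]: weight of job j; speeds[i]: speed of machine i.
    Returns an assignment job -> machine that no job wants to leave."""
    m = len(speeds)
    assign = [0] * len(weights)            # start with every job on machine 0
    load = [0.0] * m
    load[0] = sum(weights)

    def cost(j, i):
        # hypothetical proxy for job j's instantaneous weighted delay on
        # machine i: its weight times the total weight sharing machine i
        # (including itself), divided by the machine's speed
        extra = 0.0 if assign[j] == i else weights[j]
        return weights[j] * (load[i] + extra) / speeds[i]

    for _ in range(max_rounds):
        moved = False
        for j in range(len(weights)):
            best = min(range(m), key=lambda i: cost(j, i))
            if cost(j, best) < cost(j, assign[j]) - 1e-12:
                load[assign[j]] -= weights[j]
                load[best] += weights[j]
                assign[j] = best
                moved = True
        if not moved:                      # equilibrium: no job can improve
            break
    return assign
```

With four unit-weight jobs and machines of speeds 2 and 1, the dynamic settles with three jobs on the fast machine and one on the slow one, since no single job can lower its own cost by moving.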
Energy-Efficient Multiprocessor Scheduling for Flow Time and Makespan
We consider energy-efficient scheduling on multiprocessors, where the speed
of each processor can be individually scaled, and a processor consumes power
$s^{\alpha}$ when running at speed $s$, for some constant $\alpha > 1$. A
scheduling algorithm needs to decide at any time both processor allocations
and processor speeds for a set of parallel jobs with time-varying parallelism.
The objective is to minimize the sum of the total energy consumption and a
certain performance metric, which in this paper includes total flow time and
makespan. For both objectives, we present instantaneous-parallelism
clairvoyant (IP-clairvoyant) algorithms that are aware of the instantaneous
parallelism of the jobs at any time but not their future characteristics, such
as remaining parallelism and work. For total flow time plus energy, we present
an $O(1)$-competitive algorithm, which significantly improves upon the best
known non-clairvoyant algorithm and is the first constant competitive result
on multiprocessor speed scaling for parallel jobs. In the case of makespan
plus energy, which is considered for the first time in the literature, we
present a competitive algorithm whose ratio depends only on the total number
of processors $P$, and we show that this algorithm is asymptotically optimal
by providing a matching lower bound. In addition, we also study
non-clairvoyant scheduling for total flow time plus energy, and present an
algorithm whose competitive ratio grows with the number of processors for jobs
with arbitrary release times and improves for jobs with identical release
times. Finally, we prove a lower bound on the competitive ratio of any
non-clairvoyant algorithm, matching the upper bound of our algorithm for
jobs with identical release time
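As a small worked example of the speed-scaling model used above (power $s^{\alpha}$ at speed $s$), consider a single job of work $w$ run at a constant speed: flow time is $w/s$, energy is $s^{\alpha}\cdot(w/s) = w\,s^{\alpha-1}$, and calculus gives the minimizer of flow plus energy as $s^* = (\alpha-1)^{-1/\alpha}$. The sketch below checks this closed form against a grid search; the numbers are illustrative and not taken from the paper.

```python
# Single-job sanity check for the speed-scaling model (power s^alpha).
# For one job of work w at constant speed s:
#   flow time = w / s
#   energy    = s^alpha * (w / s) = w * s^(alpha - 1)
# so cost(s) = w/s + w*s^(alpha-1); setting cost'(s) = 0 gives
#   s* = (alpha - 1)^(-1/alpha)

def cost(s, w=1.0, alpha=3.0):
    return w / s + w * s ** (alpha - 1)

def optimal_speed(alpha=3.0):
    return (alpha - 1) ** (-1.0 / alpha)

# brute-force grid search agrees with the closed form
s_grid = min((0.01 * k for k in range(1, 1000)), key=cost)
assert abs(s_grid - optimal_speed()) < 0.01
```

For $\alpha = 3$ this gives $s^* = 2^{-1/3} \approx 0.794$: running faster than this wastes energy, running slower wastes flow time.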
Immediation: toward the selfless other?
One should hear the calling of two hyperbolic selves within this questioning concerning what I would propose to call here the desire for panop-tech-clair-voyance: the Selfless interpreted as an infinite reactivity (machinery); the Selfless interpreted as an infinite responsibility (agency)
Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach
In this paper, we study the $k$-forest problem in the model of resource
augmentation. In the $k$-forest problem, given an edge-weighted graph $G=(V,E)$,
a parameter $k$, and a set of $m$ demand pairs of vertices, the
objective is to construct a minimum-cost subgraph that connects at least $k$
demands. The problem is hard to approximate---the best-known approximation
ratio is $O(\min\{\sqrt{n}, \sqrt{k}\})$. Furthermore, $k$-forest is as hard to
approximate as the notoriously-hard densest $k$-subgraph problem.
While the $k$-forest problem is hard to approximate in the worst case, we
show that with the use of resource augmentation, we can efficiently approximate
it up to a constant factor.
First, we restate the problem in terms of the number of demands that are {\em
not} connected. In particular, the objective of the $k$-forest problem can be
viewed as to remove at most $m-k$ demands and find a minimum-cost subgraph that
connects the remaining demands. We use this perspective of the problem to
explain the performance of our algorithm (in terms of the augmentation) in a
more intuitive way.
Specifically, we present a polynomial-time algorithm for the $k$-forest
problem that, for every $\epsilon > 0$, removes at most $(1+\epsilon)(m-k)$ demands and has
cost no more than a constant factor (depending on $\epsilon$) times the cost of an optimal algorithm
that removes at most $m-k$ demands
Minimizing Flow Time in the Wireless Gathering Problem
We address the problem of efficient data gathering in a wireless network
through multi-hop communication. We focus on the objective of minimizing the
maximum flow time of a data packet. We prove that no polynomial-time algorithm
for this problem can have approximation ratio less than $\Omega(m^{1/3})$ when
$m$ packets have to be transmitted, unless $P = NP$. We then use resource
augmentation to assess the performance of a FIFO-like strategy. We prove that
this strategy is 5-speed optimal, i.e., its cost remains within the optimal
cost if we allow the algorithm to transmit data at a speed 5 times higher than
that of the optimal solution we compare to
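To give a feel for a FIFO-style gathering strategy, here is a drastically simplified simulation on a path network: the sink sits at node 0, only one transmission may happen anywhere per time step (a crude stand-in for radio interference, much simpler than the model in the paper), and the channel always goes to the oldest outstanding packet.

```python
def fifo_gathering(packets):
    """packets: list of (release_time, start_node) on a path 0-1-2-...,
    with the sink at node 0 and every start_node >= 1.
    Returns the maximum flow time under the FIFO rule."""
    order = sorted(range(len(packets)), key=lambda j: packets[j][0])
    pos = [node for (_, node) in packets]
    done = [None] * len(packets)
    t = 0
    while any(d is None for d in done):
        # the oldest released, unfinished packet gets the channel this step
        avail = [j for j in order if done[j] is None and packets[j][0] <= t]
        if avail:
            j = avail[0]                  # FIFO: earliest release first
            pos[j] -= 1                   # one hop toward the sink
            if pos[j] == 0:
                done[j] = t + 1           # reaches the sink at time t+1
        t += 1
    return max(done[j] - packets[j][0] for j in range(len(packets)))
```

For example, two packets released at time 0 from nodes 2 and 1 take 2 and 3 time units respectively to drain, so the maximum flow time is 3.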
Graceful Degradation in Semi-Clairvoyant Scheduling
In the Vestal model of mixed-criticality systems, jobs are characterized by multiple different estimates of their actual, but unknown, worst-case execution time (WCET) parameters. Some recent research has focused upon a semi-clairvoyant model for mixed-criticality systems in which it is assumed that each job reveals upon arrival which of its WCET parameters it will respect. We study the problem of scheduling such semi-clairvoyant systems to ensure graceful degradation of service to less critical jobs in the event that the systems exhibit high-criticality behavior. We propose multiple different interpretations of graceful degradation in such systems, and derive efficient scheduling algorithms that are capable of ensuring graceful degradation under these different interpretations
Online Scheduling on Identical Machines using SRPT
Due to its optimality on a single machine for the problem of minimizing
average flow time, Shortest-Remaining-Processing-Time (SRPT) appears to be the
most natural algorithm to consider for the problem of minimizing average flow
time on $m$ identical machines. It is known that SRPT achieves the best
possible competitive ratio on multiple machines up to a constant factor. Using
resource augmentation, SRPT is known to achieve total flow time at most that
of the optimal solution when given machines of speed $2 - \frac{1}{m}$. Further,
it is known that SRPT's competitive ratio improves as the speed increases;
SRPT is $s$-speed $\frac{1}{s}$-competitive when $s \geq 2 - \frac{1}{m}$.
However, a gap has persisted in our understanding of SRPT. Before this
work, the performance of SRPT was not known when SRPT is given
$(1+\epsilon)$-speed for $0 < \epsilon < 1 - \frac{1}{m}$, even though it has been
thought that SRPT is $(1+\epsilon)$-speed $O(\frac{1}{\epsilon})$-competitive for over a decade.
Resolving this question was suggested in Open Problem 2.9 from the survey
"Online Scheduling" by Pruhs, Sgall, and Torng \cite{PruhsST}, and we answer
the question in this paper. We show that SRPT is \emph{scalable} on
identical machines. That is, we show SRPT is $(1+\epsilon)$-speed
$O(\frac{1}{\epsilon})$-competitive for any $\epsilon > 0$. We complement this by showing
that SRPT is $(1+\epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive for the
objective of minimizing the $\ell_k$-norms of flow time on identical
machines. Both of our results rely on new potential functions that capture the
structure of SRPT. Our results, combined with previous work, show that SRPT
is the best possible online algorithm in essentially every aspect when
migration is permissible.
Comment: Accepted for publication at SODA. This version fixes an error in a
preliminary version
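The SRPT rule itself is easy to state: at each instant, run the (up to) $m$ released jobs with the least remaining processing time, migrating freely. A small discrete-time simulation of this rule follows; it is a sketch for intuition, where the `speed` knob mimics resource augmentation and the step size trades accuracy for run time.

```python
def srpt_total_flow(jobs, m, speed=1.0):
    """jobs: list of (release_time, processing_time).  Simulates SRPT with
    migration on m identical machines in small time steps and returns the
    total flow time.  speed > 1 models resource augmentation."""
    dt = 0.01                                # step size; smaller = finer
    remaining = {j: p for j, (_, p) in enumerate(jobs)}
    completion = {}
    t = 0.0
    while remaining:
        # released, unfinished jobs, shortest remaining work first
        active = sorted((r, j) for j, r in remaining.items()
                        if jobs[j][0] <= t + 1e-9)
        for _, j in active[:m]:              # run up to m jobs this step
            remaining[j] -= speed * dt
            if remaining[j] <= 1e-9:
                completion[j] = t + dt       # job finishes within this step
                del remaining[j]
        t += dt
    return sum(completion[j] - jobs[j][0] for j in range(len(jobs)))
```

On one machine, two jobs of sizes 1 and 2 released at time 0 finish at times 1 and 3 under SRPT (total flow time 4); on two machines they run in parallel and finish at times 1 and 2 (total flow time 3).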
Distributed Processes, Distributed Cognizers and Collaborative Cognition
Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing ("know-how"). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking -- only whether it can generate doing. The processes that generate thinking and know-how are "distributed" within the heads of thinkers, but not across thinkers' heads. Hence there is no such thing as distributed cognition, only collaborative cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains' real-time interactive potential in ways that were not possible in oral, written or print interactions
- β¦