24,892 research outputs found
Approximation Algorithms for Energy, Reliability, and Makespan Optimization Problems
We consider the problem of scheduling an application on a parallel computational platform. The application is a particular task graph: either a linear chain of tasks, or a set of independent tasks. The platform is made of identical processors whose speed can be dynamically modified. It is also subject to failures: if a processor is slowed down to decrease energy consumption, it has a higher chance of failing. The scheduling problem therefore requires us to re-execute or replicate tasks (i.e., execute the same task twice, either on the same processor or on two distinct processors) in order to increase reliability. It is a tri-criteria problem: the goal is to minimize energy consumption, while enforcing a bound on the total execution time (the makespan) and a constraint on the reliability of each task. Our main contribution is to propose approximation algorithms for linear chains of tasks and for independent tasks. For linear chains, we design a fully polynomial-time approximation scheme. In contrast, we show that no constant-factor approximation algorithm exists for independent tasks unless P=NP, and for this case we propose an approximation algorithm with a relaxation on the makespan constraint.
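The energy/reliability trade-off described above can be made concrete with a small numeric sketch. Everything below is an illustrative assumption, not the paper's exact model: energy grows with the square of the processor speed, the failure rate grows exponentially as speed drops (constants `LAMBDA0`, `D`, `S_MAX` are invented for the example), and re-execution simply runs the task a second time.

```python
import math

# Illustrative speed-scaling model (assumed, not the paper's exact formulation):
# a task of work w run at speed s takes time w/s and consumes energy w * s^2;
# lower speeds raise the failure rate lambda(s) = LAMBDA0 * exp(D * (S_MAX - s)),
# so slowing down saves energy but hurts reliability.

LAMBDA0, D, S_MAX = 1e-5, 3.0, 1.0  # hypothetical constants

def exec_time(w, s):
    return w / s

def energy(w, s):
    return w * s ** 2

def reliability(w, s):
    # Probability that a single execution at speed s succeeds
    # (exponential failure model).
    lam = LAMBDA0 * math.exp(D * (S_MAX - s))
    return math.exp(-lam * exec_time(w, s))

def reliability_with_reexecution(w, s):
    # Two independent attempts: the task fails only if both attempts fail.
    r = reliability(w, s)
    return 1 - (1 - r) ** 2

# Slowing a task of work 10 from full speed to 0.5 cuts its energy but
# lowers its single-attempt reliability; re-execution recovers reliability.
w = 10.0
low_e, full_e = energy(w, 0.5), energy(w, 1.0)
r_one, r_two = reliability(w, 0.5), reliability_with_reexecution(w, 0.5)
```

Lowering the speed cuts energy but lowers single-attempt reliability; a second execution recovers reliability at the price of up to doubled energy and time, which is the tension the tri-criteria formulation captures.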
Efficient Algorithms for Scheduling Moldable Tasks
We study the problem of scheduling n independent moldable tasks on m processors that arises in large-scale parallel computations. When tasks are monotonic, the best known result is a (3/2+ε)-approximation algorithm for makespan minimization, with a complexity linear in n and polynomial in log(m) and 1/ε, where ε > 0 is arbitrarily small. We propose a new perspective on the existing speedup models: the speedup of a task is linear when the number of assigned processors is small (up to a threshold δ), while it presents monotonicity when the processor count ranges between δ and a bound k; the bound k indicates an unacceptable overhead when parallelizing on too many processors. Under this model, we propose an approximation algorithm for makespan minimization whose ratio and complexity depend on the threshold δ. As a by-product, we also propose an approximation algorithm for throughput maximization with a common deadline.
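The two-regime speedup model described above can be sketched as a time function: linear speedup up to a threshold, then monotone but diminishing returns up to a hard cap. The symbols `delta` and `k` follow the abstract; the concrete diminishing-returns formula and the factor `alpha` are assumptions for illustration only.

```python
# Sketch of the two-regime speedup model (illustrative, not the paper's exact
# functions): a task's execution time on p processors scales linearly up to
# delta, then keeps shrinking (monotonically) more slowly up to k, beyond
# which extra processors are not allowed.

def exec_time(work, p, delta, k, alpha=0.5):
    """Time of a moldable task on p processors under the assumed model."""
    if not 1 <= p <= k:
        raise ValueError("p must lie in [1, k]")
    if p <= delta:
        return work / p                          # linear speedup regime
    # Monotonic regime: each extra processor contributes only alpha < 1
    # of a full processor's worth of speedup (assumed concrete form).
    return work / (delta + alpha * (p - delta))
```

With `alpha < 1` the time is non-increasing in `p` while the work `p * t(p)` is non-decreasing beyond `delta`, which is exactly the monotonicity property the abstract relies on.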
GraphLab: A New Framework for Parallel Machine Learning
Designing and implementing efficient, provably correct parallel machine
learning (ML) algorithms is challenging. Existing high-level parallel
abstractions like MapReduce are insufficiently expressive while low-level tools
like MPI and Pthreads leave ML experts repeatedly solving the same design
challenges. By targeting common patterns in ML, we developed GraphLab, which
improves upon abstractions like MapReduce by compactly expressing asynchronous
iterative algorithms with sparse computational dependencies while ensuring data
consistency and achieving a high degree of parallel performance. We demonstrate
the expressiveness of the GraphLab framework by designing and implementing
parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and
Compressed Sensing. We show that using GraphLab we can achieve excellent
parallel performance on large-scale real-world problems.
Scheduling multiple divisible loads on a linear processor network
Min, Veeravalli, and Barlas have recently proposed strategies to minimize the
overall execution time of one or several divisible loads on a heterogeneous
linear network, using one or more installments. We show on a very simple
example that their approach does not always produce a solution and that, when
it does, the solution is often suboptimal. We also show how to find an optimal
schedule for any instance, once the number of installments per load is given.
Then, we formally state that any optimal schedule has an infinite number of
installments under a linear cost model such as the one assumed in the original
papers. Therefore, such a cost model cannot be used to design practical
multi-installment strategies. Finally, through extensive simulations, we
confirm that the best solution is always produced by the linear programming
approach, while the solutions of the original papers can be far from optimal.
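The linear-programming view can be illustrated in a heavily simplified special case: one divisible load on heterogeneous workers with per-unit compute costs and no communication (the linear network and communication costs of the paper are dropped here). In this special case the optimality condition that all participating workers finish simultaneously pins down the load fractions in closed form, so no LP solver is needed.

```python
# Illustrative special case of divisible-load scheduling (our simplification,
# not the paper's model): split a load of size `total` among workers where
# worker i needs w_i time units per unit of load, with no communication cost.
# Optimal one-installment split: all workers finish at the same time, so
# alpha_i is proportional to 1/w_i.

def split_load(total, costs):
    """Fractions alpha_i with alpha_i * w_i equal for all i and
    sum(alpha_i) == total; returns (fractions, makespan)."""
    inv = [1.0 / w for w in costs]
    s = sum(inv)
    alphas = [total * x / s for x in inv]
    makespan = alphas[0] * costs[0]  # identical on every worker by construction
    return alphas, makespan

alphas, T = split_load(100.0, [1.0, 2.0, 4.0])
```

Once communication over a linear network and multiple installments enter the model, this closed form no longer applies, which is where the LP formulation the abstract refers to takes over.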
Scheduling Monotone Moldable Jobs in Linear Time
A moldable job is a job that can be executed on an arbitrary number of
processors, and whose processing time depends on the number of processors
allotted to it. A moldable job is monotone if its work doesn't decrease for an
increasing number of allotted processors. We consider the problem of scheduling
monotone moldable jobs to minimize the makespan.
We argue that for certain compact input encodings a polynomial algorithm has
a running time polynomial in n and log(m), where n is the number of jobs and m
is the number of machines. We describe how monotony of jobs can be used to
counteract the increased problem complexity that arises from compact encodings,
and give tight bounds on the approximability of the problem with compact
encoding: it is NP-hard to solve optimally, but admits a PTAS.
The main focus of this work is efficient approximation algorithms. We
describe different techniques to exploit the monotony of the jobs for better
running times, and present a (3/2+ε)-approximation algorithm whose
running time is polynomial in log(m) and 1/ε, and only linear in the
number n of jobs.
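One way monotony typically helps, sketched here under assumptions of ours rather than as the paper's algorithm: for a candidate makespan T, a non-increasing time function lets a binary search find the cheapest processor count that gets a job under T, and summing the resulting work gives a quick necessary feasibility test to drive a search over T.

```python
# Sketch of a monotony-based feasibility check (our illustration, not the
# paper's algorithm). `time(j, p)` is an assumed job time function,
# non-increasing in the processor count p.

def gamma(time, j, T, m):
    """Smallest p in [1, m] with time(j, p) <= T, or None if even m
    processors are too slow. Binary search is valid because time(j, .)
    is non-increasing (monotony)."""
    if time(j, m) > T:
        return None
    lo, hi = 1, m
    while lo < hi:
        mid = (lo + hi) // 2
        if time(j, mid) <= T:
            hi = mid
        else:
            lo = mid + 1
    return lo

def lower_bound_ok(time, jobs, m, T):
    """Necessary condition for makespan T: with the cheapest feasible
    allotments, the total work must fit into the m*T processor-time area."""
    work = 0.0
    for j in jobs:
        p = gamma(time, j, T, m)
        if p is None:
            return False
        work += p * time(j, p)
    return work <= m * T
```

Note the `log(m)` flavor: each job costs only a binary search over processor counts, which is the kind of saving that makes running times polynomial in log(m) rather than m.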
Scheduling MapReduce Jobs under Multi-Round Precedences
We consider non-preemptive scheduling of MapReduce jobs with multiple tasks
in the practical scenario where each job requires several map-reduce rounds. We
seek to minimize the average weighted completion time and consider scheduling
on identical and unrelated parallel processors. For identical processors, we
present LP-based O(1)-approximation algorithms. For unrelated processors, the
approximation ratio naturally depends on the maximum number of rounds of any
job. Since the number of rounds per job in typical MapReduce algorithms is a
small constant, our scheduling algorithms achieve a small approximation ratio
in practice. For the single-round case, we substantially improve on previously
best known approximation guarantees for both identical and unrelated
processors. Moreover, we conduct an experimental analysis and compare the
performance of our algorithms against a fast heuristic and a lower bound on the
optimal solution, thus demonstrating their promising practical performance.
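The abstract does not specify the fast heuristic used in the comparison; a natural baseline for average weighted completion time is list scheduling in weighted-shortest-processing-time order (Smith's rule), sketched here for independent jobs on identical processors and ignoring the multi-round precedence structure.

```python
import heapq

# WSPT list scheduling on identical machines (a generic baseline, not the
# heuristic from the paper): sort jobs by processing_time / weight and
# greedily place each on the machine that frees up first.

def wspt_schedule(jobs, m):
    """jobs: list of (weight, processing_time) pairs; m: machine count.
    Returns the sum of weighted completion times of the WSPT list schedule."""
    order = sorted(jobs, key=lambda jw: jw[1] / jw[0])  # smallest p/w first
    machines = [0.0] * m         # next free time of each machine (min-heap)
    heapq.heapify(machines)
    total = 0.0
    for weight, proc in order:
        start = heapq.heappop(machines)
        finish = start + proc
        total += weight * finish
        heapq.heappush(machines, finish)
    return total
```

On a single machine WSPT is exactly optimal for weighted completion time; with multiple machines and map-reduce precedences it is only a heuristic, which is why LP-based algorithms with provable O(1) ratios are of interest.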
- …