
    Useful Structures and How to Find Them: Inapproximability and Approximations for Various Variants of the Parallel Task Scheduling Problem

    Get PDF
    In this thesis, we consider the Parallel Task Scheduling problem and several of its variants. This problem and its variations have diverse applications in theory and practice; for example, they appear as sub-problems in higher-dimensional problems. In the Parallel Task Scheduling problem, we are given a set of jobs and a set of identical machines. Each job is a parallel task, i.e., it needs a fixed number of identical machines to be processed. A schedule assigns to each job a set of machines it is processed on and a starting time. It is feasible if at each point in time each machine processes at most one job. In a variant of this problem, called Strip Packing, the identical machines are arranged in a total order, and a job may only allocate machines that are neighboring with respect to this order; this variant is also known as Contiguous Parallel Task Scheduling. In another variant, called Single Resource Constraint Scheduling, we are given an additional constraint on how many jobs can be processed at the same time. For these variants of the Parallel Task Scheduling problem, we consider an extension where the set of machines is grouped into identical clusters; when scheduling a job, we may only allocate machines from a single cluster to process it. For all these problems, we close gaps between the known inapproximability or hardness results and the best known algorithms. For Parallel Task Scheduling, we prove that it is strongly NP-hard if we are given precisely 4 machines. Previously, it was known to be strongly NP-hard for at least 5 machines, while there is an exact pseudo-polynomial time algorithm for up to 3 machines. For Strip Packing, we present a pseudo-polynomial time algorithm with approximation ratio (5/4 + ε) and prove that there is no pseudo-polynomial time algorithm with a ratio less than 5/4 unless P = NP. For Single Resource Constraint Scheduling, no algorithm with a ratio smaller than 3/2 is possible unless P = NP, and we present an algorithm with ratio (3/2 + ε). For the extensions to identical clusters, there can be no approximation algorithm with a ratio smaller than 2 unless P = NP. For the extensions of Strip Packing and Parallel Task Scheduling, 2-approximations were already known, but they have huge worst-case running times. We present 2-approximations with linear running time for the extensions of Strip Packing, Parallel Task Scheduling, and Single Resource Constraint Scheduling whenever at least three clusters are present, and we greatly improve the running time for the case of two clusters. Finally, we consider three variants of Scheduling on Identical Machines with setup times. We present an EPTAS for each of them, which is the best one can hope for, since these problems are strongly NP-hard and therefore admit no FPTAS unless P = NP.
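    The feasibility condition above is simple enough to state in a few lines. Below is a minimal sketch in Python (the representation and the name `is_feasible` are illustrative, not from the thesis); note that the Strip Packing variant would additionally require each job's machines to form a contiguous block, which this count-based check deliberately ignores.

```python
def is_feasible(jobs, starts, m):
    """jobs: list of (processing_time, machines_needed); starts: start time per job.
    A schedule is feasible iff at every moment at most m machines are in use."""
    # Machine usage only changes at job start and end times, so it suffices
    # to check these event points.
    events = sorted({s for s in starts} | {s + p for (p, _), s in zip(jobs, starts)})
    for t in events:
        in_use = sum(q for (p, q), s in zip(jobs, starts) if s <= t < s + p)
        if in_use > m:
            return False
    return True

# Example with m = 4 machines: (processing time, machines needed)
jobs = [(3, 2), (2, 2), (2, 3)]
print(is_feasible(jobs, [0, 0, 3], 4))  # True: jobs 1 and 2 in parallel, job 3 after
print(is_feasible(jobs, [0, 0, 2], 4))  # False: jobs 1 and 3 overlap at t = 2 (5 > 4)
```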

    Scheduling under Linear Constraints

    Full text link
    We introduce a parallel machine scheduling problem in which the processing times of jobs are not given in advance but are determined by a system of linear constraints. The objective is to minimize the makespan, i.e., the maximum job completion time among all feasible choices. This novel problem is motivated by various real-world application scenarios. We discuss the computational complexity and algorithms for various settings of this problem. In particular, we show that if there is only one machine with an arbitrary number of linear constraints, or there is an arbitrary number of machines with no more than two linear constraints, or both the number of machines and the number of linear constraints are fixed constants, then the problem is polynomial-time solvable via solving a series of linear programming problems. If both the number of machines and the number of constraints are inputs of the problem instance, then the problem is NP-hard. We further propose several approximation algorithms for the latter case. (Comment: 21 pages)
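    To make the LP connection concrete, here is a minimal sketch of the single-machine case (the constraint data and the use of scipy.optimize.linprog are illustrative assumptions, not the paper's notation): on one machine every job runs back-to-back, so the makespan equals the sum of processing times and the whole problem collapses to a single linear program.

```python
# Single-machine sketch: processing times p are not fixed but must satisfy
# A @ p <= b and p >= 0; on one machine the makespan equals sum(p), so
# minimizing the makespan is one LP over the choice of processing times.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 0.0],    # p1 + p2 <= 8   (hypothetical constraints)
              [0.0, 1.0, 1.0],    # p2 + p3 <= 6
              [-1.0, 0.0, 0.0]])  # p1 >= 2, written as -p1 <= -2
b = np.array([8.0, 6.0, -2.0])

# Minimize the sum of processing times, i.e. the single-machine makespan.
res = linprog(c=np.ones(3), A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # optimal processing times and minimum makespan
```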

    Towards Optimality in Parallel Scheduling

    Full text link
    To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip rather than single-core performance. In turn, modern jobs are often designed to run on any number of cores. However, to effectively leverage these multi-core chips, one must address the question of how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is an obvious tradeoff: allocating more cores to an individual job reduces the job's runtime, but in turn decreases the efficiency of the overall system. We ask how the system should schedule jobs across cores so as to minimize the mean response time over a stream of incoming jobs. To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. EQUI requires jobs to change their level of parallelization while they run. Since this is not possible for all workloads, we consider a class of "fixed-width" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, it is possible to achieve EQUI's performance without requiring jobs to change their levels of parallelization by using the optimal fixed level of parallelization, k*. We also show how to analytically derive the optimal k* as a function of the system load, the speedup curve, and the job size distribution. In the case where jobs may follow different speedup curves, finding a good scheduling policy is even more challenging. We find that policies like EQUI which performed well in the case of a single speedup function now perform poorly. We propose a very simple policy, GREEDY*, which performs near-optimally when compared to the numerically derived optimal policy.
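    As a rough numerical illustration of deriving a fixed width, one can approximate a fixed-width-k policy by an M/M/(n/k) queue in which each of the n/k slots serves jobs at rate s(k). This queueing abstraction and the speedup curve s(k) = k^0.6 are simplifying assumptions of this sketch, not the paper's model.

```python
import math

def mm_c_response_time(lam, mu, c):
    """Mean response time of an M/M/c queue via the Erlang C formula
    (None if the queue is unstable)."""
    rho = lam / (c * mu)
    if rho >= 1:
        return None
    a = lam / mu                               # offered load
    head = sum(a**i / math.factorial(i) for i in range(c))
    tail = a**c / math.factorial(c)
    p_wait = tail / ((1 - rho) * head + tail)  # P(arriving job must queue)
    return p_wait / (c * mu - lam) + 1 / mu    # mean wait + mean service

def speedup(k):
    return k**0.6          # hypothetical sublinear speedup curve s(k)

n, lam = 64, 8.0           # cores; Poisson arrival rate (job sizes ~ Exp(1))
widths = [k for k in range(1, n + 1) if n % k == 0]
k_star = min(widths, key=lambda k:
             mm_c_response_time(lam, speedup(k), n // k) or float("inf"))
print("optimal fixed width k* =", k_star)
```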

    The Impact of Data Replication on Job Scheduling Performance in Hierarchical Data Grid

    Full text link
    In data-intensive applications, data transfer is a primary cause of job execution delay, and data access time depends on bandwidth. The major bottleneck to supporting fast data access in Grids is the high latency of Wide Area Networks and the Internet. Effective scheduling can reduce the amount of data transferred across the Internet by dispatching a job to where the needed data are present. Another solution is to use a data replication mechanism; the objective of dynamic replication strategies is to reduce file access time, which in turn reduces job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve data access efficiency. We study our approach and evaluate it through simulation. The results show that our algorithm improves performance by 12% over the current strategies. (Comment: 11 pages, 7 figures)
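    As a toy illustration of the scheduling side (the data structures here are hypothetical; the actual HRS hierarchy and replica-selection rules are described in the paper), a data-aware dispatcher sends each job to the site that already stores the largest share of its input files, so only the missing files must cross the wide-area network:

```python
# Toy data-aware dispatch (illustrative; not the HRS algorithm itself):
# pick the site holding the most of the job's input files, and replicate
# only the files that site is missing.

def dispatch(job_files, site_replicas):
    """job_files: set of file ids; site_replicas: {site: set of file ids}.
    Returns (best_site, files_to_replicate)."""
    best_site = max(site_replicas, key=lambda s: len(job_files & site_replicas[s]))
    missing = job_files - site_replicas[best_site]
    return best_site, missing

sites = {"cluster_A": {"f1", "f2"}, "cluster_B": {"f2", "f3", "f4"}}
site, missing = dispatch({"f2", "f3", "f5"}, sites)
print(site, missing)  # cluster_B {'f5'}  -- only f5 crosses the WAN
```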