
    Some combinatorial optimization problems on radio network communication and machine scheduling

    Combinatorial optimization problems from two areas are studied in this dissertation: network communication and machine scheduling. In the network communication area, the complexity of distributed broadcasting and distributed gossiping is studied in the setting of random networks. Two different models are considered. One is random geometric networks, the main model used to study properties of sensor and ad-hoc networks, where n points are randomly placed in a unit square and two points are connected by an edge if they are at most a certain fixed distance r from each other. The other is the so-called line-of-sight network, a model introduced recently by Frieze et al. (SODA'07), in which nodes are randomly placed (each with probability p) on an n x n grid and a node can communicate with all nodes that lie in the same row or column at distance at most r. It is shown that in many scenarios of both models, the random structure of these networks makes it possible to perform distributed gossiping in asymptotically optimal time O(D), where D is the diameter of the network. Simulation results show that most of the algorithms, especially the randomized one, run very fast in practice. In the scheduling area, the first problem is online scheduling of a set of equal-processing-time tasks with precedence constraints so as to minimize the makespan. It is shown that Hu's algorithm yields an asymptotic competitive ratio of 3/2 for intree precedence constraints and an asymptotic competitive ratio of 1 for outtree precedences, and that the Coffman-Graham algorithm yields an asymptotic competitive ratio of 1 for arbitrary precedence constraints on two machines. The second scheduling problem is integrated production and delivery scheduling with disjoint windows. In this problem, each job is associated with a time window and a profit, and a job must be finished within its time window to earn the profit. The objective is to pick a set of jobs and schedule them so as to maximize the total profit. For a single machine and unit profits, an optimal algorithm is proposed; for a single machine and arbitrary profits, a fully polynomial time approximation scheme (FPTAS) is proposed. These algorithms can be extended to multiple machines with approximation ratio less than e/(e - 1). The third scheduling problem is preemptive scheduling with nested and inclusive processing set restrictions, where the objective is to minimize the makespan of the schedule. It is shown that no optimal online algorithm exists even for the case of inclusive processing sets. A linear time optimal algorithm is then given for the case of nested processing sets where all jobs are available for processing at time t = 0, and a more involved algorithm with running time O(n log n) is given that produces schedules that are not only optimal but also maximal. When jobs have different release times, an optimal algorithm is given for the nested case and a faster optimal algorithm is given for the inclusive processing set case.
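
    As a concrete illustration of one of the classical ingredients mentioned above, the following is a minimal Python sketch of Hu's level algorithm for unit-time jobs with intree precedence constraints on m identical machines. The encoding (a succ array giving each job's unique successor, unit processing times) is an assumption made for this example, not the dissertation's notation.

        # Minimal sketch of Hu's level algorithm: unit-time jobs, intree
        # precedence (each job has at most one successor), m identical machines.
        # succ[j] is the successor of job j, or None for a root.
        def hu_schedule(succ, m):
            n = len(succ)
            level = [0] * n                      # length of the chain from j to its root
            def compute_level(j):
                if level[j] == 0:
                    level[j] = 1 if succ[j] is None else 1 + compute_level(succ[j])
                return level[j]
            for j in range(n):
                compute_level(j)

            preds_left = [0] * n                 # number of unscheduled predecessors
            for j in range(n):
                if succ[j] is not None:
                    preds_left[succ[j]] += 1

            ready = {j for j in range(n) if preds_left[j] == 0}
            schedule, t, done = [], 0, 0
            while done < n:
                batch = sorted(ready, key=lambda j: -level[j])[:m]   # highest levels first
                schedule.append((t, batch))
                for j in batch:
                    ready.discard(j)
                    done += 1
                    if succ[j] is not None:
                        preds_left[succ[j]] -= 1
                        if preds_left[succ[j]] == 0:
                            ready.add(succ[j])
                t += 1
            return schedule                      # list of (time slot, jobs started)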

    Scheduling MapReduce Jobs under Multi-Round Precedences

    We consider non-preemptive scheduling of MapReduce jobs with multiple tasks in the practical scenario where each job requires several map-reduce rounds. We seek to minimize the average weighted completion time and consider scheduling on identical and unrelated parallel processors. For identical processors, we present LP-based O(1)-approximation algorithms. For unrelated processors, the approximation ratio naturally depends on the maximum number of rounds of any job. Since the number of rounds per job in typical MapReduce algorithms is a small constant, our scheduling algorithms achieve a small approximation ratio in practice. For the single-round case, we substantially improve on the previously best known approximation guarantees for both identical and unrelated processors. Moreover, we conduct an experimental analysis and compare the performance of our algorithms against a fast heuristic and a lower bound on the optimal solution, thus demonstrating their promising practical performance.
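
    To make the objective concrete, here is a small Python sketch of a classical baseline for total weighted completion time: Smith's weighted-shortest-processing-time rule combined with greedy assignment to the least loaded of m identical machines. This is not the paper's LP-based algorithm, and the (processing_time, weight) job format is an assumption for the example.

        import heapq

        def wspt_total_weighted_completion_time(jobs, m):
            """jobs: list of (processing_time, weight) pairs; m identical machines."""
            order = sorted(jobs, key=lambda pw: pw[0] / pw[1])   # Smith's rule: smallest p/w first
            loads = [0.0] * m                                    # current completion time per machine
            heapq.heapify(loads)
            total = 0.0
            for p, w in order:
                t = heapq.heappop(loads) + p                     # place on the least loaded machine
                total += w * t
                heapq.heappush(loads, t)
            return total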

    Algorithms for Hierarchical and Semi-Partitioned Parallel Scheduling

    We propose a model for scheduling jobs in a parallel machine setting that takes into account the cost of migrations by assuming that the processing time of a job may depend on the specific set of machines among which the job is migrated. For the makespan minimization objective, the model generalizes classical scheduling problems such as unrelated parallel machine scheduling, as well as novel ones such as semi-partitioned and clustered scheduling. In the case of a hierarchical family of machines, we derive a compact integer linear programming formulation of the problem and leverage its fractional relaxation to obtain a polynomial-time 2-approximation algorithm. Extensions that incorporate memory capacity constraints are also discussed.
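
    The classical unrelated-machines special case that this model generalizes can be written as a compact assignment ILP for makespan minimization; a hedged sketch using the open-source PuLP modeller is shown below. The variable names and the use of PuLP are choices made for this example, not the paper's hierarchical formulation.

        import pulp

        def makespan_ilp(p):
            """p[i][j] = processing time of job j on machine i (unrelated machines)."""
            m, n = len(p), len(p[0])
            prob = pulp.LpProblem("makespan", pulp.LpMinimize)
            x = pulp.LpVariable.dicts("x", (range(m), range(n)), cat="Binary")
            C = pulp.LpVariable("C", lowBound=0)
            prob += C                                             # objective: minimize the makespan
            for j in range(n):                                    # each job goes on exactly one machine
                prob += pulp.lpSum(x[i][j] for i in range(m)) == 1
            for i in range(m):                                    # every machine load is bounded by C
                prob += pulp.lpSum(p[i][j] * x[i][j] for j in range(n)) <= C
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            return pulp.value(C)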

    Competitive-Ratio Approximation Schemes for Minimizing the Makespan in the Online-List Model

    We consider online scheduling on multiple machines for jobs arriving one by one with the objective of minimizing the makespan. For any number of identical parallel or uniformly related machines, we provide a competitive-ratio approximation scheme that computes an online algorithm whose competitive ratio is arbitrarily close to the best possible competitive ratio. We also determine this value up to any desired accuracy. This is the first application of competitive-ratio approximation schemes in the online-list model, and it demonstrates the applicability of the concept in different online models. We expect that it fosters further research on other online problems.
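
    For context, the online-list model can be illustrated with Graham's classical greedy rule, which assigns each arriving job irrevocably to the currently least loaded machine and is (2 - 1/m)-competitive on m identical machines; the scheme described above searches, in effect, for online algorithms whose ratio is arbitrarily close to the best achievable one. The sketch below is only this textbook baseline, not the approximation scheme itself.

        def greedy_online_makespan(job_stream, m):
            """Jobs (processing times) arrive one by one and must be assigned immediately."""
            loads = [0.0] * m
            for p in job_stream:
                i = min(range(m), key=lambda k: loads[k])   # least loaded machine
                loads[i] += p                               # irrevocable assignment
            return max(loads)                               # resulting makespan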

    Parallel Machine Scheduling with Nested Processing Set Restrictions and Job Delivery Times

    The problem of scheduling jobs with delivery times on parallel machines is studied, where each job can only be processed on a specific subset of the machines called its processing set. Two distinct processing sets are either nested or disjoint; that is, they do not partially overlap. All jobs are available for processing at time 0. The goal is to minimize the time by which all jobs are delivered, which, from the optimization viewpoint, is equivalent to minimizing the maximum lateness. A list scheduling approach is analyzed and shown to have an approximation ratio of 2. In addition, a polynomial time approximation scheme is derived.
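
    A hedged sketch of a generic list-scheduling rule for this setting is given below: jobs are ordered by non-increasing delivery time and each one is placed on the least loaded machine in its processing set, after which the objective (the time by which all jobs are delivered) is read off. The job encoding is an assumption for the example, and the rule analyzed in the paper may differ in its details.

        def list_schedule_delivery(jobs, m):
            """jobs: list of (processing_time, delivery_time, processing_set),
            where processing_set is a set of eligible machine indices."""
            loads = [0.0] * m
            delivered_by = 0.0
            for p, q, eligible in sorted(jobs, key=lambda j: -j[1]):   # largest delivery time first
                i = min(eligible, key=lambda k: loads[k])              # least loaded eligible machine
                loads[i] += p
                delivered_by = max(delivered_by, loads[i] + q)         # completion plus delivery time
            return delivered_by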

    A deterministic truthful PTAS for scheduling related machines

    Scheduling on related machines (Q||C_max) is one of the most important problems in the field of Algorithmic Mechanism Design. Each machine is controlled by a selfish agent and her valuation can be expressed via a single parameter, her speed. In contrast to other similar problems, Archer and Tardos [AT01] showed that an algorithm that minimizes the makespan can be truthfully implemented, although in exponential time. On the other hand, if we leave out the game-theoretic issues, the complexity of the problem has been completely settled: the problem is strongly NP-hard, while a PTAS exists [HS88, ES04]. This is the most well-studied problem in single-parameter algorithmic mechanism design, and it provides excellent ground for exploring the boundary between truthfulness and efficient computation. Since the work of Archer and Tardos, quite a few deterministic and randomized mechanisms have been suggested. Recently, a breakthrough result [DDDR08] showed that a randomized truthful PTAS exists. For the deterministic case, on the other hand, the best known approximation factor is 2.8 [Kov05, Kov07]. It has been a major open question whether there exists a deterministic truthful PTAS, or whether truthfulness has an essential, negative impact on the computational complexity of the problem. In this paper we give a definitive answer to this important question by providing a truthful deterministic PTAS.
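
    The truthfulness question above hinges on the Archer-Tardos characterization for single-parameter agents: an allocation rule admits truthful payments if and only if it is monotone, i.e., the total work assigned to a machine never decreases when that machine alone reports a higher speed. The sketch below is only an empirical check of this property for an arbitrary allocation function; the alloc(speeds, jobs) interface is an assumption made for the example.

        def is_monotone_for_machine(alloc, speeds, jobs, machine, trial_speeds, eps=1e-9):
            """alloc(speeds, jobs) -> list of total work assigned to each machine."""
            prev_work = None
            for s in sorted(trial_speeds):                  # increasing reported speed
                reported = list(speeds)
                reported[machine] = s
                work = alloc(reported, jobs)[machine]
                if prev_work is not None and work < prev_work - eps:
                    return False                            # work dropped as speed increased
                prev_work = work
            return True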

    Experimental Analysis of Algorithms for Coflow Scheduling

    Modern data centers face new scheduling challenges in optimizing job-level performance objectives, where a significant challenge is the scheduling of highly parallel data flows with a common performance goal (e.g., the shuffle operations in MapReduce applications). Chowdhury and Stoica introduced the coflow abstraction to capture these parallel communication patterns, and Chowdhury et al. proposed effective heuristics to schedule coflows efficiently. In our previous paper, we considered the strongly NP-hard problem of minimizing the total weighted completion time of coflows with release dates, and developed the first polynomial-time scheduling algorithms with O(1)-approximation ratios. In this paper, we carry out a comprehensive experimental analysis on a Facebook trace and extensive simulated instances to evaluate the practical performance of several algorithms for coflow scheduling, including the approximation algorithms developed in our previous paper. Our experiments suggest that simple algorithms provide effective approximations of the optimum, and that the performance of our approximation algorithms is relatively robust, near optimal, and always among the best compared with the other algorithms, in both the offline and online settings.

    Control-based Scheduling in a Distributed Stream Processing System

    Stream processing systems receive continuous streams of messages with raw information and produce streams of messages with processed information. The utility of a stream-processing system depends, in part, on the accuracy and timeliness of the output. Streams in complex event processing systems are processed on distributed systems; several steps are taken on different processors to process each incoming message, and messages may be enqueued between steps. This paper deals with the problem of distributed dynamic control of streams to optimize the total utility provided by the system. A challenge of distributed control is that the timeliness of the output depends only on the total end-to-end time and is otherwise independent of the delays at each separate processor, whereas the controller for each processor can only act on the steps running on that processor and cannot directly control the entire network. This paper identifies key problems in distributed control and analyzes two scheduling algorithms that help in an initial analysis of a difficult problem.

    An EPTAS for machine scheduling with bag-constraints

    Machine scheduling is a fundamental optimization problem in computer science. The task of scheduling a set of jobs on a given number of machines so as to minimize the makespan is well studied and, among other results, EPTASs are known for machine scheduling on identical machines. Das and Wiese initiated the study of a generalization of makespan minimization that includes so-called bag constraints. In this variant of machine scheduling, the given set of jobs is partitioned into subsets, called bags, and a schedule is considered feasible only if every machine runs at most one job from each bag. Das and Wiese showed that this variant admits a PTAS. We improve on this result by giving the first EPTAS for the machine scheduling problem with bag constraints. We achieve this result through new insights into the problem and the restrictions imposed by the bag constraints. We show that, to obtain an approximate solution, we can relax the bag constraints and ignore some of the restrictions. Our EPTAS uses a new instance transformation that allows us to schedule large and small jobs independently of each other for a majority of the bags. We also show that, when scheduling large jobs, it suffices to respect the bag constraints among only a constant number of bags. With these observations, our algorithm allows some conflicts when computing a schedule, and we show how to repair the schedule in polynomial time by swapping certain jobs.
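
    The feasibility condition defined above is easy to state in code: a schedule respects the bag constraints only if every machine runs at most one job from each bag. The encodings (dicts mapping jobs to machines and to bags) are assumptions made for this small sketch.

        from collections import defaultdict

        def respects_bag_constraints(machine_of, bag_of):
            """machine_of: job -> machine; bag_of: job -> bag identifier."""
            bags_on_machine = defaultdict(set)
            for job, machine in machine_of.items():
                bag = bag_of[job]
                if bag in bags_on_machine[machine]:
                    return False                 # two jobs from the same bag on one machine
                bags_on_machine[machine].add(bag)
            return True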