
    New Results on Online Resource Minimization

    Full text link
    We consider the online resource minimization problem in which jobs with hard deadlines arrive online over time at their release dates. The task is to determine a feasible schedule on a minimum number of machines. We rigorously study this problem and derive various algorithms with small constant competitive ratios for interesting restricted problem variants. As the most important special case, we consider scheduling jobs with agreeable deadlines. We provide the first constant-ratio competitive algorithm for the non-preemptive setting, which is of particular interest with regard to the known strong lower bound of n for the general problem. For the preemptive setting, we show that the natural algorithm LLF achieves a constant ratio for agreeable jobs, while for general jobs it has a lower bound of Omega(n^(1/3)). We also give an O(log n)-competitive algorithm for the general preemptive problem, which improves upon the known O(p_max/p_min)-competitive algorithm. Our algorithm maintains a dynamic partition of the job set into loose and tight jobs and schedules each (temporal) subset individually on separate sets of machines. The key is a characterization of how the decrease in the relative laxity of jobs influences the optimum number of machines. To achieve this, we derive a compact expression of the optimum value, which might be of independent interest. We complement the general algorithmic result with lower bounds that rule out a similar performance guarantee for other known algorithms.
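    The LLF rule mentioned above is simple to state: at every moment, run the jobs with the smallest laxity (deadline minus current time minus remaining work). Below is a minimal, illustrative sketch of a preemptive LLF simulation on a fixed number of machines with unit time steps; the data layout and function names are our own and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: int     # release date r_j
    deadline: int    # hard deadline d_j
    remaining: int   # remaining processing time

def llf_schedule(jobs, m, horizon):
    """Preemptive Least Laxity First on m machines, unit time steps.

    At each step, the m released, unfinished jobs with the smallest
    laxity (deadline - current time - remaining work) each receive one
    unit of processing. Returns False if a deadline can no longer be met.
    The horizon should cover the latest deadline.
    """
    for t in range(horizon):
        ready = [j for j in jobs if j.release <= t and j.remaining > 0]
        if any(j.deadline - t < j.remaining for j in ready):
            return False                                   # unavoidable deadline miss
        ready.sort(key=lambda j: j.deadline - t - j.remaining)  # smallest laxity first
        for j in ready[:m]:                                # run the m most urgent jobs
            j.remaining -= 1
    return all(j.remaining == 0 for j in jobs)
```

    An offline wrapper for the resource-minimization objective could, for instance, binary-search over m and rerun this simulation; the paper's online algorithms are considerably more involved than this baseline.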

    Scheduling of unit-length jobs with bipartite incompatibility graphs on four uniform machines

    Full text link
    In the paper we consider the problem of scheduling n identical jobs on 4 uniform machines with speeds s_1 >= s_2 >= s_3 >= s_4, respectively. Our aim is to find a schedule with a minimum possible length. We assume that jobs are subject to some kind of mutual exclusion constraints modeled by a bipartite incompatibility graph of degree Delta, where two incompatible jobs cannot be processed on the same machine. We show that the problem is NP-hard even if s_1 = s_2 = s_3. If, however, Delta <= 4 and s_1 >= 12 s_2, s_2 = s_3 = s_4, then the problem can be solved to optimality in time O(n^1.5). The same algorithm returns a solution of value at most 2 times optimal provided that s_1 >= 2 s_2. Finally, we study the case s_1 >= s_2 >= s_3 = s_4 and give an O(n^1.5)-time 32/15-approximation algorithm in all such situations.
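    The objective and the incompatibility constraint in this model are easy to state in code. The following small sketch (not the paper's O(n^1.5) algorithm; all names are illustrative) checks whether an assignment of unit jobs to uniform machines respects a given incompatibility graph and computes the resulting schedule length.

```python
def schedule_length(assignment, speeds):
    """Length of a schedule of unit-length jobs on uniform machines.

    assignment: list where assignment[j] is the machine index of job j
    speeds:     machine speeds s_1 >= s_2 >= ...
    A machine running c unit jobs at speed s finishes at time c / s.
    """
    loads = [0] * len(speeds)
    for machine in assignment:
        loads[machine] += 1
    return max(load / s for load, s in zip(loads, speeds))

def respects_incompatibilities(assignment, incompatible_pairs):
    """Check that no two incompatible jobs share a machine."""
    return all(assignment[u] != assignment[v] for u, v in incompatible_pairs)
```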

    Online makespan scheduling with job migration on uniform machines

    Get PDF
    In the classic minimum makespan scheduling problem, we are given an input sequence of n jobs with sizes. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we allow the online algorithm to reassign up to k jobs to different machines in the final assignment. For m identical machines, Albers and Hellwig (Algorithmica, 2017) give tight bounds on the competitive ratio in this model. The precise ratio depends on, and increases with, m. It lies between 4/3 and approximately 1.4659. They show that k = O(m) is sufficient to achieve this bound and no k = o(n) can result in a better bound. We study m uniform machines, i.e., machines with different speeds, and show that this setting is strictly harder. For sufficiently large m, there is a delta = Theta(1) such that, for m machines with only two different machine speeds, no online algorithm can achieve a competitive ratio of less than 1.4659 + delta with k = o(n). We present a new algorithm for the uniform machine setting. Depending on the speeds of the machines, our scheduling algorithm achieves a competitive ratio that lies between 4/3 and approximately 1.7992 with k = O(m). We also show that k = Omega(m) is necessary to achieve a competitive ratio below 2. Our algorithm is based on a subtle imbalance with respect to the completion times of the machines, complemented by a bicriteria approximation algorithm that minimizes the makespan and maximizes the average completion time for certain sets of machines.
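    For context, the natural online baseline on uniform machines (without any migration) places each arriving job on the machine that would complete it earliest. The sketch below shows this baseline only; it is not the paper's imbalance-based algorithm, and the names are illustrative.

```python
def greedy_uniform_assign(job_sizes, speeds):
    """Greedy online baseline on uniform machines (no migration).

    Each arriving job of size p goes to the machine i minimizing
    (loads[i] + p) / speeds[i], i.e. its completion time after the job.
    Returns the per-machine loads and the resulting makespan.
    """
    loads = [0.0] * len(speeds)
    for p in job_sizes:
        i = min(range(len(speeds)), key=lambda i: (loads[i] + p) / speeds[i])
        loads[i] += p
    makespan = max(load / s for load, s in zip(loads, speeds))
    return loads, makespan
```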

    Impliance: A Next Generation Information Management Appliance

    Full text link
    ably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements, namely the ideas of: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data - unstructured as well as structured - in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises. Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.

    Cardinality Constrained Scheduling in Online Models

    Get PDF
    Makespan minimization on parallel identical machines is a classical and intensively studied problem in scheduling, and a classic example for online algorithm analysis with Graham's famous list scheduling algorithm dating back to the 1960s. In this problem, jobs arrive over a list and upon an arrival, the algorithm needs to assign the job to a machine. The goal is to minimize the makespan, that is, the maximum machine load. In this paper, we consider the variant with an additional cardinality constraint: The algorithm may assign at most k jobs to each machine where k is part of the input. While the offline (strongly NP-hard) variant of cardinality constrained scheduling is well understood and an EPTAS exists here, no non-trivial results are known for the online variant. We fill this gap by making a comprehensive study of various different online models. First, we show that there is a constant competitive algorithm for the problem and further, present a lower bound of 2 on the competitive ratio of any online algorithm. Motivated by the lower bound, we consider a semi-online variant where upon arrival of a job of size p, we are allowed to migrate jobs of total size at most a constant times p. This constant is called the migration factor of the algorithm. Algorithms with small migration factors are a common approach to bridge the performance of online algorithms and offline algorithms. One can obtain algorithms with a constant migration factor by rounding the size of each incoming job and then applying an ordinal algorithm to the resulting rounded instance. With this in mind, we also consider the framework of ordinal algorithms and characterize the competitive ratio that can be achieved using the aforementioned approaches. Comment: An extended abstract will appear in the proceedings of STACS'2
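    To make the cardinality constraint concrete, here is a minimal sketch of the natural greedy online rule for this setting: place each arriving job on the least loaded machine that still holds fewer than k jobs. This is a baseline for illustration, not the constant-competitive algorithm from the paper, and the names are our own.

```python
def online_cardinality_greedy(job_sizes, m, k):
    """Greedy online rule for makespan minimization with at most k jobs per machine.

    Assumes the instance is feasible, i.e. len(job_sizes) <= m * k,
    so an open machine always exists. Returns the greedy makespan.
    """
    loads = [0.0] * m   # total size assigned to each machine
    counts = [0] * m    # number of jobs on each machine
    for p in job_sizes:
        open_machines = [i for i in range(m) if counts[i] < k]
        i = min(open_machines, key=lambda i: loads[i])  # least loaded open machine
        loads[i] += p
        counts[i] += 1
    return max(loads)
```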

    Scheduling with processing set restrictions: a survey

    Get PDF

    Dynamic Windows Scheduling with Reallocation

    Full text link
    We consider the Windows Scheduling problem. The problem is a restricted version of Unit-Fractions Bin Packing, and it is also called Inventory Replenishment in the context of Supply Chain. In brief, the problem is to schedule the use of communication channels to clients. Each client c_i is characterized by an active cycle and a window w_i. During the period of time that any given client c_i is active, there must be at least one transmission from c_i scheduled in any w_i consecutive time slots, but at most one transmission can be carried out in each channel per time slot. The goal is to minimize the number of channels used. We extend previous online models, where decisions are permanent, assuming that clients may be reallocated at some cost. We assume that such cost is a constant amount paid per reallocation. That is, we aim also to minimize the number of reallocations. We present three online reallocation algorithms for Windows Scheduling. We evaluate these protocols experimentally, showing that, in practice, all three achieve constant amortized reallocations with close to optimal channel usage. Our simulations also expose interesting trade-offs between reallocations and channel usage. We introduce a new objective function for WS with reallocations, which can also be applied to models where reallocations are not possible. We analyze this metric for one of the algorithms, which, to the best of our knowledge, is the first online WS protocol with theoretical guarantees that applies to scenarios where clients may leave and whose analysis is against current load rather than peak load. Using previous results, we also observe bounds on channel usage for one of the algorithms. Comment: 6 figures
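    The window constraint described above is straightforward to verify for a single client: between any two consecutive transmissions (and at the boundaries of the active cycle) the gap may not exceed w_i slots. The following sketch is an illustrative feasibility check, not one of the paper's reallocation algorithms; all names are our own.

```python
def satisfies_window(transmissions, window, active_start, active_end):
    """Check the Windows Scheduling constraint for one client.

    transmissions: time slots in which the client is scheduled
    window:        the client's window w_i
    [active_start, active_end]: the client's active cycle (inclusive)
    Every run of `window` consecutive slots inside the active cycle
    must contain at least one transmission.
    """
    slots = sorted(t for t in transmissions if active_start <= t <= active_end)
    prev = active_start - 1                  # slot just before the cycle starts
    for t in slots + [active_end + 1]:       # sentinel closes the last gap
        if t - prev > window:                # a gap longer than w_i slots
            return False
        prev = t
    return True
```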