
    Improved bounds for speed scaling in devices obeying the cube-root rule

    Speed scaling is a power management technology that involves dynamically changing the speed of a processor. This technology gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and to optimize some Quality of Service (QoS) measure of the resulting schedule. In the most investigated speed scaling problem in the literature, the QoS constraint is deadline feasibility, and the objective is to minimize the energy used. The standard assumption is that the processor power is of the form s^a, where s is the processor speed and a > 1 is some constant; a ≈ 3 for CMOS-based processors. In this paper we introduce and analyze a natural class of speed scaling algorithms that we call qOA. The algorithm qOA sets the speed of the processor to be q times the speed at which the optimal offline algorithm would run the jobs in the current state. When a = 3, we show that qOA is 6.7-competitive, improving upon the previous best guarantee of 27 achieved by the algorithm Optimal Available (OA). We also give almost matching upper and lower bounds for qOA for general a. Finally, we give the first non-trivial lower bound, namely e^(a-1)/a, on the competitive ratio of a general deterministic online algorithm for this problem.
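
    A minimal sketch of the speed rule described above, under the usual characterization of Optimal Available (OA): at any moment, OA runs at the maximum "density" of unfinished work, i.e., the maximum over future deadlines d of (unfinished work due by d)/(d - now), and qOA scales that speed by the constant q. The function and variable names below are illustrative, not taken from the paper.

        # Illustrative sketch: qOA runs q times faster than Optimal Available (OA).
        # Assumption: OA's speed equals the maximum density of unfinished work,
        # i.e. max over deadlines d of (work due by d) / (d - now).

        def oa_speed(now, jobs):
            """jobs: list of (remaining_work, deadline) pairs with deadline > now."""
            return max(
                sum(w for w, d in jobs if d <= deadline) / (deadline - now)
                for _, deadline in jobs
            )

        def qoa_speed(now, jobs, q=2.0):
            """Run q times faster than OA would on the current unfinished work."""
            return q * oa_speed(now, jobs) if jobs else 0.0

        # Two unfinished jobs at time 0: OA speed is max(4/2, 7/10) = 2, so qOA runs at 4.
        print(qoa_speed(0.0, [(4.0, 2.0), (3.0, 10.0)], q=2.0))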

    Divorcing made easy

    We discuss the proportionally fair allocation of a set of indivisible items to k agents. We assume that each agent specifies only a ranking of the items from best to worst. Agents do not specify their valuations of the items. An allocation is proportionally fair if all agents believe that they have received their fair share of the value according to how they value the items. We give simple conditions (and a fast algorithm) for determining whether the agents' rankings give sufficient information to determine a proportionally fair allocation. An important special case is a divorce situation with two agents. For such a divorce situation, we provide a particularly simple allocation rule that should have applications in the real world.
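
    One way to make the ordinal setting concrete (a sketch under stated assumptions, not the paper's algorithm): with two agents and additive, non-negative valuations consistent with a ranking, a bundle is guaranteed at least half of an agent's total value for every consistent valuation exactly when every prefix of the ranking contains at least as many items inside the bundle as outside it. The function below checks that prefix condition; the names are illustrative.

        # Sketch (not the paper's algorithm): an ordinal proportionality test for
        # two agents, assuming additive, non-negative valuations consistent with
        # the ranking. The bundle is safe iff, in every prefix of the ranking, at
        # least half of the items belong to the bundle.

        def is_necessarily_proportional(ranking, bundle):
            """ranking: items from best to worst; bundle: set of items the agent receives."""
            in_bundle = 0
            for t, item in enumerate(ranking, start=1):
                if item in bundle:
                    in_bundle += 1
                if 2 * in_bundle < t:   # fewer than half of the agent's top-t items
                    return False
            return True

        # The agent ranks a > b > c > d.
        print(is_necessarily_proportional(["a", "b", "c", "d"], {"a", "c"}))  # True
        print(is_necessarily_proportional(["a", "b", "c", "d"], {"c", "d"}))  # False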

    Server scheduling in the Lp norm: A rising tide lifts all boat

    Often server systems do not implement the best known algorithms for optimizing average Quality of Service (QoS) out of concern that these algorithms may be insufficiently fair to individual jobs. The standard method for balancing average QoS and fairness is to optimize the Lp metric, 1 < p < ∞. Thus we consider server scheduling strategies to optimize the Lp norms of the standard QoS measures, flow and stretch. We first show that there is no n^o(1)-competitive online algorithm for the Lp norms of either flow or stretch. We then show that the standard clairvoyant algorithms for optimizing average QoS, SJF and SRPT, are (1+ε)-speed O(1/ε)-competitive for the Lp norms of flow and stretch, and that the standard nonclairvoyant algorithm for optimizing average QoS, SETF, is (1+ε)-speed O(1/ε^(2+2/p))-competitive for the Lp norms of flow. These results argue that these standard algorithms will not starve jobs until the system is near peak capacity. In contrast, we show that the Round Robin, or Processor Sharing, algorithm, which is sometimes adopted because of its seeming fairness properties, is not (1+ε)-speed n^o(1)-competitive for sufficiently small ε.
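
    To make the objective concrete (an illustrative simulation, not code from the paper): the Lp norm of flow is (Σ F_i^p)^(1/p), where F_i is the time job i spends in the system. The sketch below simulates SRPT on one unit-speed machine in small discrete time steps; the step size dt is a simulation simplification.

        # Illustrative sketch: Lp norm of flow time under SRPT on a unit-speed machine.

        def srpt_lp_flow(jobs, p=2, dt=0.01):
            """jobs: list of (release_time, work). Returns the Lp norm of the flow times."""
            remaining = [w for _, w in jobs]
            completion = {}
            t = 0.0
            while len(completion) < len(jobs):
                active = [i for i in range(len(jobs))
                          if i not in completion and jobs[i][0] <= t]
                if active:
                    j = min(active, key=lambda i: remaining[i])  # shortest remaining work
                    remaining[j] -= dt
                    if remaining[j] <= 1e-9:
                        completion[j] = t + dt
                t += dt
            flows = [completion[i] - jobs[i][0] for i in range(len(jobs))]
            return sum(f ** p for f in flows) ** (1.0 / p)

        print(round(srpt_lp_flow([(0.0, 2.0), (0.5, 0.5), (1.0, 1.0)], p=2), 2))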

    Approximation schemes for a class of subset selection problems

    In this paper we develop an easily applicable algorithmic technique/tool for developing approximation schemes for certain types of combinatorial optimization problems. Special cases that are covered by our result show up in many places in the literature. For every such special case, a particular rounding trick has been implemented in a slightly different way, with slightly different arguments, and with slightly different worst case estimations. Usually, the rounding procedure depended on certain upper or lower bounds on the optimal objective value that have to be justified in a separate argument. Our easily applied result unifies many of these results, and sometimes it even leads to a simpler proof. We demonstrate how our result can be easily applied to a broad family of combinatorial optimization problems. As a special case, we derive the existence of an FPTAS for the scheduling problem of minimizing the weighted number of late jobs under release dates and preemption on a single machine. The approximability status of this problem has been open for some time.
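
    The profit-scaling FPTAS for 0/1 knapsack is a well-known example of the kind of rounding the abstract refers to: scale the values down, solve the rounded instance exactly by dynamic programming, and lose at most an ε fraction of the optimum (under the usual assumption that each item fits the knapsack on its own). The sketch below shows only that standard trick; it is not the construction from the paper.

        # Classic value-rounding FPTAS for 0/1 knapsack, shown as a familiar
        # instance of the rounding trick; not the paper's construction.

        def knapsack_fptas(profits, weights, capacity, eps=0.1):
            n = len(profits)
            scale = eps * max(profits) / n               # rounding granularity
            scaled = [int(p // scale) for p in profits]  # total rounding loss <= eps * OPT
            top = sum(scaled)
            INF = float("inf")
            # min_weight[v] = minimum weight needed to reach scaled profit exactly v
            min_weight = [0.0] + [INF] * top
            for sp, w in zip(scaled, weights):
                for v in range(top, sp - 1, -1):         # reverse order: each item used once
                    if min_weight[v - sp] + w < min_weight[v]:
                        min_weight[v] = min_weight[v - sp] + w
            best = max(v for v in range(top + 1) if min_weight[v] <= capacity)
            return best * scale                          # scaled-back profit of the chosen set

        print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))  # 220.0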

    Server scheduling to balance priorities, fairness, and average quality of service

    Often server systems do not implement the best known algorithms for optimizing average Quality of Service (QoS) out of concern that these algorithms may be insufficiently fair to individual jobs. The standard method for balancing average QoS and fairness is to optimize the ℓ_p norm, 1 < p < ∞.

    Competitive algorithms for due date scheduling

    We consider several online scheduling problems that arise when customers request make-to-order products from a company. At the time of the order, the company must quote a due date to the customer. To satisfy the customer, the company must produce the good by the due date. The company must have an online algorithm with two components: the first component sets the due dates, and the second component schedules the resulting jobs with the goal of meeting the due dates. The most basic quality of service measure for a job is the quoted lead time, which is the difference between the due date and the release time. We first consider the basic problem of minimizing the average quoted lead time. We show that there is a (1+ε)-speed O((log k)/ε)-competitive algorithm for this problem (here k is the ratio of the maximum work of a job to the minimum work of a job), and that this algorithm is essentially optimally competitive. This result extends to the case that each job has a weight and the objective is weighted quoted lead time. We then introduce the following general setting: there is a non-increasing profit function p_i(t) associated with each job J_i. If the customer for job J_i is quoted a due date of d_i, then the profit obtained from completing this job by its due date is p_i(d_i). We consider the objective of maximizing profits. We show that if the company must finish each job by its due date, then there is no O(1)-speed poly-log-competitive algorithm. However, if the company can miss the due date of a job, at the cost of forgoing the profits from that job, then we show that there is a (1+ε)-speed O(1 + 1/ε)-competitive algorithm, and that this algorithm is essentially optimally competitive.
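
    For concreteness, here is a small bookkeeping sketch of the two objectives described above (illustrative names, not the paper's algorithms): the quoted lead time of a job is its quoted due date minus its release time, and in the profit setting a job contributes p_i(d_i) only if it is completed by its quoted due date.

        # Illustrative bookkeeping for the two objectives in the abstract:
        # average quoted lead time, and profit collected from jobs finished on time.

        def average_quoted_lead_time(jobs):
            """jobs: list of dicts with 'release' and 'due' (the quoted due date)."""
            return sum(j["due"] - j["release"] for j in jobs) / len(jobs)

        def total_profit(jobs):
            """A job's non-increasing profit function of its due date is paid only
            if the job is completed by that quoted due date."""
            return sum(j["profit"](j["due"]) for j in jobs
                       if j["completion"] <= j["due"])

        jobs = [
            {"release": 0.0, "due": 4.0, "completion": 3.0, "profit": lambda d: max(0.0, 10.0 - d)},
            {"release": 1.0, "due": 3.0, "completion": 5.0, "profit": lambda d: max(0.0, 10.0 - d)},
        ]
        print(average_quoted_lead_time(jobs))  # (4 + 2) / 2 = 3.0
        print(total_profit(jobs))              # only the first job finishes on time: 10 - 4 = 6.0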

    Speed scaling with an arbitrary power function

    This article initiates a theoretical investigation into online scheduling problems with speed scaling in which the allowable speeds may be discrete and the power function may be arbitrary, and it develops algorithmic analysis techniques for this setting. We show that a natural algorithm, which uses Shortest Remaining Processing Time for scheduling and sets the power to be one more than the number of unfinished jobs, is 3-competitive for the objective of total flow time plus energy. We also show that another natural algorithm, which uses Highest Density First for scheduling and sets the power to be the fractional weight of the unfinished jobs, is a 2-competitive algorithm for the objective of fractional weighted flow time plus energy.
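
    A minimal simulation sketch of the first algorithm mentioned above, under illustrative assumptions (a strictly increasing power function, continuous speeds, small discrete time steps): run the job with the shortest remaining work, at the speed whose power equals one more than the number of unfinished jobs. The function names and the numeric inversion are illustrative details, not from the paper.

        # Sketch under illustrative assumptions: SRPT for job selection, power set
        # to (number of unfinished released jobs) + 1, speed obtained by numerically
        # inverting an example power function P. Returns total flow time plus energy.

        def invert_power(P, target, hi=1e6, iters=100):
            """Binary search for the speed s with P(s) = target; assumes P is increasing."""
            lo = 0.0
            for _ in range(iters):
                mid = (lo + hi) / 2
                if P(mid) < target:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2

        def simulate(jobs, P, dt=0.001):
            """jobs: list of (release_time, work)."""
            remaining = [w for _, w in jobs]
            done = [False] * len(jobs)
            t = flow = energy = 0.0
            while not all(done):
                alive = [i for i in range(len(jobs)) if not done[i] and jobs[i][0] <= t]
                if alive:
                    power = len(alive) + 1
                    speed = invert_power(P, power)
                    j = min(alive, key=lambda i: remaining[i])  # shortest remaining work
                    remaining[j] -= speed * dt
                    flow += len(alive) * dt                     # each unfinished job accrues flow
                    energy += power * dt
                    if remaining[j] <= 0:
                        done[j] = True
                t += dt
            return flow + energy

        print(round(simulate([(0.0, 1.0), (0.2, 0.5)], P=lambda s: s ** 3), 2))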

    Speed scaling for weighted flow time

    In addition to the traditional goal of efficiently managing time and space, many computers now need to efficiently manage power usage. For example, Intel's SpeedStep and AMD's PowerNOW technologies allow the Windows XP operating system to dynamically change the speed of the processor to prolong battery life. In this setting, the operating system must not only have a job selection policy to determine which job to run, but also a speed scaling policy to determine the speed at which the job will be run. These policies must be online since the operating system does not in general have knowledge of the future. In current CMOS-based processors, the speed satisfies the well-known cube-root rule: the speed is approximately the cube root of the power [Mud01, BBS+00]. Thus, in this work, we make the standard generalization that the power is equal to the speed raised to some power a > 1, where one should think of a as being approximately 3 [YDS95, BKP04]. Energy is power integrated over time. The operating system is faced with a dual-objective optimization problem, as it both wants to conserve energy and to optimize some Quality of Service (QoS) measure of the resulting schedule.
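
    As a quick worked example of the trade-off described above (arithmetic only, not from the paper): with power s^a and a = 3, running w units of work at constant speed s takes w/s time and uses w·s^(a-1) energy, so halving the speed doubles the running time but cuts the energy by a factor of four.

        # Worked example of the cube-root rule trade-off: power = speed ** a with a = 3.

        def time_and_energy(work, speed, a=3):
            t = work / speed                 # slower speed -> longer running time
            return t, (speed ** a) * t       # energy = power integrated over time

        for s in (2.0, 1.0, 0.5):
            t, e = time_and_energy(work=4.0, speed=s)
            print(f"speed {s}: time {t}, energy {e}")
        # Each halving of the speed doubles the time and divides the energy by 2 ** (a - 1) = 4.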