    Energy-efficient algorithms for non-preemptive speed-scaling

    We improve complexity bounds for energy-efficient speed scheduling problems for both the single processor and multi-processor cases. Energy conservation has become a major concern, so revisiting traditional scheduling problems to take into account the energy consumption has been part of the agenda of the scheduling community for the past few years. We consider the energy-minimizing speed scaling problem introduced by Yao et al., where we wish to schedule a set of jobs, each with a release date, deadline and work volume, on a set of identical processors. The processors may change speed as a function of time, and the energy they consume is the $\alpha$-th power of their speed. The objective is then to find a feasible schedule which minimizes the total energy used. We show that in the setting with an arbitrary number of processors where all work volumes are equal, there is a $2(1+\varepsilon)(5(1+\varepsilon))^{\alpha-1}\tilde{B}_{\alpha}=O_{\alpha}(1)$ approximation algorithm, where $\tilde{B}_{\alpha}$ is the generalized Bell number. This is the first constant-factor algorithm for this problem. This algorithm extends to general unequal processor-dependent work volumes, up to losing a factor of $(\frac{(1+r)r}{2})^{\alpha}$ in the approximation, where $r$ is the maximum ratio between two work volumes. We then show this latter problem is APX-hard, even in the special case when all release dates and deadlines are equal and $r$ is 4. In the single processor case, we introduce a new linear programming formulation of speed scaling and prove that its integrality gap is at most $12^{\alpha-1}$. As a corollary, we obtain a $(12(1+\varepsilon))^{\alpha-1}$ approximation algorithm for the single processor case, improving on the previous best bound of $2^{\alpha-1}(1+\varepsilon)^{\alpha}\tilde{B}_{\alpha}$ when $\alpha \ge 25$.
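
    As a quick, hedged illustration of the energy model shared by the abstracts in this listing: by convexity of the power function, a single job is cheapest at the constant speed that exactly fills its window, giving energy $w^{\alpha}/T^{\alpha-1}$ for work $w$ in a window of length $T$. The sketch below assumes $\alpha = 3$ purely for illustration; it is not taken from any of the papers above.

```python
# Minimal sketch of the speed-scaling energy model of Yao et al.:
# running at speed s for time t consumes t * s**alpha energy, so a job
# of work w confined to a window of length T is cheapest at the constant
# speed w / T (by convexity of s**alpha).

def single_job_energy(w: float, T: float, alpha: float = 3.0) -> float:
    """Minimum energy to process work w within a window of length T."""
    speed = w / T                   # constant speed is optimal by convexity
    return T * speed ** alpha       # equals w**alpha / T**(alpha - 1)

if __name__ == "__main__":
    # Halving the available time multiplies the energy by 2**(alpha - 1).
    print(single_job_energy(w=4.0, T=2.0))   # 2 * 2**3 = 16
    print(single_job_energy(w=4.0, T=1.0))   # 1 * 4**3 = 64
```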

    Energy Efficient Scheduling and Routing via Randomized Rounding

    We propose a unifying framework based on configuration linear programs and randomized rounding for different energy optimization problems in the dynamic speed-scaling setting. We apply our framework to various scheduling and routing problems in heterogeneous computing and networking environments. We first consider the energy minimization problem of scheduling a set of jobs on a set of parallel speed-scalable processors in a fully heterogeneous setting. For both the preemptive-non-migratory and the preemptive-migratory variants, our approach allows us to obtain solutions of almost the same quality as for the homogeneous environment. By exploiting the result for the preemptive-non-migratory variant, we are able to improve the best known approximation ratio for the single processor non-preemptive problem. Furthermore, we show that our approach allows us to obtain a constant-factor approximation algorithm for the power-aware preemptive job shop scheduling problem. Finally, we consider the min-power routing problem, where we are given a network modeled by an undirected graph and a set of uniform demands that have to be routed on integral routes from their sources to their destinations so that the energy consumption is minimized. We improve the best known approximation ratio for this problem.
    Comment: 27 pages
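
    The following is a generic, hedged sketch of the randomized-rounding primitive that configuration-LP approaches such as the one above build on, not the paper's actual algorithm: each job holds a fractional distribution over its configurations in the LP solution, and the rounding step samples one configuration per job independently according to those fractions. The job and configuration names are made up for illustration.

```python
# Hedged sketch of rounding a fractional configuration-LP solution:
# sample one configuration per job according to its LP fractions.
# This is only the generic rounding primitive, not the paper's algorithm.
import random

def round_configurations(x: dict[str, dict[str, float]]) -> dict[str, str]:
    """x[job][config] holds the LP fraction; returns one config per job."""
    chosen = {}
    for job, fractions in x.items():
        configs, weights = zip(*fractions.items())
        chosen[job] = random.choices(configs, weights=weights, k=1)[0]
    return chosen

# Made-up fractional values for two jobs and two configurations each.
x = {"j1": {"c_fast": 0.7, "c_slow": 0.3},
     "j2": {"c_fast": 0.2, "c_slow": 0.8}}
print(round_configurations(x))
```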

    New Results for Non-preemptive Speed Scaling

    We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, need to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and admits a constant-factor approximation. Up until now, the (general) complexity of this problem has remained unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$-approximation was known. In addition, we show that two other special cases of this problem admit fully polynomial-time approximation schemes (FPTASs).
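
    As a small aid to reading the structural assumption above, the sketch below checks whether a set of job intervals $[r_j, d_j]$ forms a laminar family, i.e. any two intervals are either disjoint or nested. The example intervals are invented for illustration.

```python
# Hedged sketch: test whether intervals form a laminar family
# (every pair is either disjoint or one contains the other).
from itertools import combinations

def is_laminar(intervals: list[tuple[float, float]]) -> bool:
    for (r1, d1), (r2, d2) in combinations(intervals, 2):
        disjoint = d1 <= r2 or d2 <= r1
        nested = (r1 <= r2 and d2 <= d1) or (r2 <= r1 and d1 <= d2)
        if not (disjoint or nested):
            return False
    return True

print(is_laminar([(0, 10), (1, 4), (5, 9), (2, 3)]))  # True: nested/disjoint
print(is_laminar([(0, 5), (3, 8)]))                   # False: proper overlap
```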

    A Fully Polynomial-Time Approximation Scheme for Speed Scaling with Sleep State

    We study classical deadline-based preemptive scheduling of tasks in a computing environment equipped with both dynamic speed scaling and sleep state capabilities: each task is specified by a release time, a deadline and a processing volume, and has to be scheduled on a single, speed-scalable processor that is supplied with a sleep state. In the sleep state, the processor consumes no energy, but a constant wake-up cost is required to transition back to the active state. In contrast to speed scaling alone, the addition of a sleep state sometimes makes it beneficial to accelerate the processing of tasks in order to transition the processor to the sleep state for longer periods of time and thereby achieve further energy savings. The goal is to output a feasible schedule that minimizes the energy consumption. Since the introduction of the problem by Irani et al. [16], its exact computational complexity has been repeatedly posed as an open question (see e.g. [2,8,15]). The currently best known upper and lower bounds are a 4/3-approximation algorithm and NP-hardness due to [2] and [2,17], respectively. We close the aforementioned gap between the upper and lower bound on the computational complexity of speed scaling with sleep state by presenting a fully polynomial-time approximation scheme for the problem. The scheme is based on a transformation to a non-preemptive variant of the problem, and on a discretization that exploits a carefully defined lexicographical ordering among schedules.
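
    To make the sleep-state trade-off above concrete, here is a minimal, hedged sketch. It assumes, as in the usual formulation of this model (not spelled out in the abstract), that an active processor pays a static power $\beta$ even while idle, while sleeping costs nothing and waking up costs a constant $C$; sleeping during an idle gap then pays off exactly when $C$ is below the static energy the gap would otherwise waste.

```python
# Hedged sketch of the basic sleep-vs-idle trade-off. Assumption (not in
# the abstract): an active processor pays static power `beta` even at
# speed zero; sleeping is free but waking up costs `wakeup_cost`.

def sleep_during_gap(gap: float, beta: float, wakeup_cost: float) -> bool:
    """True if transitioning to sleep during an idle gap saves energy."""
    idle_energy = beta * gap      # energy burned by staying active and idle
    return wakeup_cost < idle_energy

print(sleep_during_gap(gap=5.0, beta=1.0, wakeup_cost=3.0))   # True: 3 < 5
print(sleep_during_gap(gap=2.0, beta=1.0, wakeup_cost=3.0))   # False: 3 >= 2
```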

    Speed-scaling with no Preemptions

    We revisit the non-preemptive speed-scaling problem, in which a set of jobs has to be executed on a single processor or on a set of parallel speed-scalable processors between their release dates and deadlines so that the energy consumption is minimized. We adopt the speed-scaling mechanism first introduced in [Yao et al., FOCS 1995], according to which the power dissipated is a convex function of the processor's speed. Intuitively, the higher the speed of a processor, the higher the energy consumption. For the single-processor case, we improve the best known approximation algorithm by providing a $(1+\epsilon)^{\alpha}\tilde{B}_{\alpha}$-approximation algorithm, where $\tilde{B}_{\alpha}$ is a generalization of the Bell number. For the multiprocessor case, we present an approximation algorithm of ratio $\tilde{B}_{\alpha}((1+\epsilon)(1+\frac{w_{\max}}{w_{\min}}))^{\alpha}$, improving the best known result by a factor of $(\frac{5}{2})^{\alpha-1}(\frac{w_{\max}}{w_{\min}})^{\alpha}$. Notice that our result holds for the fully heterogeneous environment, while the previously known result holds only in the more restricted case of parallel processors with identical power functions.
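
    For readers wondering what the generalized Bell number in these ratios evaluates to, the sketch below assumes the Dobinski-style definition commonly used in this line of work, $\tilde{B}_{\alpha} = \frac{1}{e}\sum_{k \ge 0} \frac{k^{\alpha}}{k!}$, i.e. the $\alpha$-th moment of a Poisson random variable with parameter 1; for integer $\alpha$ this coincides with the ordinary Bell number. Treat the definition as an assumption rather than a quotation from the paper.

```python
# Hedged numeric sketch of the generalized Bell number, assuming the
# Dobinski-style definition B~_alpha = (1/e) * sum_{k>=0} k**alpha / k!.
import math

def generalized_bell(alpha: float, terms: int = 60) -> float:
    return sum(k ** alpha * math.exp(-1) / math.factorial(k)
               for k in range(terms))

print(generalized_bell(3))     # ~5.0, the ordinary Bell number B_3
print(generalized_bell(2.5))   # a fractional alpha, as allowed above
```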

    Online Non-preemptive Scheduling on Unrelated Machines with Rejections

    When a computer system schedules jobs, there is typically a significant cost associated with preempting a job during execution. This cost can stem from the expensive task of saving the memory's state and loading data into and out of memory. It is desirable to schedule jobs non-preemptively to avoid the costs of preemption. There is a need for non-preemptive system schedulers on desktops, servers and data centers. Despite this need, there is a gap between theory and practice. Indeed, few non-preemptive online schedulers are known to have strong foundational guarantees. This gap is likely due to strong lower bounds on any online algorithm for popular objectives. Indeed, typical worst-case analysis approaches, and even resource-augmented approaches such as speed augmentation, result in all algorithms having poor performance guarantees. This paper considers online non-preemptive scheduling problems in the worst-case rejection model, where the algorithm is allowed to reject a small fraction of jobs. By rejecting only a few jobs, this paper shows that the strong lower bounds can be circumvented. This approach can be used to discover algorithmic scheduling policies with desirable worst-case guarantees. Specifically, the paper presents algorithms for the following two objectives: minimizing the total flow-time, and minimizing the total weighted flow-time plus energy under the speed-scaling mechanism. The algorithms have a small constant competitive ratio while rejecting only a constant fraction of jobs. Beyond these specific results, the paper asserts that alternative models beyond speed augmentation should be explored to aid in the discovery of good schedulers in the face of the requirement of being online and non-preemptive.

    Throughput Maximization in Multiprocessor Speed-Scaling

    We are given a set of $n$ jobs that have to be executed on a set of $m$ speed-scalable machines that can vary their speeds dynamically using the energy model introduced in [Yao et al., FOCS'95]. Every job $j$ is characterized by its release date $r_j$, its deadline $d_j$, its processing volume $p_{i,j}$ if $j$ is executed on machine $i$, and its weight $w_j$. We are also given a budget of energy $E$ and our objective is to maximize the weighted throughput, i.e. the total weight of jobs that are completed between their respective release dates and deadlines. We propose a polynomial-time approximation algorithm where the preemption of the jobs is allowed but not their migration. Our algorithm uses a primal-dual approach on a linearized version of a convex program with linear constraints. Furthermore, we present two optimal algorithms for the non-preemptive case where the number of machines is bounded by a fixed constant. More specifically, we consider: (a) the case of identical processing volumes, i.e. $p_{i,j}=p$ for every $i$ and $j$, for which we present a polynomial-time algorithm for the unweighted version, which becomes a pseudopolynomial-time algorithm for the weighted throughput version, and (b) the case of agreeable instances, i.e. for which $r_i \le r_j$ if and only if $d_i \le d_j$, for which we present a pseudopolynomial-time algorithm. Both algorithms are based on a discretization of the problem and the use of dynamic programming.
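
    The "agreeable" assumption in case (b) is easy to test directly; the hedged sketch below checks that $r_i \le r_j$ holds if and only if $d_i \le d_j$ for every pair of jobs, with invented example instances.

```python
# Hedged sketch: test the agreeable-instance property used in case (b),
# i.e. r_i <= r_j if and only if d_i <= d_j for every pair of jobs.
from itertools import combinations

def is_agreeable(jobs: list[tuple[float, float]]) -> bool:
    """jobs is a list of (release_date, deadline) pairs."""
    for (r1, d1), (r2, d2) in combinations(jobs, 2):
        if (r1 <= r2) != (d1 <= d2) or (r2 <= r1) != (d2 <= d1):
            return False
    return True

print(is_agreeable([(0, 4), (1, 6), (3, 9)]))  # True: windows ordered consistently
print(is_agreeable([(0, 9), (1, 5)]))          # False: nested windows
```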

    Throughput Maximization in the Speed-Scaling Setting

    We are given a set of $n$ jobs and a single processor that can vary its speed dynamically. Each job $J_j$ is characterized by its processing requirement (work) $p_j$, its release date $r_j$ and its deadline $d_j$. We are also given a budget of energy $E$, and we study the scheduling problem of maximizing the throughput (i.e. the number of jobs which are completed on time). We propose a dynamic programming algorithm that solves the preemptive case of the problem, i.e. when the execution of the jobs may be interrupted and resumed later, in pseudo-polynomial time. Our algorithm can be adapted to solve the weighted version of the problem, where every job is associated with a weight $w_j$ and the objective is the maximization of the sum of the weights of the jobs that are completed on time. Moreover, we provide a strongly polynomial-time algorithm to solve the non-preemptive unweighted case when the jobs have the same processing requirements. For the weighted case, our algorithm can be adapted to solve the non-preemptive version of the problem in pseudo-polynomial time.
    Comment: submitted to SODA 201
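
    As a toy companion to the throughput objective above (and explicitly not the paper's dynamic program), the sketch below handles only the special case where the selected jobs have pairwise disjoint windows: each job $J_j$ then costs $p_j^{\alpha}/(d_j - r_j)^{\alpha-1}$ energy on its own, and maximizing the number of completed jobs under the budget $E$ reduces to greedily taking the cheapest jobs. The value $\alpha = 3$ and the instance are assumptions for illustration only.

```python
# Hedged toy sketch for pairwise disjoint windows only: each job's minimum
# energy is p**alpha / (d - r)**(alpha - 1), and greedily taking the
# cheapest jobs maximizes the number completed within the budget E.
# The general, overlapping case needs the paper's dynamic program.

def max_throughput_disjoint(jobs, E, alpha=3.0):
    """jobs: list of (p_j, r_j, d_j) with pairwise disjoint windows."""
    costs = sorted(p ** alpha / (d - r) ** (alpha - 1) for p, r, d in jobs)
    count, spent = 0, 0.0
    for c in costs:
        if spent + c > E:
            break
        spent += c
        count += 1
    return count

print(max_throughput_disjoint([(2, 0, 2), (4, 3, 4), (1, 5, 9)], E=10.0))  # 2
```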