2,502 research outputs found

    A Fully Polynomial-Time Approximation Scheme for Speed Scaling with Sleep State

    Full text link
    We study classical deadline-based preemptive scheduling of tasks in a computing environment equipped with both dynamic speed scaling and sleep state capabilities: Each task is specified by a release time, a deadline and a processing volume, and has to be scheduled on a single, speed-scalable processor that is supplied with a sleep state. In the sleep state, the processor consumes no energy, but a constant wake-up cost is required to transition back to the active state. In contrast to speed scaling alone, the addition of a sleep state makes it sometimes beneficial to accelerate the processing of tasks in order to transition the processor to the sleep state for longer periods of time and achieve further energy savings. The goal is to output a feasible schedule that minimizes the energy consumption. Since the introduction of the problem by Irani et al. [16], its exact computational complexity has been repeatedly posed as an open question (see e.g. [2,8,15]). The currently best known upper and lower bounds are a 4/3-approximation algorithm and NP-hardness due to [2] and [2,17], respectively. We close the aforementioned gap between the upper and lower bound on the computational complexity of speed scaling with sleep state by presenting a fully polynomial-time approximation scheme for the problem. The scheme is based on a transformation to a non-preemptive variant of the problem, and a discretization that exploits a carefully defined lexicographical ordering among schedules.
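
    The following is a minimal sketch of the energy model described above, not the paper's FPTAS. It assumes an active power function of the form s^alpha + g (the abstract does not specify one), a static term g > 0 paid whenever the processor is awake, a sleep state that consumes nothing, and a wake-up cost C; for each gap between pieces of work, the cheaper of idling and sleeping is chosen. All names and parameter values below are hypothetical.

    def schedule_energy(intervals, alpha, g, C):
        """intervals: list of (start, end, speed) pieces of work on one processor."""
        intervals = sorted(intervals)
        energy = C  # waking up for the first piece of work (a modelling choice)
        prev_end = None
        for start, end, speed in intervals:
            if prev_end is not None:
                gap = start - prev_end
                energy += min(g * gap, C)  # idle through the gap or sleep and wake up
            energy += (end - start) * (speed ** alpha + g)  # running cost
            prev_end = end
        return energy

    # Two candidate schedules for the same two tasks (volumes 1 and 2): stretching
    # the first task over its whole window [0, 4] versus accelerating it to speed 1
    # and sleeping through the resulting gap. Here acceleration is cheaper, which is
    # the effect the abstract describes.
    print(schedule_energy([(0, 4, 0.25), (4, 6, 1.0)], alpha=2, g=1.0, C=1.5))  # 9.75
    print(schedule_energy([(0, 1, 1.00), (4, 6, 1.0)], alpha=2, g=1.0, C=1.5))  # 9.0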

    Skeletons and Minimum Energy Scheduling

    Get PDF
    Consider the problem where n jobs, each with a release time, a deadline and a required processing time, are to be feasibly scheduled in a single- or multi-processor setting so as to minimize the total energy consumption of the schedule. A processor has two available states: a sleep state, where no energy is consumed but also no processing can take place, and an active state, which consumes energy at a rate of one and in which jobs can be processed. Transitioning from the active to the sleep state does not incur any further energy cost, but transitioning from the sleep to the active state requires q energy units. Jobs may be preempted and (in the multi-processor case) migrated. The single-processor case of the problem is known to be solvable in polynomial time via an involved dynamic program, whereas the only known approximation algorithm for the multi-processor case attains an approximation factor of 3 and is based on rounding the solution to a linear programming relaxation of the problem. In this work, we present efficient and combinatorial approximation algorithms for both the single- and the multi-processor setting. Previously, only an algorithm based on linear programming was known for the multi-processor case. Our algorithms build upon the concept of a skeleton, a basic (and not necessarily feasible) schedule that captures the fact that some processor(s) must be active at some time point during an interval. Finally, we further demonstrate the power of skeletons by providing a 2-approximation algorithm for the multi-processor case, thus improving upon the recent breakthrough 3-approximation result. Our algorithm is based on a novel rounding scheme for a linear programming relaxation of the problem which incorporates skeletons.
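
    A quick way to see the cost structure (this only evaluates a given schedule and is not the paper's skeleton-based algorithm): with the active state charged at rate one and a wake-up cost of q, a schedule's energy is its total active time plus, for each idle gap, the cheaper of staying active (the gap's length) and sleeping then waking up again (q). Charging an initial wake-up, as below, is a modelling convention.

    def energy(active_intervals, q):
        intervals = sorted(active_intervals)
        total = sum(end - start for start, end in intervals)  # rate-one active cost
        total += q  # initial wake-up (drop this line if it is not charged)
        for (_, e1), (s2, _) in zip(intervals, intervals[1:]):
            gap = s2 - e1
            total += min(gap, q)  # idle through short gaps, sleep through long ones
        return total

    print(energy([(0, 3), (4, 6), (20, 22)], q=5))  # 18: the gap of 1 is idled, the gap of 14 is slept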

    A survey of offline algorithms for energy minimization under deadline constraints

    Get PDF
    Modern computers allow software to adjust power management settings like speed and sleep modes to decrease the power consumption, possibly at the price of decreased performance. The impact of these techniques depends mainly on the schedule of the tasks. This article gives a survey of the underlying theoretical results on power management, as well as of offline scheduling algorithms that aim at minimizing the energy consumption under real-time constraints.

    Energy Efficient Scheduling via Partial Shutdown

    Get PDF
    Motivated by issues of saving energy in data centers, we define a collection of new problems referred to as "machine activation" problems. The central framework we introduce considers a collection of $m$ machines (unrelated or related), with each machine $i$ having an activation cost of $a_i$. There is also a collection of $n$ jobs that need to be performed, and $p_{i,j}$ is the processing time of job $j$ on machine $i$. We assume that there is an activation cost budget of $A$: we would like to select a subset $S$ of the machines to activate with total cost $a(S) \le A$ and find a schedule for the $n$ jobs on the machines in $S$ minimizing the makespan (or any other metric). For the general unrelated machine activation problem, our main results are that if there is a schedule with makespan $T$ and activation cost $A$, then we can obtain a schedule with makespan \makespanconstant $T$ and activation cost \costconstant $A$, for any $\epsilon > 0$. We also consider assignment costs for jobs as in the generalized assignment problem and, using our framework, provide algorithms that minimize the machine activation and the assignment cost simultaneously. In addition, we present a greedy algorithm, which only works for the basic version, and yields a makespan of $2T$ and an activation cost of $A(1+\ln n)$. For the uniformly related parallel machine scheduling problem, we develop a polynomial-time approximation scheme that outputs a schedule with the property that the activation cost of the subset of machines is at most $A$ and the makespan is at most $(1+\epsilon)T$, for any $\epsilon > 0$.
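
    For concreteness, here is an illustrative (hypothetical) encoding of a machine-activation instance and of how a candidate solution is judged; it is not one of the paper's algorithms. A solution activates a set S of machines, pays a(S), and is then evaluated by the makespan of its job assignment under the processing times p_{i,j}.

    def evaluate(activation_cost, p, assignment):
        """activation_cost: machine -> a_i; p: (machine, job) -> p_{i,j};
        assignment: job -> machine (the machines used form the activated set S)."""
        active = set(assignment.values())
        a_S = sum(activation_cost[i] for i in active)
        load = {i: 0.0 for i in active}
        for job, machine in assignment.items():
            load[machine] += p[(machine, job)]
        return a_S, max(load.values())  # (activation cost a(S), makespan)

    a = {1: 4.0, 2: 1.0}
    p = {(1, 'j1'): 2.0, (1, 'j2'): 2.0, (2, 'j1'): 5.0, (2, 'j2'): 5.0}
    print(evaluate(a, p, {'j1': 1, 'j2': 1}))  # (4.0, 4.0): costlier machine, smaller makespan
    print(evaluate(a, p, {'j1': 2, 'j2': 2}))  # (1.0, 10.0): cheaper machine, larger makespan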

    New Results for Non-preemptive Speed Scaling

    No full text
    We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, need to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and admits a constant-factor approximation. Until now, the (general) complexity of this problem has remained unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$-approximation was known. In addition, we show that two other special cases of this problem admit fully polynomial-time approximation schemes (FPTASs).
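
    As a small worked example of the power function above (and not of the paper's approximation schemes): running a job of volume w non-preemptively in an execution window of length t at the constant speed w/t uses energy (w/t)^alpha * t = w^alpha / t^(alpha-1), and by convexity of P, varying the speed inside the window can only cost more.

    def constant_speed_energy(volume, duration, alpha=3.0):
        speed = volume / duration
        return speed ** alpha * duration  # equals volume**alpha / duration**(alpha - 1)

    # Stretching the same work over a window twice as long divides the energy
    # by 2**(alpha - 1), i.e. by 4 for alpha = 3.
    print(constant_speed_energy(4.0, 2.0))  # 16.0
    print(constant_speed_energy(4.0, 4.0))  # 4.0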

    Lagrangian Duality based Algorithms in Online Energy-Efficient Scheduling

    Get PDF
    We study online scheduling problems in the general energy model of speed scaling with power down. The latter combines the two extensively studied energy models, speed scaling and power down, into a more realistic one. Due to the limits of current techniques, only few results are known in the general energy model, in contrast to the large literature on the previous ones. In this paper, we consider a Lagrangian duality based approach to design and analyze algorithms in the general energy model. We show the applicability of the approach to problems which are unlikely to admit a convex relaxation. Specifically, we consider the problem of minimizing energy on a single machine in which jobs arrive online and have to be processed before their deadlines. We present an $\alpha^\alpha$-competitive algorithm (whose analysis is tight up to a constant factor) where the power function has the typical form $z^\alpha + g$ for constants $\alpha > 2$ and $g \ge 0$. Besides, we also consider the problem of minimizing weighted flow-time plus energy. We give an $O(\alpha/\ln\alpha)$-competitive algorithm, which matches (up to a constant factor) the currently best known algorithm for this problem in the restricted model of speed scaling.
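
    One standard quantity in this energy model, shown here only as a hedged illustration (the abstract does not say how it enters the analysis): with power s^alpha + g, the energy spent per unit of work at speed s is (s^alpha + g)/s, and elementary calculus gives the speed minimizing it, usually called the critical speed.

    def critical_speed(alpha, g):
        # minimizes (s**alpha + g) / s; the derivative vanishes when (alpha - 1) * s**alpha = g
        return (g / (alpha - 1)) ** (1.0 / alpha)

    alpha, g = 3.0, 2.0
    s = critical_speed(alpha, g)
    print(s, (s ** alpha + g) / s)  # speed 1.0, cheapest energy per unit of work 3.0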

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Full text link
    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, the support of many simultaneous users, and improvement in energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains, and computational operations on large matrices. The complexity of the digital processing has in the past been viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply-scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in the interconnection of many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at a power consumption of 55 mW (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction of the consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating with low-complexity RF and analog hardware chains are clarified. It is illustrated how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed. Open challenges and directions for future research are suggested. Comment: submitted to IEEE Transactions on Signal Processing.
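
    Since the abstract singles out zero-forcing precoding, here is a minimal NumPy sketch of that operation for orientation; it is an illustration only and does not reflect the cited ASIC implementation or its fixed-point arithmetic. For a downlink channel matrix H with K terminal rows and M antenna columns, the zero-forcing precoder is W = H^H (H H^H)^{-1}, which makes the effective channel H W the identity and so cancels inter-user interference.

    import numpy as np

    K, M = 8, 128  # terminals and base-station antennas, matching the figures in the text
    rng = np.random.default_rng(0)
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # unnormalized zero-forcing precoder
    print(np.allclose(H @ W_zf, np.eye(K)))            # True: inter-user interference is removed
    W = W_zf / np.linalg.norm(W_zf)                    # scale to a unit total power budget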