
    Speed-scaling with no Preemptions

    We revisit the non-preemptive speed-scaling problem, in which a set of jobs has to be executed on a single processor or on a set of parallel speed-scalable processors between their release dates and deadlines so that the energy consumption is minimized. We adopt the speed-scaling mechanism first introduced in [Yao et al., FOCS 1995], according to which the power dissipated is a convex function of the processor's speed; intuitively, the higher the speed of a processor, the higher its energy consumption. For the single-processor case, we improve the best known approximation algorithm by providing a $(1+\epsilon)^{\alpha}\tilde{B}_{\alpha}$-approximation algorithm, where $\tilde{B}_{\alpha}$ is a generalization of the Bell number. For the multiprocessor case, we present an approximation algorithm of ratio $\tilde{B}_{\alpha}((1+\epsilon)(1+\frac{w_{\max}}{w_{\min}}))^{\alpha}$, improving the best known result by a factor of $(\frac{5}{2})^{\alpha-1}(\frac{w_{\max}}{w_{\min}})^{\alpha}$. Notice that our result holds in the fully heterogeneous environment, while the previously known result holds only in the more restricted case of parallel processors with identical power functions.
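    As a concrete illustration of the model referenced above (a minimal sketch under the usual convention that power is $P(s)=s^{\alpha}$ and a job runs at one constant speed; the function names are hypothetical), the energy of a job is its work times $s^{\alpha-1}$, so for a single job in isolation the slowest feasible constant speed is optimal by convexity:

        # Minimal sketch of the speed-scaling energy model of Yao et al. [FOCS 1995]:
        # running at speed s dissipates power s**alpha (alpha > 1, convex), so a job
        # of work w executed at constant speed s takes w/s time and costs w * s**(alpha-1).
        # Illustrative names only; this is not the paper's approximation algorithm.

        def energy_at_speed(work, speed, alpha):
            """Energy to process `work` units at constant `speed` under P(s) = s**alpha."""
            duration = work / speed
            return (speed ** alpha) * duration      # = work * speed**(alpha - 1)

        def min_energy_single_job(work, release, deadline, alpha):
            """For one job alone, the slowest feasible constant speed is optimal,
            since the power function is convex."""
            slowest_feasible_speed = work / (deadline - release)
            return energy_at_speed(work, slowest_feasible_speed, alpha)

        # Doubling the speed with alpha = 3 quadruples the energy:
        print(energy_at_speed(work=10, speed=1, alpha=3))   # 10.0
        print(energy_at_speed(work=10, speed=2, alpha=3))   # 40.0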

    Energy-efficient algorithms for non-preemptive speed-scaling

    We improve complexity bounds for energy-efficient speed scheduling problems for both the single-processor and multi-processor cases. Energy conservation has become a major concern, so revisiting traditional scheduling problems to take energy consumption into account has been on the agenda of the scheduling community for the past few years. We consider the energy-minimizing speed-scaling problem introduced by Yao et al., in which we wish to schedule a set of jobs, each with a release date, a deadline and a work volume, on a set of identical processors. The processors may change speed as a function of time, and the energy they consume is the $\alpha$-th power of their speed. The objective is to find a feasible schedule that minimizes the total energy used. We show that in the setting with an arbitrary number of processors where all work volumes are equal, there is a $2(1+\varepsilon)(5(1+\varepsilon))^{\alpha-1}\tilde{B}_{\alpha} = O_{\alpha}(1)$-approximation algorithm, where $\tilde{B}_{\alpha}$ is the generalized Bell number. This is the first constant-factor algorithm for this problem. The algorithm extends to general unequal processor-dependent work volumes, at the cost of a factor of $(\frac{(1+r)r}{2})^{\alpha}$ in the approximation, where $r$ is the maximum ratio between two work volumes. We then show that this latter problem is APX-hard, even in the special case where all release dates and deadlines are equal and $r$ is 4. In the single-processor case, we introduce a new linear programming formulation of speed scaling and prove that its integrality gap is at most $12^{\alpha-1}$. As a corollary, we obtain a $(12(1+\varepsilon))^{\alpha-1}$-approximation algorithm for the single-processor case, improving on the previous best bound of $2^{\alpha-1}(1+\varepsilon)^{\alpha}\tilde{B}_{\alpha}$ when $\alpha \ge 25$.
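    For context, the classical preemptive single-processor problem of Yao et al. is solved exactly by the critical-interval (YDS) algorithm, against which such non-preemptive approximation ratios are measured. Below is a minimal sketch of that algorithm, assuming jobs are given as (release, deadline, work) triples; it is illustrative background, not the LP-based algorithm of this paper:

        def yds_speeds(jobs):
            """Critical-interval (YDS) algorithm for the *preemptive* single-processor
            speed-scaling problem of Yao et al.; returns (job_ids, speed) pairs, where
            each group of jobs is run at the stated speed (EDF within the group).
            Sketch for illustration only."""
            jobs = {j: rdw for j, rdw in enumerate(jobs)}   # id -> (release, deadline, work)
            groups = []
            while jobs:
                points = sorted({t for r, d, _ in jobs.values() for t in (r, d)})
                best = None
                for i, t1 in enumerate(points):
                    for t2 in points[i + 1:]:
                        inside = [j for j, (r, d, _) in jobs.items() if t1 <= r and d <= t2]
                        if not inside:
                            continue
                        density = sum(jobs[j][2] for j in inside) / (t2 - t1)
                        if best is None or density > best[0]:
                            best = (density, t1, t2, inside)
                density, t1, t2, inside = best
                groups.append((inside, density))            # these jobs run at speed = density
                for j in inside:
                    del jobs[j]
                # Remove the critical interval [t1, t2] from the time axis and shift the rest.
                def shrink(t):
                    return t if t <= t1 else (t - (t2 - t1) if t >= t2 else t1)
                jobs = {j: (shrink(r), shrink(d), w) for j, (r, d, w) in jobs.items()}
            return groups

        # Job 1 defines the first critical interval [1, 3] at speed 2; job 0 then runs at speed 1.
        print(yds_speeds([(0, 4, 2), (1, 3, 4)]))   # [([1], 2.0), ([0], 1.0)]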

    The energy scheduling problem: Industrial case-study and constraint propagation techniques

    This paper deals with production scheduling involving energy constraints, typically electrical energy. We start with an industrial case study for which we propose a two-step integer/constraint programming method. From the industrial problem we derive a generic problem, the Energy Scheduling Problem (EnSP). We propose an extension of specific resource-constraint propagation techniques to efficiently prune the search space when solving the EnSP. We also present a branching scheme to solve the problem via tree search. Finally, computational results are provided.
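    The flavour of propagation involved can be sketched as follows (assuming tasks are given as (earliest start, latest end, duration, power) and a constant energy capacity per unit of time; this simplified energetic check is only an illustration, not the EnSP propagators of the paper): if the energy the tasks must unavoidably consume inside an interval exceeds the energy available there, the search node can be pruned.

        def overlap(a1, a2, b1, b2):
            """Length of the intersection of the intervals [a1, a2] and [b1, b2]."""
            return max(0, min(a2, b2) - max(a1, b1))

        def energy_prune(tasks, capacity, t1, t2):
            """Simplified energetic-reasoning check: return True if the energy the
            tasks *must* consume inside [t1, t2] exceeds capacity * (t2 - t1),
            i.e. the current search node can be pruned.  `tasks` holds tuples
            (earliest_start, latest_end, duration, power); illustrative only."""
            mandatory = 0.0
            for est, let, dur, power in tasks:
                left = overlap(est, est + dur, t1, t2)      # task placed as early as possible
                right = overlap(let - dur, let, t1, t2)     # task placed as late as possible
                mandatory += power * min(left, right)       # unavoidable part inside [t1, t2]
            return mandatory > capacity * (t2 - t1)

        # One task of duration 4 and power 3 must overlap [2, 6] by at least 2 time units:
        print(energy_prune([(0, 8, 4, 3.0)], capacity=1.0, t1=2, t2=6))   # True (6.0 > 4.0)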

    Energy Efficient Scheduling and Routing via Randomized Rounding

    We propose a unifying framework, based on configuration linear programs and randomized rounding, for different energy optimization problems in the dynamic speed-scaling setting. We apply our framework to various scheduling and routing problems in heterogeneous computing and networking environments. We first consider the energy minimization problem of scheduling a set of jobs on a set of parallel speed-scalable processors in a fully heterogeneous setting. For both the preemptive-non-migratory and the preemptive-migratory variants, our approach allows us to obtain solutions of almost the same quality as for the homogeneous environment. By exploiting the result for the preemptive-non-migratory variant, we are able to improve the best known approximation ratio for the single-processor non-preemptive problem. Furthermore, we show that our approach yields a constant-factor approximation algorithm for the power-aware preemptive job shop scheduling problem. Finally, we consider the min-power routing problem, where we are given a network modeled by an undirected graph and a set of uniform demands that have to be routed on integral routes from their sources to their destinations so that the energy consumption is minimized. We improve the best known approximation ratio for this problem.
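    The generic rounding step behind such a framework can be sketched as follows (assuming a fractional configuration-LP solution is already available; the names and the per-job independent sampling are illustrative simplifications, not the rounding analysed in the paper):

        import random

        def round_configurations(frac_solution, rng=random):
            """Randomized rounding of a configuration LP (sketch): `frac_solution[j]`
            maps each configuration c of job j to its fractional value x_{j,c},
            with the values for each job summing to 1.  Every job independently
            picks one configuration with probability x_{j,c}."""
            integral = {}
            for job, weights in frac_solution.items():
                configs = list(weights)
                integral[job] = rng.choices(configs, weights=[weights[c] for c in configs], k=1)[0]
            return integral

        # Toy fractional solution: job "a" split between two (processor, speed) configurations.
        x = {"a": {("cpu1", 1.0): 0.7, ("cpu2", 2.0): 0.3}}
        print(round_configurations(x))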

    Energy Efficient Scheduling of MapReduce Jobs

    MapReduce has emerged as a prominent programming model for data-intensive computation. In this work, we study power-aware MapReduce scheduling in the speed-scaling setting first introduced by Yao et al. [FOCS 1995]. We focus on minimizing the total weighted completion time of a set of MapReduce jobs under a given energy budget. Using a linear programming relaxation of our problem, we derive a polynomial-time constant-factor approximation algorithm. We also propose a convex programming formulation that we combine with standard list scheduling policies, and we evaluate their performance using simulations.
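    A small sketch of the kind of list policy mentioned above is Smith's rule, which orders jobs by weight over processing time and minimizes total weighted completion time on a single machine; it is given only as an illustration of the list scheduling policies such relaxations are combined with, not as the paper's algorithm:

        def smith_rule(jobs):
            """Order jobs by Smith's rule (largest weight / processing-time ratio first)
            and return the order together with the total weighted completion time.
            `jobs` holds tuples (name, weight, processing_time); single-machine sketch."""
            order = sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)
            time, total = 0.0, 0.0
            for name, weight, proc in order:
                time += proc                    # completion time of this job
                total += weight * time
            return [name for name, _, _ in order], total

        jobs = [("map1", 3, 2.0), ("map2", 1, 4.0), ("reduce1", 2, 1.0)]
        print(smith_rule(jobs))   # (['reduce1', 'map1', 'map2'], 18.0)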

    Power efficient job scheduling by predicting the impact of processor manufacturing variability

    Modern CPUs suffer from performance and power consumption variability due to the manufacturing process. As a result, systems that do not account for this manufacturing variability suffer performance degradation and wasted power. To avoid such a negative impact, users and system administrators must actively counteract manufacturing variability. In this work we show that parallel systems benefit from taking the consequences of manufacturing variability into account when making scheduling decisions at the job-scheduler level. We also show that it is possible to predict the impact of this variability on specific applications by using variability-aware power prediction models. Based on these power models, we propose two job scheduling policies that consider the effects of manufacturing variability for each application and that ensure that power consumption stays under a system-wide power budget. We evaluate our policies under different power budgets and traffic scenarios, consisting of both single- and multi-node parallel applications, utilizing up to 4096 cores in total. We demonstrate that, compared to contemporary scheduling policies used on production clusters, they decrease job turnaround time by up to 31% while saving up to 5.5% of energy.
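    A minimal sketch of a power-budget-aware dispatch decision of this kind follows; the power model, the numbers and the greedy policy are hypothetical stand-ins for the variability-aware prediction models and policies described above:

        def dispatch_under_power_budget(queue, predicted_power, running_power, budget):
            """Greedy dispatch sketch: launch queued jobs in order only while the
            variability-aware power prediction keeps the whole system under a
            global power budget.  `predicted_power(job)` stands in for a
            per-application power model; illustrative only."""
            launched, used = [], running_power
            for job in list(queue):
                power = predicted_power(job)
                if used + power <= budget:
                    launched.append(job["name"])
                    used += power
                    queue.remove(job)
            return launched, used

        # Hypothetical prediction: power grows with node count times a per-node variability factor.
        def predicted_power(job):
            return 100.0 * job["nodes"] * job["variability"]

        queue = [{"name": "A", "nodes": 4, "variability": 1.1},
                 {"name": "B", "nodes": 8, "variability": 0.9},
                 {"name": "C", "nodes": 2, "variability": 1.0}]
        print(dispatch_under_power_budget(queue, predicted_power, running_power=300.0, budget=1500.0))
        # (['A', 'B'], 1460.0) -- job C stays queued because it would exceed the budget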

    A survey of offline algorithms for energy minimization under deadline constraints

    Modern computers allow software to adjust power management settings such as speed and sleep modes to decrease power consumption, possibly at the price of decreased performance. The impact of these techniques depends mainly on the schedule of the tasks. This article surveys the underlying theoretical results on power management, as well as offline scheduling algorithms that aim at minimizing energy consumption under real-time constraints.
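    One concrete formula from this line of work (a textbook fact about combining speed scaling with sleep states, stated here only for illustration, not as a result of the survey): with a power function $P(s) = s^{\alpha} + \beta$, where $\beta$ is the static power drawn while awake, the energy per unit of work $P(s)/s$ is minimized at the so-called critical speed, below which it never pays to slow down further.

        def critical_speed(alpha, beta):
            """With P(s) = s**alpha + beta, energy per unit of work is
            P(s)/s = s**(alpha - 1) + beta/s, which is minimized at
            s* = (beta / (alpha - 1)) ** (1 / alpha); below s* it is better to
            run faster and sleep (race-to-sleep)."""
            return (beta / (alpha - 1)) ** (1.0 / alpha)

        print(critical_speed(alpha=3, beta=2))   # (2 / 2) ** (1/3) = 1.0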

    Verification and control of partially observable probabilistic systems

    We present automated techniques for the verification and control of partially observable, probabilistic systems for both discrete and dense models of time. For the discrete-time case, we formally model these systems using partially observable Markov decision processes; for dense time, we propose an extension of probabilistic timed automata in which local states are partially visible to an observer or controller. We give probabilistic temporal logics that can express a range of quantitative properties of these models, relating to the probability of an event's occurrence or the expected value of a reward measure. We then propose techniques to either verify that such a property holds or synthesise a controller for the model which makes it true. Our approach is based on a grid-based abstraction of the uncountable belief space induced by partial observability and, for dense-time models, an integer discretisation of real-time behaviour. The former is necessarily approximate, since the underlying problem is undecidable; however, we show how both lower and upper bounds on numerical results can be generated. We illustrate the effectiveness of the approach by implementing it in the PRISM model checker and applying it to several case studies from the domains of task and network scheduling, computer security and planning.
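    The belief states that the grid-based abstraction above approximates are obtained by a standard Bayes-filter update; a minimal discrete sketch follows (a generic POMDP update with made-up numbers, not the PRISM implementation):

        def belief_update(belief, action, observation, trans, obs):
            """One Bayes-filter step on a discrete POMDP belief: `trans[a][s][s2]`
            is P(s2 | s, a) and `obs[a][s2][o]` is P(o | s2, a).  Generic sketch."""
            states = list(belief)
            new = {}
            for s2 in states:
                predicted = sum(belief[s] * trans[action][s][s2] for s in states)
                new[s2] = obs[action][s2][observation] * predicted
            norm = sum(new.values())
            if norm == 0:
                raise ValueError("observation has zero probability under this belief")
            return {s: value / norm for s, value in new.items()}

        # Two-state example with a noisy sensor (hypothetical numbers).
        belief = {"good": 0.5, "bad": 0.5}
        trans = {"wait": {"good": {"good": 0.9, "bad": 0.1}, "bad": {"good": 0.0, "bad": 1.0}}}
        obs = {"wait": {"good": {"ok": 0.8, "alarm": 0.2}, "bad": {"ok": 0.3, "alarm": 0.7}}}
        print(belief_update(belief, "wait", "alarm", trans, obs))   # ~{'good': 0.19, 'bad': 0.81}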