48,960 research outputs found

    Energy-Efficient Multiprocessor Scheduling for Flow Time and Makespan

    We consider energy-efficient scheduling on multiprocessors, where the speed of each processor can be individually scaled and a processor running at speed s consumes power s^α, for α > 1. A scheduling algorithm needs to decide, at any time, both the processor allocation and the processor speeds for a set of parallel jobs with time-varying parallelism. The objective is to minimize the sum of the total energy consumption and a performance metric, which in this paper is either total flow time or makespan. For both objectives, we present instantaneous-parallelism clairvoyant (IP-clairvoyant) algorithms that are aware of the instantaneous parallelism of the jobs at any time but not of their future characteristics, such as remaining parallelism and work. For total flow time plus energy, we present an O(1)-competitive algorithm, which significantly improves upon the best known non-clairvoyant algorithm and is the first constant-competitive result on multiprocessor speed scaling for parallel jobs. For makespan plus energy, which is considered for the first time in the literature, we present an O(ln^{1-1/α} P)-competitive algorithm, where P is the total number of processors. We show that this algorithm is asymptotically optimal by providing a matching lower bound. In addition, we study non-clairvoyant scheduling for total flow time plus energy and present an algorithm that is O(ln P)-competitive for jobs with arbitrary release times and O(ln^{1/α} P)-competitive for jobs with identical release times. Finally, we prove an Ω(ln^{1/α} P) lower bound on the competitive ratio of any non-clairvoyant algorithm, matching the upper bound of our algorithm for jobs with identical release times.
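    To make the power model concrete, the following is a minimal sketch, not the paper's IP-clairvoyant algorithm: a single sequential job of work w run at a constant speed s finishes after w/s time units and draws power s^α throughout, so flow time plus energy is w/s + w·s^(α-1); the constant speed s* minimizing this cost follows from setting the derivative to zero. All numbers below are assumed for illustration only.

    # Illustrative sketch only (assumed numbers, not the paper's algorithm):
    # flow time plus energy for one sequential job of work w run at a constant
    # speed s under the power model P(s) = s**alpha.

    def cost(w: float, s: float, alpha: float) -> float:
        """Flow time plus energy when work w is run at constant speed s."""
        flow_time = w / s
        energy = flow_time * s ** alpha   # power s^alpha drawn for w/s time units
        return flow_time + energy

    if __name__ == "__main__":
        w, alpha = 1.0, 3.0               # alpha = 3 is a common CMOS-style choice
        s_star = (alpha - 1) ** (-1.0 / alpha)   # minimizer of w/s + w*s**(alpha-1)
        for s in (0.5, s_star, 1.0, 2.0):
            print(f"s = {s:.3f}   flow time + energy = {cost(w, s, alpha):.3f}")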

    Energy-Efficient Scheduling for Homogeneous Multiprocessor Systems

    We present a number of novel algorithms, based on mathematical optimization formulations, for solving a homogeneous multiprocessor scheduling problem while minimizing the total energy consumption. In particular, for a system with a discrete speed set, we propose solving a tractable linear program. Our formulations are based on a fluid model and a global scheduling scheme, i.e. tasks are allowed to migrate between processors. The new methods are compared with three global energy/feasibility-optimal workload allocation formulations. Simulation results illustrate that our methods achieve both feasibility and energy optimality and outperform existing methods for constrained-deadline tasksets. Specifically, our algorithm can achieve up to an 80% saving compared to an algorithm without a frequency scaling scheme and up to a 70% saving compared to a constant frequency scaling scheme for some simulated tasksets. Another benefit is that our algorithms solve the scheduling problem in one step instead of using a recursive scheme. Moreover, our formulations can handle a more general class of scheduling problems, i.e. any periodic real-time taskset with arbitrary deadlines. Lastly, our algorithms can be applied to both online and offline scheduling schemes. Comment: Corrected typos: definition of J_i in Section 2.1; (3b)-(3c); definition of \Phi_A and \Phi_D in paragraph after (6b). Previous equations were correct only for special case of p_i=d_
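    As a rough illustration of a discrete-speed linear program (a simplified single-task sketch with assumed numbers, not the paper's fluid-model, multiprocessor formulation), the decision variables below are the amounts of time one task spends at each available speed level; the constraints require its work to be finished by its deadline, and the objective is the resulting energy.

    # Simplified sketch (assumed data): minimize energy for one task that may
    # split its execution across discrete speed levels, subject to finishing
    # its work by its deadline. Solved as a small LP with scipy.
    import numpy as np
    from scipy.optimize import linprog

    speeds = np.array([0.5, 1.0, 1.5])        # available discrete speed levels
    power  = speeds ** 3                      # assumed cubic power model
    work, deadline = 1.2, 1.0                 # required work and deadline

    # Variables t[k]: time spent at speed level k (t[k] >= 0).
    c = power                                 # energy = sum_k t[k] * power[k]
    A_ub = np.vstack([-speeds,                # row 1: -sum_k t[k]*speeds[k] <= -work
                      np.ones_like(speeds)])  # row 2:  sum_k t[k]           <= deadline
    b_ub = np.array([-work, deadline])        # i.e. work completed and deadline met

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(speeds))
    print("time per speed level:", res.x, "   energy:", res.fun)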

    On Reliability-Aware Server Consolidation in Cloud Datacenters

    In the past few years, datacenter (DC) energy consumption has become an important issue in the technology world. Server consolidation using virtualization and virtual machine (VM) live migration allows cloud DCs to improve resource utilization and hence energy efficiency. To save energy, consolidation techniques try to turn off idle servers, but because of workload fluctuations these offline servers must later be turned back on to support increased resource demands. These repeated on-off cycles affect the hardware reliability and wear-and-tear of servers and, as a result, increase maintenance and replacement costs. In this paper we propose a holistic mathematical model for reliability-aware server consolidation with the objective of minimizing total DC costs, including energy and reliability costs. In effect, we try to minimize the number of active PMs and racks in a reliability-aware manner. We formulate the problem as a Mixed Integer Linear Programming (MILP) model, which is NP-complete. Finally, we evaluate the performance of our approach in different scenarios using extensive numerical MATLAB simulations. Comment: International Symposium on Parallel and Distributed Computing (ISPDC), Innsbruck, Austria, 201
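    The following is a heavily simplified sketch of a consolidation MILP (hypothetical sizes and costs, and far fewer terms than the paper's model, which also prices reliability and rack-level costs): binary variables place each VM on exactly one physical machine (PM), capacity constraints couple placements to an on/off variable per PM, and the objective counts only the energy cost of powered-on PMs.

    # Simplified sketch (assumed data): place 3 VMs onto at most 2 PMs so that
    # capacities are respected and the energy cost of powered-on PMs is minimal.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    demand   = np.array([0.4, 0.5, 0.3])      # VM resource demands (assumed)
    capacity = np.array([1.0, 1.0])           # PM capacities (assumed)
    energy   = np.array([100.0, 120.0])       # energy cost of keeping each PM on

    n_vm, n_pm = len(demand), len(capacity)
    # Variables: x[i*n_pm + j] = 1 if VM i is on PM j, then y[j] = 1 if PM j is on.
    n = n_vm * n_pm + n_pm
    c = np.concatenate([np.zeros(n_vm * n_pm), energy])

    # Each VM is placed on exactly one PM.
    A_assign = np.zeros((n_vm, n))
    for i in range(n_vm):
        A_assign[i, i * n_pm:(i + 1) * n_pm] = 1.0
    assign = LinearConstraint(A_assign, lb=1.0, ub=1.0)

    # Demand packed onto a PM must fit its capacity, and only if that PM is on.
    A_cap = np.zeros((n_pm, n))
    for j in range(n_pm):
        A_cap[j, [i * n_pm + j for i in range(n_vm)]] = demand
        A_cap[j, n_vm * n_pm + j] = -capacity[j]
    cap = LinearConstraint(A_cap, ub=0.0)

    res = milp(c, constraints=[assign, cap],
               integrality=np.ones(n), bounds=Bounds(0, 1))
    print("placement:", res.x[:n_vm * n_pm].reshape(n_vm, n_pm))
    print("PMs on:", res.x[n_vm * n_pm:], "   total energy cost:", res.fun)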

    An Energy Driven Architecture for Wireless Sensor Networks

    Most wireless sensor networks operate with very limited energy sources, namely their batteries, and hence their usefulness in real-life applications is severely constrained. The challenging issues are how to optimize the use of their energy, or to harvest their own energy, in order to lengthen their lifetimes for wider classes of application. Tackling these issues requires a robust architecture that takes into account the energy consumption level of the functional constituents and their interdependency. Without such an architecture, it would be difficult to formulate and optimize the overall energy consumption of a wireless sensor network. Unlike most current research, which focuses on a single energy constituent of WSNs independently of the other constituents, this paper presents an Energy Driven Architecture (EDA) as a new architecture and indicates a novel approach for minimising the total energy consumption of a WSN.

    A survey of energy saving techniques for mobile computers

    Portable products such as pagers, cordless and digital cellular telephones, personal audio equipment, and laptop computers are increasingly being used. Because these devices are battery powered, reducing power consumption is vital. In this report we first give a survey of techniques for accomplishing energy reduction at the hardware level, such as low-voltage components, use of sleep or idle modes, dynamic control of the processor clock frequency, clocking regions, and disabling unused peripherals. System-design techniques include minimizing external accesses, minimizing logic state transitions, and system partitioning using application-specific coprocessors. Then we review energy reduction techniques in the design of operating systems, including communication protocols, caching, scheduling, and QoS management. Finally, we give an overview of policies to optimize the application's code for energy consumption and to make it aware of power management functions. Applications play a critical role in the user's experience of a power-managed system; therefore, the application and the operating system must allow the user to control power management. Remarkably, it appears that some energy-preserving techniques not only reduce energy consumption but can also improve performance.
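    To make one of the surveyed hardware techniques concrete, here is a back-of-the-envelope sketch (with assumed capacitance, voltage, and frequency values, not figures from the survey) of why dynamic control of the processor clock frequency saves energy: dynamic power is roughly C·V^2·f, so if the supply voltage can be lowered together with the frequency, the energy spent on a fixed number of cycles drops even though the task takes longer to finish.

    # Back-of-the-envelope sketch (assumed, unit-free constants): energy per
    # task under a simple dynamic power model P = C * V**2 * f.
    def task_energy(cycles: float, freq: float, volt: float, cap: float = 1.0) -> float:
        power = cap * volt ** 2 * freq        # dynamic power ~ C * V^2 * f
        runtime = cycles / freq               # lower frequency -> longer runtime
        return power * runtime                # = C * V^2 * cycles

    cycles = 1e9
    print("full speed:", task_energy(cycles, freq=2.0, volt=1.2))
    print("half speed:", task_energy(cycles, freq=1.0, volt=0.9))  # lower V as well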