110,099 research outputs found

    EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud

    Cloud computing has become a popular way to provision computing resources, under the virtual machine (VM) abstraction, for high performance computing (HPC) users to run their applications. An HPC cloud is such a cloud computing environment. One of the challenges of energy-efficient resource allocation for VMs in an HPC cloud is the trade-off between minimizing the total energy consumption of physical machines (PMs) and satisfying Quality of Service requirements (e.g. performance). On the one hand, cloud providers want to maximize their profit by reducing power costs (e.g. by using the smallest number of running PMs). On the other hand, cloud customers (users) want the highest performance for their applications. In this paper, we focus on the scenario in which the scheduler has no global information about future user jobs and applications. Users request short-term resources at fixed start times with non-interrupted durations. We then propose a new allocation heuristic, named Energy-aware and Performance-per-watt oriented Best-fit (EPOBF), that uses a performance-per-watt metric (e.g. maximum MIPS per Watt) to choose the most energy-efficient PM for each VM. Using information from Feitelson's Parallel Workload Archive to model HPC jobs, we compare the proposed EPOBF to state-of-the-art heuristics on heterogeneous PMs (each PM has a multicore CPU). Simulations show that EPOBF can significantly reduce total energy consumption in comparison with state-of-the-art allocation heuristics. Comment: 10 pages, in Proceedings of the International Conference on Advanced Computing and Applications, Journal of Science and Technology, Vietnamese Academy of Science and Technology, ISSN 0866-708X, Vol. 51, No. 4B, 201
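
    The EPOBF heuristic described above essentially ranks candidate PMs by performance per watt and best-fits each incoming VM onto the most efficient machine that still has spare capacity. The following is a minimal Python sketch of that idea, not the authors' implementation; the PM/VM fields, the MIPS-based capacity check, and the epobf_place name are illustrative assumptions.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class PM:
            # Hypothetical physical machine: total MIPS capacity and peak power draw.
            name: str
            mips_capacity: float
            max_power_watt: float
            used_mips: float = 0.0

            @property
            def perf_per_watt(self) -> float:
                return self.mips_capacity / self.max_power_watt

        @dataclass
        class VM:
            name: str
            mips_demand: float

        def epobf_place(vm: VM, pms: List[PM]) -> Optional[PM]:
            # Keep only PMs with enough spare MIPS, then pick the one with the
            # highest performance per watt (MIPS per Watt), as EPOBF suggests.
            feasible = [p for p in pms if p.mips_capacity - p.used_mips >= vm.mips_demand]
            if not feasible:
                return None
            best = max(feasible, key=lambda p: p.perf_per_watt)
            best.used_mips += vm.mips_demand
            return best

    Placing a stream of VM requests with this rule tends to concentrate load on the most energy-efficient PMs first, which is the intuition behind the reported energy savings.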

    Approximation Algorithms for Energy Minimization in Cloud Service Allocation under Reliability Constraints

    We consider allocation problems that arise in the context of service allocation in Clouds. More specifically, we assume on the one hand that each computing resource is associated with a capacity constraint, which can be set using the Dynamic Voltage and Frequency Scaling (DVFS) method, and with a probability of failure. On the other hand, we assume that the service runs as a set of independent instances of identical Virtual Machines. Moreover, there exists a Service Level Agreement (SLA) between the Cloud provider and the client that can be expressed as follows: the client specifies a minimal number of service instances which must be alive at the end of the day, and the Cloud provider offers a list of pairs (price, compensation), the compensation being paid by the Cloud provider if it fails to keep alive the required number of instances. On the Cloud provider's side, each pair actually corresponds to a guaranteed success probability of fulfilling the constraint on the minimal number of instances. In this context, given a minimal number of instances and a probability of success, the question for the Cloud provider is to find the number of necessary resources, their clock frequencies, and an allocation of the instances (possibly using replication) onto machines. This solution should satisfy all constraints over a given time period while minimizing the energy consumption of the resources used. We consider two energy consumption models based on DVFS techniques, where the clock frequency of physical resources can be changed. For each allocation problem and each energy model, we prove deterministic approximation ratios on the consumed energy for algorithms that provide guaranteed failure probabilities, as well as an efficient heuristic whose energy ratio is not guaranteed.
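
    The two quantitative ingredients of this problem, the probability of keeping at least a minimal number of independent instances alive and a DVFS-style energy cost, can be sketched as follows. This is a rough Python illustration under assumed models (independent failures, cubic power-frequency scaling), not the approximation algorithms from the paper; all names are hypothetical.

        from typing import List

        def prob_at_least_k_alive(survival_probs: List[float], k: int) -> float:
            # Poisson-binomial tail via dynamic programming: probability that at
            # least k instances survive, assuming independent failures.
            dist = [1.0]                       # dist[j] = P(exactly j alive so far)
            for p in survival_probs:
                new = [0.0] * (len(dist) + 1)
                for j, q in enumerate(dist):
                    new[j] += q * (1.0 - p)    # this instance fails
                    new[j + 1] += q * p        # this instance survives
                dist = new
            return sum(dist[k:])

        def energy_joules(freq_ghz: float, base_freq_ghz: float,
                          base_power_w: float, hours: float) -> float:
            # Toy DVFS energy model: dynamic power assumed to scale roughly
            # with the cube of the clock frequency.
            power = base_power_w * (freq_ghz / base_freq_ghz) ** 3
            return power * hours * 3600.0

    A provider could then enumerate candidate (replica count, frequency) configurations, keep those whose survival probability meets the guaranteed success level of the chosen SLA pair, and select the one with the lowest energy; the paper's contribution is to do this with provable approximation ratios rather than brute force.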

    A survey of energy saving techniques for mobile computers

    Portable products such as pagers, cordless and digital cellular telephones, personal audio equipment, and laptop computers are increasingly being used. Because these devices are battery powered, reducing power consumption is vital. In this report we first give a survey of techniques for accomplishing energy reduction at the hardware level, such as low-voltage components, use of sleep or idle modes, dynamic control of the processor clock frequency, clocking regions, and disabling unused peripherals. System-design techniques include minimizing external accesses, minimizing logic state transitions, and system partitioning using application-specific coprocessors. Then we review energy-reduction techniques in the design of operating systems, including communication protocols, caching, scheduling, and QoS management. Finally, we give an overview of policies to optimize application code for energy consumption and make it aware of power-management functions. Applications play a critical role in the user's experience of a power-managed system; therefore, the application and the operating system must allow the user to control power management. Remarkably, it appears that some energy-preserving techniques not only lead to reduced energy consumption, but also to better performance.
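
    As a concrete illustration of one of the surveyed hardware-level techniques, dynamic control of the processor clock frequency amounts to picking the lowest speed that still meets the current demand. The sketch below is an assumed, simplified governor in Python, not a policy taken from the survey.

        def pick_frequency(current_util: float, current_freq_mhz: float,
                           available_freqs_mhz: list, target_util: float = 0.8) -> float:
            # Estimate the cycles actually needed, then choose the lowest
            # available frequency that keeps utilization at or below the target.
            demand_mhz = current_util * current_freq_mhz
            for f in sorted(available_freqs_mhz):
                if demand_mhz <= target_util * f:
                    return f
            return max(available_freqs_mhz)    # saturate at the top speed

    Because lowering the clock frequency also permits lowering the supply voltage, running slower during light load reduces power superlinearly, which is what makes this technique attractive on battery-powered devices.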

    Energy challenges for ICT

    The energy consumption from the expanding use of information and communications technology (ICT) is unsustainable with present drivers, and it will impact heavily on future climate change. However, ICT devices have the potential to contribute significantly to the reduction of CO2 emissions and to enhance resource efficiency in other sectors, e.g., transportation (through intelligent transportation, advanced driver assistance systems, and self-driving vehicles), heating (through smart building control), and manufacturing (through digital automation based on smart autonomous sensors). To address the energy sustainability of ICT and capture the full potential of ICT in resource efficiency, a multidisciplinary ICT-energy community needs to be brought together, covering devices, microarchitectures, ultra large-scale integration (ULSI), high-performance computing (HPC), energy harvesting, energy storage, system design, embedded systems, efficient electronics, static analysis, and computation. In this chapter, we introduce challenges and opportunities in this emerging field and a common framework to strive towards energy-sustainable ICT.