
    A Survey of Green Networking Research

    Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of its potential economic benefits and expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy awareness in the design, devices, and protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface Proxying, (iii) Energy-aware Infrastructures, and (iv) Energy-aware Applications. In this work, we not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for future research.
    Index Terms: Green Networking; Wired Networks; Adaptive Link Rate; Interface Proxying; Energy-aware Infrastructures; Energy-aware Applications.

    JEERP: Energy Aware Enterprise Resource Planning

    Ever-increasing energy costs and saving requirements, especially in enterprise contexts, are pushing the limits of Enterprise Resource Planning to better account for energy, with component-level asset granularity. Using an application-oriented approach, we discuss the different aspects involved in designing Energy Aware ERPs and we show a prototypical open-source implementation based on the Dog Domotic Gateway and the Oratio ERP.

    Energy-Aware Cloud Management through Progressive SLA Specification

    Novel energy-aware cloud management methods dynamically reallocate computation across geographically distributed data centers to leverage regional electricity price and temperature differences. As a result, a managed VM may suffer occasional downtimes. Current cloud providers only offer high-availability VMs, without enough flexibility to apply such energy-aware management. In this paper we show how to analyse past traces of dynamic cloud management actions, based on electricity prices and temperatures, to estimate VM availability and price values. We propose a novel SLA specification approach for offering VMs with different availability and price values guaranteed over multiple SLAs, to enable flexible energy-aware cloud management. We determine the optimal number of such SLAs as well as their guaranteed availability and price values. We evaluate our approach in a user SLA selection simulation using Wikipedia and Grid'5000 workloads. The results show higher customer conversion and 39% average energy savings per VM.
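    As an illustration of the general idea (not the paper's actual method), a minimal sketch of deriving an availability figure from a trace of management-induced downtimes could look as follows; the trace format and function name are assumptions introduced here.

```python
# Illustrative sketch: estimate VM availability from a trace of
# management-induced downtime intervals. The trace format (list of
# (start, end) tuples in seconds over an observation window) is assumed.

def estimate_availability(downtimes, window_seconds):
    """Return the fraction of the observation window the VM was up."""
    down = sum(end - start for start, end in downtimes)
    return 1.0 - down / window_seconds

# Example: two 10-minute downtimes during a 30-day window.
trace = [(3600, 4200), (86400, 87000)]          # seconds since window start
print(round(estimate_availability(trace, 30 * 24 * 3600), 6))  # ~0.999537
```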

    Energy-aware virtual machine consolidation for cloud data centers

    One of the issues in virtual machine consolidation (VMC) in cloud data centers is categorizing different workloads to classify the state of physical servers. In this paper, we propose a new host load categorization scheme for an energy-performance VMC framework to reduce energy consumption while meeting quality of service (QoS) requirements. Specifically, underloaded hosts are further classified into three states, i.e., underloaded, normal, and critical, by applying an underload detection algorithm. We also design overload detection and virtual machine (VM) selection policies. The simulation results show that the proposed policies outperform the existing policies in CloudSim in terms of both energy and service level agreement violation (SLAV) reduction.
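    For intuition only, a threshold-based host state classifier of the kind such frameworks build on might be sketched as below; the threshold values and state names are assumptions, not the ones used in the paper.

```python
# Illustrative sketch: classify a host's load state from CPU utilization
# using simple thresholds. Threshold values and state names are assumed.

def classify_host(cpu_utilization, low=0.2, normal=0.7, high=0.9):
    """Map a utilization in [0, 1] to a coarse load state."""
    if cpu_utilization >= high:
        return "overloaded"      # candidate source for VM migration
    if cpu_utilization >= normal:
        return "critical"        # nearly full; avoid placing new VMs here
    if cpu_utilization >= low:
        return "normal"          # leave as-is
    return "underloaded"         # candidate for consolidation / switch-off

print(classify_host(0.05))  # underloaded
print(classify_host(0.95))  # overloaded
```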

    Energy-aware dynamic pricing model for cloud environments

    Energy consumption is a critical operational cost for Cloud providers. However, as commercial providers typically use fixed pricing schemes that are oblivious to the energy costs of running virtual machines, clients are not charged according to their actual energy impact. Some works have proposed energy-aware cost models that are able to capture each client’s real energy usage. However, those models cannot be naturally used for pricing Cloud services, as the energy cost is calculated after the termination of the service, and it depends on decisions taken by the provider, such as the actual placement of the client’s virtual machines. For those reasons, a client cannot estimate in advance how much it will pay. This paper presents a pricing model for virtualized Cloud providers that dynamically derives the energy costs per allocation unit and per work unit for each time period. These costs account for the provider’s static and dynamic energy consumption by sharing it out according to the virtual resource allocation and the real resource usage of running virtual machines for the corresponding time period. Clients that arrive during that period can use these costs as a baseline to estimate their expenses in advance as a function of the number of requested allocation and work units. Our results show that providers can obtain revenue comparable to traditional pricing schemes, while offering clients prices that are more proportional to usage than fixed-price models.
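    A minimal numerical sketch of such a per-period unit-cost derivation, under assumed (not the paper's) formulas where the static energy cost is split across allocated units and the dynamic cost across executed work units, could look like this.

```python
# Illustrative sketch (assumed formulas): split a period's static energy
# cost across allocated resource units and its dynamic energy cost across
# the work units actually executed during that period.

def derive_unit_costs(static_cost, dynamic_cost, allocated_units, work_units):
    """Return (cost per allocation unit, cost per work unit) for a period."""
    alloc_cost = static_cost / allocated_units if allocated_units else 0.0
    work_cost = dynamic_cost / work_units if work_units else 0.0
    return alloc_cost, work_cost

# Example: 12 EUR static and 8 EUR dynamic energy cost in the period,
# 48 allocated VM-hours and 400 completed work units.
per_alloc, per_work = derive_unit_costs(12.0, 8.0, 48, 400)
print(per_alloc, per_work)            # 0.25 0.02

# A newly arrived client requesting 4 VM-hours and expecting 50 work units
# could estimate its energy charge in advance:
print(4 * per_alloc + 50 * per_work)  # 2.0
```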

    Energy-aware Load Balancing Policies for the Cloud Ecosystem

    The energy consumption of computer and communication systems does not scale linearly with the workload: a system uses a significant amount of energy even when idle or lightly loaded. A widely reported solution to resource management in large data centers is to concentrate the load on a subset of servers and, whenever possible, switch the rest of the servers to one of the possible sleep states. We propose a reformulation of the traditional concept of load balancing aimed at optimizing the energy consumption of a large-scale system: distribute the workload evenly to the smallest set of servers operating at an optimal energy level, while observing QoS constraints such as response time. Our model applies to clustered systems and requires that the demand for system resources increase at a bounded rate in each reallocation interval. In this paper we report the VM migration costs for application scaling.
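    As a rough illustration of concentrating load on the smallest server set (an assumed first-fit-decreasing policy, not the paper's algorithm), a consolidation step might be sketched as follows.

```python
# Illustrative sketch (assumed policy): greedily pack VM loads onto the
# fewest servers kept near a target utilization; remaining servers could
# then be switched to a sleep state.

def consolidate(vm_loads, server_capacity, target_utilization=0.8):
    """Return per-server load lists after first-fit-decreasing packing."""
    budget = server_capacity * target_utilization
    servers = []                         # each entry: list of VM loads
    for load in sorted(vm_loads, reverse=True):
        for srv in servers:
            if sum(srv) + load <= budget:
                srv.append(load)
                break
        else:
            servers.append([load])       # open (wake up) another server
    return servers

print(consolidate([0.3, 0.5, 0.2, 0.4, 0.1], server_capacity=1.0))
# [[0.5, 0.3], [0.4, 0.2, 0.1]] -> 2 active servers instead of 5
```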