    Towards energy aware cloud computing application construction

    The energy consumption of cloud computing continues to be an area of significant concern as data centers continue to grow. This paper reports on an energy efficient interoperable cloud architecture realised as a cloud toolbox that focuses on reducing the energy consumption of cloud applications holistically across all deployment models. The architecture supports energy efficiency at service construction, deployment and operation. We discuss our practical experience during the implementation of an architectural component, the Virtual Machine Image Constructor (VMIC), required to facilitate the construction of energy aware cloud applications, and we carry out a performance evaluation of the component on a cloud testbed. The results show the performance of Virtual Machine construction, primarily limited by available I/O, to be adequate for agile, energy aware software development. We conclude that the implementation of the VMIC is feasible and incurs minimal performance overhead compared with the time taken by other aspects of the cloud application construction life-cycle, and we make recommendations for enhancing its performance.
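
    The abstract attributes construction cost mainly to disk I/O. As a rough illustration of how that step could be timed against the rest of the deployment life-cycle, the sketch below copies a base image and measures the elapsed time; the file names and the build_image function are hypothetical assumptions, not the actual VMIC interface.

    import shutil
    import time
    from pathlib import Path

    def build_image(base_image: Path, packages: list, output: Path) -> float:
        """Copy a base image and (notionally) inject packages; return elapsed seconds."""
        start = time.perf_counter()
        shutil.copyfile(base_image, output)  # the I/O-bound step that dominates cost
        _ = packages                         # package injection (e.g. via chroot) would go here
        return time.perf_counter() - start

    if __name__ == "__main__":
        Path("base.qcow2").write_bytes(b"\0" * 1024 * 1024)  # stand-in base image for the demo
        elapsed = build_image(Path("base.qcow2"), ["apache2"], Path("app.qcow2"))
        print(f"image construction took {elapsed:.3f}s")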

    PaaS-IaaS Inter-Layer Adaptation in an Energy-Aware Cloud Environment

    Cloud computing providers resort to a variety of techniques to reduce energy consumption at each level of the cloud computing stack. Most of these techniques consider resource-level energy optimization at the IaaS layer. This paper argues that further energy gains can be obtained by creating a cooperation between the PaaS layer (in charge of hosting the application/service) and the IaaS layer (in charge of handling the computing resources). It presents a novel method based on steering information and decision making that triggers the PaaS and IaaS layers to adapt their energy mode during service operation, thereby enabling the Cloud stack to actively adapt to changing situations. Experimental results demonstrate that such adaptation achieves dynamic energy management in each of the PaaS and IaaS cloud layers.
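
    A minimal sketch of the inter-layer idea follows: the PaaS layer passes steering information about an energy goal down to the IaaS layer, which picks an adaptation action. The event fields, thresholds and action names are illustrative assumptions, not the paper's actual decision logic.

    from dataclasses import dataclass

    @dataclass
    class SteeringEvent:
        """Information the PaaS layer passes down when an energy goal is at risk."""
        service_id: str
        measured_power_w: float
        power_goal_w: float

    def iaas_adapt(event: SteeringEvent) -> str:
        """Choose an IaaS-level energy mode from the PaaS steering information."""
        if event.measured_power_w > 1.2 * event.power_goal_w:
            return "consolidate-vms"   # pack VMs onto fewer hosts, power the rest down
        if event.measured_power_w > event.power_goal_w:
            return "scale-down-cpu"    # e.g. DVFS or a smaller VM flavour
        return "no-change"

    print(iaas_adapt(SteeringEvent("svc-1", measured_power_w=180.0, power_goal_w=140.0)))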

    Energy Efficiency Support through Intra-Layer Cloud Stack Adaptation

    Energy consumption is a key concern in cloud computing. This paper reports on a cloud architecture that supports energy efficiency at service construction, deployment, and operation. This is achieved through SaaS, PaaS and IaaS intra-layer self-adaptation, with each layer adapting in isolation. The self-adaptation mechanisms are discussed, as well as their implementation and evaluation. The experimental results show that the overall architecture is capable of adapting to meet the energy goals of applications on a per-layer basis.

    Energy efficiency embedded service lifecycle: Towards an energy efficient cloud computing architecture

    The paper argues the need for novel methods and tools to support software developers aiming to optimise energy efficiency and minimise the carbon footprint resulting from designing, developing, deploying and running software in Clouds, while maintaining other quality aspects of software at adequate and agreed levels. A cloud architecture to support energy efficiency at service construction, deployment, and operation is discussed, as well as its implementation and evaluation plans.

    Towards an interoperable energy efficient Cloud computing architecture-practice & experience

    The energy consumption of Cloud computing continues to be an area of significant concern as data centers continue to grow. This paper reports on an energy efficient interoperable Cloud architecture realized as a Cloud toolbox that focuses on reducing the energy consumption of Cloud applications holistically across all deployment models. The architecture supports energy efficiency at service construction, deployment, and operation, and interoperability through the use of the Open Virtualization Format (OVF) standard. We discuss our practical experience during implementation and present an initial performance evaluation of the architecture. The results show that implementing Cloud provider interoperability is feasible and incurs minimal performance overhead during application deployment in comparison to the time taken to instantiate Virtual Machines.
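
    Interoperability rests on the OVF descriptor that travels with the application. The sketch below emits a heavily simplified OVF-style document for a single virtual system; a real OVF envelope carries XML namespaces, a References section, a DiskSection and a full VirtualHardwareSection, so the element and attribute names used here are only indicative.

    import xml.etree.ElementTree as ET

    def make_ovf_descriptor(vm_name: str, cpu_count: int, memory_mb: int) -> str:
        """Build a much-simplified OVF-style descriptor for one virtual system."""
        envelope = ET.Element("Envelope")
        system = ET.SubElement(envelope, "VirtualSystem", {"id": vm_name})
        hardware = ET.SubElement(system, "VirtualHardwareSection")
        ET.SubElement(hardware, "Item", {"resourceType": "CPU", "quantity": str(cpu_count)})
        ET.SubElement(hardware, "Item", {"resourceType": "Memory", "quantityMB": str(memory_mb)})
        return ET.tostring(envelope, encoding="unicode")

    print(make_ovf_descriptor("energy-aware-app", cpu_count=2, memory_mb=4096))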

    Rapid and accurate energy models through calibration with IPMI and RAPL

    Energy consumption in Cloud and High Performance Computing platforms is a significant issue and affects aspects such as the cost of energy and the cooling of the data center. Host-level monitoring and prediction provide the groundwork for improving energy efficiency through the placement of workloads. Monitoring must be fast and efficient, without unnecessary overhead, to enable scalability. This precludes the use of Watt meters attached to every host, requiring alternative approaches such as integrated measurements and models. IPMI and RAPL are subject to error and partial measurement, which may be mitigated. Models allow for prediction and more responsive measures of power consumption, but require calibration. The causes of calibration error are discussed, along with mitigation strategies that avoid overly complicating the underlying model. An outcome is a Watt meter emulator that provides host-level power measurement along with estimated power consumption for a given workload, with an average error of 0.20 W.
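
    One common way to obtain such an emulator is to calibrate a linear power model against metered samples and then evaluate it from utilisation alone. The sketch below fits P(u) = p_idle + slope * u by least squares; the sample figures are invented for illustration, and the paper's calibration against IPMI and RAPL handles richer error sources than this.

    import numpy as np

    # Calibration samples: CPU utilisation (0..1) against metered power in watts.
    utilisation = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    measured_w = np.array([70.0, 95.0, 118.0, 142.0, 165.0])

    # Least-squares fit of P(u) = p_idle + slope * u.
    slope, p_idle = np.polyfit(utilisation, measured_w, deg=1)

    def emulated_watt_meter(cpu_utilisation):
        """Estimate host power draw from utilisation using the calibrated model."""
        return p_idle + slope * cpu_utilisation

    errors = np.abs(measured_w - emulated_watt_meter(utilisation))
    print(f"P(u) = {p_idle:.1f} + {slope:.1f}*u, mean absolute error {errors.mean():.2f} W")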

    Dynamic energy-aware scheduling for parallel task-based application in cloud computing

    Green Computing is a recent trend in computer science that tries to reduce the energy consumption and carbon footprint produced by computers on distributed platforms such as clusters, grids, and clouds. Traditional scheduling solutions attempt to minimize processing times without taking the energy cost into account. One method for reducing energy consumption is to provide scheduling policies that allocate tasks to specific resources, which affects both processing times and energy consumption. In this paper, we propose a real-time dynamic scheduling system to execute task-based applications efficiently on distributed computing platforms in order to minimize energy consumption. Since scheduling tasks on multiprocessors is a well-known NP-hard problem and computing an optimal solution is not feasible, we present a polynomial-time algorithm that combines a set of heuristic rules and a resource allocation technique to obtain good solutions on an affordable time scale. The proposed algorithm minimizes a multi-objective function that combines energy consumption and execution time according to an energy-performance importance factor provided by the resource provider or user, while also taking into account sequence-dependent setup times between tasks, setup and down times for virtual machines (VMs), and energy profiles for different architectures. A prototype implementation of the scheduler has been tested with different kinds of DAGs generated at random as well as with real task-based COMPSs applications. We have tested the system with different instance sizes and importance factors, and we have evaluated which combination provides a better solution and energy savings. Moreover, we have also evaluated the introduced overhead by measuring the time needed to obtain scheduling solutions for different numbers of tasks, kinds of DAG, and resources, concluding that our method is suitable for run-time scheduling. This work has been supported by the Spanish Government (contracts TIN2015-65316-P, TIN2012-34557, CSD2007-00050, CAC2007-00052 and SEV-2011-00067), by Generalitat de Catalunya (contract 2014-SGR-1051), by the European Commission (Euroserver project, contract 610456) and by Consejo Nacional de Ciencia y Tecnología of Mexico (special program for postdoctoral positions, BSC-CNS-CONACYT contract 290790, grant number 265937).
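
    The core of the heuristic is a per-task score that trades execution time against energy according to the importance factor. The sketch below shows a greedy placement using such a score; the resource names, the lack of normalisation between the two objectives, and the omission of setup and down times are simplifying assumptions rather than the paper's full algorithm.

    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        speed: float    # relative task throughput
        power_w: float  # average power draw while busy

    def schedule(task_lengths, resources, importance=0.5):
        """Greedily place each task where a weighted time/energy score is lowest.

        importance = 1.0 optimises only execution time, 0.0 only energy.
        """
        ready_at = {r.name: 0.0 for r in resources}  # when each resource becomes free
        plan = []
        for length in task_lengths:
            def score(r):
                runtime = length / r.speed
                finish = ready_at[r.name] + runtime   # time objective
                energy = runtime * r.power_w          # energy objective
                return importance * finish + (1 - importance) * energy
            best = min(resources, key=score)
            ready_at[best.name] += length / best.speed
            plan.append((length, best.name))
        return plan

    nodes = [Resource("fast-node", speed=2.0, power_w=220.0),
             Resource("efficient-node", speed=1.0, power_w=90.0)]
    print(schedule([4.0, 2.0, 6.0], nodes, importance=0.3))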

    Energy Prediction for Cloud Workload Patterns

    The excessive energy consumption of Cloud infrastructures has become one of the major cost factors that Cloud providers have to manage. In order to enhance the energy efficiency of Cloud resources, proactive and reactive management tools are used. However, these tools need to be supported with energy-awareness not only at the physical machine (PM) level but also at the virtual machine (VM) level in order to enhance decision-making. This paper introduces an energy-aware profiling model to identify the energy consumption of heterogeneous and homogeneous VMs running on the same PM, and presents an energy-aware prediction framework to forecast future VM energy consumption. The framework first predicts the VMs’ workload based on historical workload patterns using an Autoregressive Integrated Moving Average (ARIMA) model. The predicted VM workload is then correlated to the physical resources within the framework in order to obtain the predicted VM energy consumption. Compared with actual results obtained on a real Cloud testbed, the predicted results show that this energy-aware prediction framework achieves a Mean Percentage Error (MPE) of up to 2.58 for the VM workload prediction and up to −4.47 for the VM energy prediction under a periodic workload pattern.
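
    The two-step structure (workload forecast, then power attribution) can be illustrated with a small sketch, assuming statsmodels is available for the ARIMA fit. The utilisation history, the linear utilisation-to-power coefficients and the "actual" power values are invented for the demo; only the MPE metric and the forecast-then-correlate flow come from the abstract.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical per-interval CPU utilisation (%) history for one VM (periodic pattern).
    history = [20, 35, 60, 80, 60, 35, 20, 35, 60, 80, 60, 35, 20, 35, 60, 80]

    # Step 1: forecast the VM workload with an ARIMA model.
    workload_forecast = ARIMA(history, order=(2, 0, 1)).fit().forecast(steps=4)

    # Step 2: correlate predicted utilisation with power via a simple linear model.
    P_IDLE_SHARE_W, SLOPE_W_PER_PCT = 10.0, 0.9
    predicted_power = P_IDLE_SHARE_W + SLOPE_W_PER_PCT * np.asarray(workload_forecast)

    def mpe(actual, predicted):
        """Mean Percentage Error, the accuracy metric quoted in the abstract."""
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        return np.mean((actual - predicted) / actual) * 100

    actual_power = [64.0, 42.0, 28.0, 41.0]  # invented measurements for the demo
    print(f"predicted power: {np.round(predicted_power, 1)} W, MPE = {mpe(actual_power, predicted_power):.2f}%")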

    Energy-aware cost prediction and pricing of virtual machines in cloud computing environments

    With the increasing cost of electricity, Cloud providers consider energy consumption one of the major cost factors to be managed within their infrastructure. Consequently, various proactive and reactive management mechanisms are used to manage the cloud resources efficiently and to reduce energy consumption and cost. These mechanisms support energy-awareness at the level of Physical Machines (PMs) as well as Virtual Machines (VMs) to make corrective decisions. This paper introduces a novel Cloud system architecture that facilitates an energy aware and efficient cloud operation methodology, and presents a cost prediction framework to estimate the total cost of VMs based on their resource usage and power consumption. The evaluation on a Cloud testbed shows that the proposed energy-aware cost prediction framework is capable of predicting the workload and power consumption, and of estimating the total cost of the VMs, with good prediction accuracy for various Cloud application workload patterns. Furthermore, a set of energy-based pricing schemes is defined, intended to provide the necessary incentives to create an energy-efficient and economically sustainable ecosystem. Further evaluation results show that the adoption of energy-based pricing by cloud and application providers creates additional economic value for both under different market conditions.
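
    As a minimal illustration of energy-based pricing, the sketch below charges a VM a static resource fee plus a fee proportional to its predicted energy; the prices and the two-part tariff are assumptions, not one of the pricing schemes evaluated in the paper.

    def vm_total_cost(vcpu_hours, predicted_energy_kwh, vcpu_price=0.02, energy_price=0.25):
        """Total VM cost = static resource charge + energy-based charge (placeholder prices)."""
        return vcpu_hours * vcpu_price + predicted_energy_kwh * energy_price

    # A VM predicted to run 24 vCPU-hours and to draw 3.1 kWh over the billing period.
    print(f"estimated charge: {vm_total_cost(24, 3.1):.2f} (currency units)")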