15 research outputs found

    EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud

    Full text link
    Cloud computing has become a popular way to provision computing resources, under the virtual machine (VM) abstraction, for high performance computing (HPC) users to run their applications; such an environment is an HPC cloud. One challenge of energy-efficient resource allocation for VMs in an HPC cloud is the trade-off between minimizing the total energy consumption of physical machines (PMs) and satisfying Quality of Service requirements (e.g. performance). On one hand, cloud providers want to maximize their profit by reducing power cost (e.g. by running the smallest number of PMs). On the other hand, cloud customers (users) want the highest performance for their applications. In this paper we focus on the scenario in which the scheduler has no global information about future user jobs and applications: users request short-term resources at fixed start times with non-interruptible durations. We propose a new allocation heuristic, Energy-aware and Performance-per-watt oriented Best-fit (EPOBF), that uses a performance-per-watt metric (e.g. maximum MIPS per Watt) to choose the most energy-efficient PM for each VM. Using information from Feitelson's Parallel Workload Archive to model HPC jobs, we compare EPOBF to state-of-the-art heuristics on heterogeneous PMs (each with a multicore CPU). Simulations show that EPOBF can significantly reduce total energy consumption in comparison with state-of-the-art allocation heuristics. Comment: 10 pages, in Proceedings of the International Conference on Advanced Computing and Applications, Journal of Science and Technology, Vietnamese Academy of Science and Technology, ISSN 0866-708X, Vol. 51, No. 4B, 201
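    The core of a performance-per-watt best-fit heuristic like EPOBF can be sketched in a few lines: among the PMs that can host the VM, pick the one with the highest MIPS per Watt. This is an illustrative sketch, not the paper's implementation; the PM and VM attributes below are assumed placeholders.

```python
# Hypothetical EPOBF-style allocation sketch: place each VM on the feasible
# PM with the best performance-per-watt ratio. Attribute names are assumptions.

def epobf_allocate(vm, pms):
    """Place vm on the feasible PM with the highest MIPS per Watt."""
    feasible = [pm for pm in pms
                if pm["free_mips"] >= vm["mips"] and pm["free_cores"] >= vm["cores"]]
    if not feasible:
        return None  # no PM can host this VM
    # Performance-per-watt metric: total MIPS divided by peak power draw.
    best = max(feasible, key=lambda pm: pm["mips"] / pm["watts"])
    best["free_mips"] -= vm["mips"]
    best["free_cores"] -= vm["cores"]
    return best["name"]

pms = [
    {"name": "pm1", "mips": 4000, "watts": 200, "free_mips": 4000, "free_cores": 4},
    {"name": "pm2", "mips": 3000, "watts": 100, "free_mips": 3000, "free_cores": 4},
]
vm = {"mips": 1000, "cores": 1}
print(epobf_allocate(vm, pms))  # pm2 wins: 30 MIPS/W vs 20 MIPS/W
```

Note that, unlike plain best-fit, the tie-breaking criterion here is energy efficiency rather than remaining capacity.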

    Energy efficiency embedded service lifecycle: Towards an energy efficient cloud computing architecture

    Get PDF
    The paper argues the need for novel methods and tools to support software developers aiming to optimise energy efficiency and minimise the carbon footprint resulting from designing, developing, deploying and running software in Clouds, while maintaining other quality aspects of the software at adequate and agreed levels. A cloud architecture to support energy efficiency at service construction, deployment, and operation is discussed, as well as its implementation and evaluation plans.

    Software testing in reducing the energy consumption of information systems

    Get PDF
    Climate change can no longer be ignored. Since humanity acquired the knowledge of electricity, it has shaped its whole way of living around it. The generation of electricity relies, in part, on the combustion of materials that release CO2 into the atmosphere which, given the quantities emitted, has the potential to degrade the environment. In some geographic regions, moreover, access to electricity is scarce. It is in this context that the profession of software testing emerges. This article seeks to quantify the contribution of software testing to the development of applications with lower energy consumption. To achieve this goal, a methodology for measuring consumption is proposed and applied, and a mathematical formula is defined to assess the economic viability of the tests. The consequences of software optimized for its energy consumption will be threefold: environmental, by reducing CO2 emissions; human, by allowing more people to use the already installed power capacity; and financial, by directly reducing the cost of consumption.

    Modeling Data Center Building Blocks for Energy-efficiency and Thermal Simulations

    Get PDF
    In this paper we present a concept and specification of Data Center Efficiency Building Blocks (DEBBs), which represent hardware components of a data center complemented by descriptions of their energy efficiency. The proposed building blocks contain hardware and thermodynamic models that can be applied to simulate a data center and to evaluate its energy efficiency. DEBBs are available in an open repository being built by the CoolEmAll project. In the paper we illustrate the concept with an example of a DEBB defined for the RECS multi-server system, including models of its power usage and thermodynamic properties. We also show how these models are affected by the specific architecture of the modeled hardware and by differences between various classes of applications. The proposed models are verified by comparison with measurements on a real infrastructure. Finally, we demonstrate how DEBBs are used in data center simulations.
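    The kind of per-component power model a DEBB might carry can be illustrated with the widely used linear utilisation model. This is only a sketch of the general idea; the coefficients below are made-up placeholders, not CoolEmAll data for the RECS system.

```python
# A common linear node power model, shown only to illustrate the kind of
# model a DEBB could embed. The idle/peak wattages are assumed values.

def node_power(util, p_idle, p_max):
    """Power draw (W) at CPU utilisation util in [0, 1]."""
    return p_idle + util * (p_max - p_idle)

print(node_power(0.5, 60.0, 180.0))  # 120.0 W at half load
```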

    Towards an interoperable energy efficient Cloud computing architecture-practice & experience

    Get PDF
    The energy consumption of Cloud computing continues to be an area of significant concern as data centers continue to grow. This paper reports on an energy efficient interoperable Cloud architecture, realized as a Cloud toolbox, that focuses on reducing the energy consumption of Cloud applications holistically across all deployment models. The architecture supports energy efficiency at service construction, deployment, and operation, and interoperability through the use of the Open Virtualization Format (OVF) standard. We discuss our practical experience during implementation and present an initial performance evaluation of the architecture. The results show that implementing Cloud provider interoperability is feasible and incurs minimal performance overhead during application deployment in comparison to the time taken to instantiate Virtual Machines.

    EASAHE, a job scheduling algorithm for data processing frameworks with energy-efficiency concerns

    Get PDF
    Frameworks for processing massive data in distributed environments, such as Spark or Dask, allow programmers to process massive amounts of data on large clusters. Current frameworks use simple algorithms for scheduling data processing jobs in distributed computing, relying on heuristics that do not take workload characteristics into account. Recent work explores the efficient scheduling of data processing jobs in distributed computing. In this article we propose a new job scheduling algorithm for massive data processing frameworks with energy-efficiency concerns. An implementation in a simulator and an evaluation using traces of real and synthetic Spark executions show that the algorithm can reduce energy consumption by up to 11.5% and job execution time by up to 11.9%, without a significant impact on the time spent scheduling.

    Energy-aware task scheduling in data centers using an application signature

    Get PDF
    Data centers are power-hungry facilities. Energy-aware task scheduling approaches are of utmost importance for improving energy savings in data centers, but they need to know beforehand the energy consumption of the applications that will run on the servers. This is usually obtained through full profiling of the applications, which is not feasible for long-running applications due to their long execution times. In the present work we use an application signature that allows the energy to be estimated without executing the application completely. We use different scheduling approaches together with the information from the application signature to improve the makespan of the scheduling process and thereby improve energy savings in data centers. We evaluate the accuracy of using the application signature by comparing against an oracle method, obtaining an error below 1.5% and compression ratios of around 39.7 to 45.8.
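    The two ingredients described above, extrapolating a job's energy from a short profiled signature and then scheduling with the estimates to shorten the makespan, can be sketched as follows. The linear extrapolation and the longest-first assignment are assumptions for illustration, not the paper's exact models.

```python
# Sketch of signature-based scheduling: scale energy measured over a short
# profiled fraction of a run to the full run, then assign jobs to servers
# longest-first (LPT) to balance load and shorten the makespan.

def estimate_energy(sig_joules, sig_fraction):
    """Extrapolate full-run energy from a profiled fraction (assumed linear)."""
    return sig_joules / sig_fraction

def lpt_schedule(job_energies, n_servers):
    """Longest-processing-time assignment; returns per-server energy load."""
    loads = [0.0] * n_servers
    for e in sorted(job_energies, reverse=True):
        loads[loads.index(min(loads))] += e  # put job on least-loaded server
    return loads

# 120 J measured over the first 10% of a run extrapolates to 1200 J total.
estimates = [estimate_energy(120.0, 0.1), 800.0, 700.0, 500.0]
print(lpt_schedule(estimates, 2))
```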

    Maximizing Throughput of Overprovisioned HPC Data Centers Under a Strict Power Budget

    Full text link
    Building future-generation supercomputers while constraining their power consumption is one of the biggest challenges faced by the HPC community. For example, the US Department of Energy has set a goal of 20 MW for an exascale (10^18 flops) supercomputer. To realize this goal, a lot of research is being done to revolutionize hardware design to build power-efficient computers and network interconnects. In this work, we propose a software-based online resource management system that leverages a hardware-facilitated capability to constrain the power consumption of each node in order to optimally allocate power and nodes to a job. Our scheme uses this hardware capability in conjunction with an adaptive runtime system that can dynamically change the resource configuration of a running job, allowing our resource manager to re-optimize allocation decisions as new jobs arrive or a running job terminates. We also propose a performance modeling scheme that estimates the essential power characteristics of a job at any scale. The proposed online resource manager uses these performance characteristics to make scheduling and resource allocation decisions that maximize the job throughput of the supercomputer under a given power budget. We demonstrate the benefits of our approach using a mix of jobs with different power-response characteristics. We show that with a power budget of 4.75 MW, we can obtain up to a 5.2X improvement in job throughput when compared with the power-unaware SLURM scheduling policy. We corroborate our results with real experiments on a relatively small-scale cluster, in which we obtain a 1.7X improvement.
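    One way to picture allocating power to jobs under a strict budget is a greedy marginal-benefit rule: give every job its minimum cap first, then spend the remaining watts where they buy the most throughput. This is a simplified sketch with an assumed linear power-response model per job, not the paper's performance model.

```python
# Illustrative power-budget allocation sketch: satisfy minimum caps, then
# spend leftover budget on the most power-responsive jobs first. The job
# tuples (p_min, p_max, throughput_per_watt) are assumed, not from the paper.

def allocate_power(jobs, budget):
    """jobs: {name: (p_min, p_max, throughput_per_watt)}; returns power caps."""
    caps = {name: p_min for name, (p_min, _, _) in jobs.items()}
    remaining = budget - sum(caps.values())
    if remaining < 0:
        raise ValueError("budget cannot cover minimum power caps")
    # Greedy: jobs whose throughput responds most per extra watt go first.
    for name, (p_min, p_max, tpw) in sorted(jobs.items(),
                                            key=lambda kv: -kv[1][2]):
        extra = min(p_max - p_min, remaining)
        caps[name] += extra
        remaining -= extra
    return caps

jobs = {"A": (50, 120, 2.0), "B": (40, 100, 0.5)}
print(allocate_power(jobs, 150))  # A absorbs the slack: {'A': 110, 'B': 40}
```

An online manager would rerun this whenever a job arrives or terminates, which mirrors the re-optimization loop the abstract describes.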

    An energy-consumption-aware resource selection approach based on network topology, file sizes, and equipment power

    Get PDF
    Dissertação (mestrado) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2015. Recent advances in High Performance Computing (HPC) have led to a wide range of new possibilities for research. In this context, modern parallel and distributed architectures have presented a steady increase in their processing capabilities. However, such growth is usually followed by an increase in energy consumption. 
Because of that, the research community has been focusing on techniques to reduce energy consumption on such platforms. HPC architectures are now widely used in business and academic environments when high computing power is crucial. Recently, legacy infrastructures have been adapted to the cloud computing model, which provides resources on demand, such as infrastructure, software, or platform. In this work we propose a generic energy-efficient scheduling approach that improves the energy efficiency of heterogeneous high-performance environments by selecting the least costly resources. The proposed approach takes into consideration the cost of data transfers as well as the state and energy efficiency of computing nodes. After carrying out several experiments in a simulated cloud environment, we concluded that, in some cases, the proposed approach achieves considerably better energy efficiency than other existing approaches in the literature.
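    The selection idea above, weighing data-transfer cost against the state and efficiency of each node, can be sketched as an estimate of total energy per candidate node, picking the cheapest. All coefficients and attribute names below are illustrative assumptions, not the dissertation's model.

```python
# Sketch of energy-aware resource selection: total energy for a node is the
# energy to move the input data there, plus the energy to run the task,
# plus a wake-up cost if the node is powered off. All values are assumed.

def total_energy(node, file_gb, task_flop):
    transfer_j = file_gb * node["joules_per_gb"]   # depends on network path
    runtime_s = task_flop / node["flops"]
    compute_j = runtime_s * node["watts"]
    wake_j = node.get("wake_joules", 0) if node["off"] else 0
    return transfer_j + compute_j + wake_j

def select_node(nodes, file_gb, task_flop):
    return min(nodes, key=lambda n: total_energy(n, file_gb, task_flop))["name"]

nodes = [
    {"name": "n1", "joules_per_gb": 500, "flops": 1e9, "watts": 120, "off": False},
    {"name": "n2", "joules_per_gb": 100, "flops": 8e8, "watts": 90, "off": True,
     "wake_joules": 2000},
]
# Large input: the cheaper network path to n2 outweighs its wake-up cost.
print(select_node(nodes, 10, 1e11))
```

For small inputs the transfer term shrinks and the already-powered node can win instead, which is why the approach considers node state and topology together.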

    Developing New Power Management and High-Reliability Schemes in Data-Intensive Environment

    Get PDF
    With the increasing popularity of data-intensive applications as well as large-scale computing and storage systems, current data centers and supercomputers often deal with extremely large data sets. To store and process this huge amount of data reliably and energy-efficiently, system designers must take three major challenges into consideration. Firstly, power conservation: multicore processors (CMPs) have become mainstream in the current processor market because of the tremendous improvement in transistor density and the advancement of semiconductor technology. However, the increasing number of transistors on a single die or chip reveals a super-linear growth in power consumption [4]. Thus, how to balance system performance and power saving is a critical issue that needs to be solved effectively. Secondly, system reliability: reliability is a critical metric in the design and development of replication-based big data storage systems such as the Hadoop File System (HDFS). In a system with thousands of machines and storage devices, even infrequent failures become likely. In the Google File System, the annual disk failure rate is 2.88%, which means one would expect to see 8,760 disk failures in a year. Unfortunately, given an increasing number of node failures, how often a cluster starts losing data when being scaled out is not well investigated. Thirdly, energy efficiency: the fast processing speeds of the current generation of supercomputers provide a great convenience to scientists dealing with extremely large data sets. The next generation of exascale supercomputers could provide accurate simulation results for the automobile industry, the aerospace industry, and even nuclear fusion reactors for the very first time. However, the energy cost of supercomputing is extremely high, with a total electricity bill of 9 million dollars per year. 
Thus, conserving energy and increasing the energy efficiency of supercomputers has become critical in recent years. This dissertation proposes new solutions to address the above three key challenges for current large-scale storage and computing systems. Firstly, we propose a novel power management scheme called MAR (model-free, adaptive, rule-based) for multiprocessor systems to minimize CPU power consumption subject to performance constraints. By introducing a new I/O wait status, MAR is able to accurately describe the relationship between core frequencies, performance, and power consumption. Moreover, we adopt a model-free control method to filter out the I/O wait status from the traditional CPU busy/idle model in order to achieve fast responsiveness to burst situations and take full advantage of power saving. Our extensive experiments on a physical testbed demonstrate that, for SPEC benchmarks and data-intensive (TPC-C) benchmarks, an MAR prototype system achieves 95.8-97.8% of the accuracy of the ideal power-saving strategy calculated offline. Compared with baseline solutions, MAR is able to save 12.3-16.1% more power while maintaining a comparable performance loss of about 0.78-1.08%. In addition, further simulation results indicate that our design achieved 3.35-14.2% more power-saving efficiency and 4.2-10.7% less performance loss under various CMP configurations compared with baseline approaches such as LAST, Relax, PID, and MPC. Secondly, we create a new reliability model by incorporating the probability of replica loss to investigate the system reliability of multi-way declustering data layouts and analyze their potential parallel recovery possibilities. Our comprehensive simulation results on Matlab and SHARPE show that the shifted declustering data layout outperforms the random declustering layout in a multi-way replication scale-out architecture, in terms of data loss probability and system reliability, by up to 63% and 85% respectively. 
Our study of both 5-year and 10-year system reliability with various recovery bandwidth settings shows that the shifted declustering layout surpasses the two baseline approaches in both cases, consuming up to 79% and 87% less recovery bandwidth than copyset, as well as 4.8% and 10.2% less recovery bandwidth than the random layout. Thirdly, we develop a power-aware job scheduler by applying a rule-based control method and taking into account real-world power and speedup profiles to improve power efficiency while adhering to predetermined power constraints. Intensive simulation results show that our proposed method achieves the maximum utilization of computing resources compared with baseline scheduling algorithms while keeping the energy cost under the threshold. Moreover, by introducing a Power Performance Factor (PPF) based on the real-world power and speedup profiles, we are able to increase power efficiency by up to 75%.
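    The third contribution, a power-aware scheduler that keeps the cluster under a power threshold, can be pictured with a simple admission rule: admit queued jobs in order of an efficiency score while the projected power draw stays under the cap. This sketch uses a made-up job list and treats PPF as a per-job number; the dissertation's actual control rules are more elaborate.

```python
# Hedged sketch of power-constrained admission: admit jobs, higher
# power-performance factor (PPF) first, while total power stays under
# the threshold. Job tuples (name, watts, ppf) are assumed examples.

def schedule(queue, power_threshold, running_power=0.0):
    """queue: list of (name, watts, ppf); returns names admitted now."""
    admitted = []
    # Higher PPF first: more useful work per watt consumed.
    for name, watts, ppf in sorted(queue, key=lambda j: -j[2]):
        if running_power + watts <= power_threshold:
            admitted.append(name)
            running_power += watts
    return admitted

queue = [("sim", 300, 1.8), ("etl", 200, 0.9), ("train", 450, 2.4)]
print(schedule(queue, 800))  # 'train' and 'sim' fit; 'etl' would exceed 800 W
```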