
    Understanding the Impact of Cloud Computing Patterns on Performance and Energy Consumption

    ABSTRACT Cloud Patterns are abstract solutions to recurrent design problems in the cloud. Previous work has shown that these patterns can improve the Quality of Service (QoS) of cloud applications, but their impact on energy consumption is still unknown. Yet, energy consumption is the biggest challenge that cloud computing systems (the backbone of the high-tech economy) face today. In fact, 10% of the world's electricity is now consumed by servers, laptops, tablets, and smartphones. Energy consumption has complex dependencies on the hardware platform and the multiple software layers: the hardware, its firmware, the operating system, and the various software components used by a cloud application all contribute to determining the energy footprint. Hence, improving energy efficiency requires addressing every hardware and software layer of the cloud system, including the applications deployed in it, whose internal design can optimize hardware utilization and lower energy consumption.
    In this work, we conduct an empirical study on two multi-processing and multi-threaded applications deployed in the cloud, to investigate the individual and combined impact of six cloud patterns (Local Database Proxy, Local Sharding-Based Router, Priority Queue, Competing Consumers, Gatekeeper, and Pipes and Filters) on energy consumption. We measure energy consumption using Power-API, an application programming interface (API) written in Java that monitors the energy consumed at the process level. Results show that cloud patterns can effectively reduce the energy consumption of a cloud application, but not in all cases. In general, there appears to be a trade-off between an improved response time of the application and its energy consumption. Moreover, our findings show that migrating an application to a microservices architecture can improve its performance while significantly reducing its energy consumption. We summarize our contributions in the form of guidelines that developers and software architects can follow during the design and implementation of cloud-based applications.
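
As a rough illustration of the process-level measurement described above, the sketch below integrates sampled power over the runtime of a workload to approximate its energy consumption. The PowerSampler interface, its constant-power stub, and the sampling loop are hypothetical placeholders, not Power-API's actual interface.

```java
// Minimal sketch of process-level energy measurement, in the spirit of the
// Power-API approach described above. The PowerSampler interface and its
// constant-power stub are hypothetical placeholders, not Power-API's real API.
import java.util.concurrent.TimeUnit;

public class EnergyProbe {

    /** Hypothetical source of instantaneous power (watts) for the current process. */
    interface PowerSampler {
        double currentPowerWatts();
    }

    /**
     * Runs a workload while periodically sampling power, and integrates the
     * samples over time to approximate the energy consumed (joules).
     */
    static double measureJoules(Runnable workload, PowerSampler sampler,
                                long samplePeriodMillis) throws InterruptedException {
        final double[] joules = {0.0};
        Thread meter = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    double watts = sampler.currentPowerWatts();
                    TimeUnit.MILLISECONDS.sleep(samplePeriodMillis);
                    joules[0] += watts * (samplePeriodMillis / 1000.0); // P * dt
                }
            } catch (InterruptedException ignored) {
                // measurement stopped
            }
        });
        meter.start();
        workload.run(); // e.g. one variant of the application (Priority Queue, Competing Consumers, ...)
        meter.interrupt();
        meter.join();
        return joules[0];
    }

    public static void main(String[] args) throws InterruptedException {
        // Stub sampler with an assumed constant 15 W draw, for illustration only.
        PowerSampler stub = () -> 15.0;
        double energy = measureJoules(() -> {
            long sum = 0;
            for (int i = 0; i < 50_000_000; i++) sum += i; // placeholder workload
        }, stub, 50);
        System.out.printf("Estimated energy: %.2f J%n", energy);
    }
}
```

Comparing such estimates across pattern variants of the same application is what makes the performance/energy trade-off visible.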

    On the feasibility of collaborative green data center ecosystems

    The increasing awareness of the impact of the IT sector on the environment, together with economic factors, has fueled many research efforts to reduce the energy expenditure of data centers. Recent work proposes to achieve additional energy savings by exploiting service workloads in concert with customers, and to reduce data centers' carbon footprints by adopting demand-response mechanisms between data centers and their energy providers. In this paper, we discuss the incentives that customers and data centers have to adopt such measures, and we propose a new service type and pricing scheme that is economically attractive and technically realizable. Simulation results based on real measurements confirm that our scheme can achieve additional energy savings while preserving service performance and the interests of data centers and customers.
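
The following sketch illustrates the general idea behind a demand-response style tariff: customers whose workloads can be deferred to off-peak (or low-carbon) periods pay less per kWh. The rates, the shiftable-fraction model, and the jobCost helper are illustrative assumptions, not the pricing scheme proposed in the paper.

```java
// Minimal sketch of a generic demand-response style discount. All rates and
// the discount mechanism are illustrative assumptions.
public class DemandResponsePricing {

    /** Cost of a job, given its energy use and how much of it can be shifted off-peak. */
    static double jobCost(double energyKWh, double shiftableFraction,
                          double peakRate, double offPeakRate) {
        double shifted = energyKWh * shiftableFraction; // deferred to off-peak periods
        double fixed = energyKWh - shifted;             // must run at peak
        return fixed * peakRate + shifted * offPeakRate;
    }

    public static void main(String[] args) {
        double peakRate = 0.20, offPeakRate = 0.08;     // assumed $/kWh
        double rigid = jobCost(100, 0.0, peakRate, offPeakRate);
        double flexible = jobCost(100, 0.6, peakRate, offPeakRate);
        System.out.printf("Rigid job: $%.2f, flexible job: $%.2f (%.0f%% savings)%n",
                rigid, flexible, 100 * (1 - flexible / rigid));
    }
}
```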

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide into the atmosphere, and this contribution is expected to increase in the coming years. This has encouraged the development of techniques to reduce the energy consumption and the environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of the hardware equipment of data centers (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is faced through a holistic approach that includes not only the aforementioned techniques but also intelligent and unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as a driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements this strategy, and we propose design guidelines to accomplish each step of the strategy, referring to related achievements and enumerating the main challenges that must still be solved.

    A methodology for full-system power modeling in heterogeneous data centers

    The need for energy awareness in current data centers has encouraged the use of power modeling to estimate their power consumption. However, existing models present noticeable limitations that make them application-dependent, platform-dependent, inaccurate, or computationally complex. In this paper, we propose a platform- and application-agnostic methodology for full-system power modeling in heterogeneous data centers that overcomes those limitations. It derives a single model per platform, which works with high accuracy for heterogeneous applications with different patterns of resource usage and energy consumption, by systematically selecting a minimum set of resource usage indicators and extracting complex relations among them that capture the impact on energy consumption of all the resources in the system. We demonstrate our methodology by generating power models for heterogeneous platforms with very different power consumption profiles. Our validation experiments with real Cloud applications show that such models provide high accuracy (around 5% average estimation error). This work is supported by the Spanish Ministry of Economy and Competitiveness under contract TIN2015-65316-P, by the Generalitat de Catalunya under contract 2014-SGR-1051, and by the European Commission under FP7-SMARTCITIES-2013 contract 608679 (RenewIT) and FP7-ICT-2013-10 contracts 610874 (ASCETiC) and 610456 (EuroServer).
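
As a simplified illustration of a full-system power model driven by resource usage indicators, the sketch below combines CPU, memory, disk, and network utilization into a power estimate and reports the mean relative error against measured samples. The linear form, the coefficients, and the sample data are assumptions for illustration; the paper's methodology derives richer, platform-specific relations among systematically selected indicators.

```java
// Minimal sketch of a full-system power model over resource usage indicators.
// Coefficients and sample readings are illustrative assumptions only.
public class PowerModel {

    // Assumed per-platform coefficients (watts): idle power plus a weight per indicator.
    static final double IDLE = 60.0, W_CPU = 90.0, W_MEM = 15.0, W_DISK = 10.0, W_NET = 8.0;

    /** Estimates full-system power (watts) from utilization indicators in [0, 1]. */
    static double estimateWatts(double cpu, double mem, double disk, double net) {
        return IDLE + W_CPU * cpu + W_MEM * mem + W_DISK * disk + W_NET * net;
    }

    /** Mean relative estimation error against measured power samples. */
    static double meanRelativeError(double[][] usage, double[] measuredWatts) {
        double total = 0;
        for (int i = 0; i < measuredWatts.length; i++) {
            double est = estimateWatts(usage[i][0], usage[i][1], usage[i][2], usage[i][3]);
            total += Math.abs(est - measuredWatts[i]) / measuredWatts[i];
        }
        return total / measuredWatts.length;
    }

    public static void main(String[] args) {
        double[][] usage = {{0.2, 0.3, 0.1, 0.0}, {0.8, 0.5, 0.2, 0.4}};
        double[] measured = {85.0, 150.0}; // hypothetical wattmeter readings
        System.out.printf("Mean relative error: %.1f%%%n",
                100 * meanRelativeError(usage, measured));
    }
}
```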

    A Compression Technique Exploiting References for Data Synchronization Services

    In a variety of network applications, there exists a significant amount of shared data between two end hosts. Examples include data synchronization services that replicate data from one node to another. Given that shared data may have a high correlation with new data to transmit, we ask how such shared data can best be utilized to improve the efficiency of data transmission. To answer this, we develop an encoding technique, SyncCoding, that effectively replaces bit sequences of the data to be transmitted with pointers to their matching bit sequences in the shared data, so-called references. By doing so, SyncCoding can reduce data traffic, speed up data transmission, and save energy during transmission. Our evaluations of SyncCoding implemented in Linux show that it outperforms existing popular encoding techniques: Brotli, LZMA, Deflate, and Deduplication. The gains of SyncCoding over these techniques in terms of data size after compression are about 12.4%, 20.1%, 29.9%, and 61.2% in a cloud storage scenario, and about 78.3%, 79.6%, 86.1%, and 92.9% in a web browsing scenario, respectively.
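
The sketch below illustrates the core idea of reference-based encoding: spans of the new data that already appear in data shared with the receiver are replaced by (offset, length) pointers into that shared data, while everything else is sent literally. The greedy longest-match search, the token format, and the encode/decode helpers are illustrative simplifications, not SyncCoding's actual algorithm.

```java
// Minimal sketch of reference-based encoding against shared data. Token format
// and the naive longest-match search are simplifications for illustration.
import java.util.ArrayList;
import java.util.List;

public class ReferenceEncoder {

    static final int MIN_MATCH = 4; // shorter matches are cheaper to send literally

    /** Encodes `data` against `shared`, producing literal and reference tokens. */
    static List<String> encode(String data, String shared) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < data.length()) {
            int bestOffset = -1, bestLen = 0;
            // Naive longest-match search over the shared (reference) data.
            for (int off = 0; off < shared.length(); off++) {
                int len = 0;
                while (off + len < shared.length() && i + len < data.length()
                        && shared.charAt(off + len) == data.charAt(i + len)) {
                    len++;
                }
                if (len > bestLen) { bestLen = len; bestOffset = off; }
            }
            if (bestLen >= MIN_MATCH) {
                tokens.add("REF(" + bestOffset + "," + bestLen + ")"); // pointer into shared data
                i += bestLen;
            } else {
                tokens.add("LIT(" + data.charAt(i) + ")");             // literal character
                i++;
            }
        }
        return tokens;
    }

    /** Decodes by copying referenced spans back out of the shared data. */
    static String decode(List<String> tokens, String shared) {
        StringBuilder out = new StringBuilder();
        for (String t : tokens) {
            if (t.startsWith("REF")) {
                String[] p = t.substring(4, t.length() - 1).split(",");
                int off = Integer.parseInt(p[0]), len = Integer.parseInt(p[1]);
                out.append(shared, off, off + len);
            } else {
                out.append(t.charAt(4));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String shared = "the quick brown fox jumps over the lazy dog";
        String data = "a quick brown fox met the lazy dog today";
        List<String> tokens = encode(data, shared);
        System.out.println(tokens);
        System.out.println(decode(tokens, shared).equals(data)); // true: encoding is lossless
    }
}
```

The more the new data overlaps with what the receiver already holds, the more literals collapse into short references, which is where the reported compression gains come from.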