213 research outputs found

    Impact of Shutdown Techniques for Energy-Efficient Cloud Data Centers

    Electricity consumption is a worrying concern in current large-scale systems such as datacenters and supercomputers. These infrastructures are often dimensioned according to their peak workload. However, their consumption is not power-proportional: when the workload is low, the consumption remains high. Shutdown techniques have been developed to adapt the number of switched-on servers to the actual workload, yet datacenter operators are reluctant to adopt them because of their potential impact on reactivity and hardware failures, and because their energy gains are often misjudged. In this article, we evaluate the potential gains of shutdown techniques while accounting for the time and energy costs of shutting down and booting up. The evaluation covers recent server architectures and hypothetical future energy-aware architectures. We also determine whether knowledge of the future workload is required to save energy with such techniques. We present simulation results exploiting real traces collected on different infrastructures, under various machine configurations and several shutdown policies, with and without workload prediction.
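
    As an illustration of the trade-off this article evaluates, the sketch below (with purely illustrative parameter names and values, not taken from the paper) checks whether powering a server off during an idle period saves energy once the time and energy costs of shutting down and booting up are accounted for.

    ```python
    def shutdown_saves_energy(idle_s, p_idle_w, p_off_w, t_off_s, t_on_s, e_off_j, e_on_j):
        """Return True if powering the server off during an idle period saves energy.

        idle_s          : length of the idle period (seconds)
        p_idle_w        : power drawn while idle but switched on (watts)
        p_off_w         : residual power while off (watts)
        t_off_s, t_on_s : time to shut down and to boot (seconds)
        e_off_j, e_on_j : energy spent shutting down and booting (joules)
        """
        if idle_s <= t_off_s + t_on_s:
            return False  # not enough time to complete an off/on cycle
        e_stay_on = p_idle_w * idle_s
        e_cycle = e_off_j + e_on_j + p_off_w * (idle_s - t_off_s - t_on_s)
        return e_cycle < e_stay_on

    # Example: a 10-minute idle gap on a server idling at 100 W
    print(shutdown_saves_energy(600, 100, 5, 30, 120, 4000, 15000))
    ```

    With workload prediction, a policy can test this condition against predicted idle lengths; without it, a fixed timeout before shutdown is the usual fallback.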

    When Clouds become Green: the Green Open Cloud Architecture

    Virtualization solutions appear as alternative approaches for companies to consolidate their operational services on a physical infrastructure while preserving specific functionalities inside the Cloud perimeter (e.g., security, fault tolerance, reliability). These consolidation approaches are explored to reduce energy consumption by switching off unused computing nodes. We study the impact of virtual machine aggregation in terms of energy consumption and present load-balancing strategies associated with the migration of virtual machines inside Cloud infrastructures. We also present the design of a new energy-efficient Cloud infrastructure called the Green Open Cloud.
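
    As a rough illustration of the consolidation idea (not the Green Open Cloud algorithm itself), the sketch below packs virtual machines onto as few hosts as possible with a first-fit-decreasing heuristic; any host absent from the result can be switched off. Capacities and loads are illustrative assumptions.

    ```python
    def consolidate(vm_loads, host_capacity):
        """Greedy first-fit-decreasing placement of VMs onto as few hosts as possible.

        vm_loads      : list of per-VM CPU demands (same unit as host_capacity)
        host_capacity : CPU capacity of one (homogeneous) host
        Returns a list of hosts, each host being the list of VM loads placed on it.
        """
        hosts = []
        for load in sorted(vm_loads, reverse=True):
            for host in hosts:
                if sum(host) + load <= host_capacity:
                    host.append(load)
                    break
            else:
                hosts.append([load])  # no existing host fits: open a new one
        return hosts

    # Example: six VMs packed onto hosts of normalized capacity 1.0
    print(consolidate([0.6, 0.4, 0.3, 0.3, 0.2, 0.1], 1.0))
    ```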

    A year in the life of a large scale experimental distributed system: the Grid'5000 platform in 2008

    This report presents the usage results of Grid'5000 over the year 2008. Usage of the main operational Grid'5000 sites (Bordeaux, Lille, Lyon, Nancy, Orsay, Rennes, Sophia-Antipolis, Toulouse) is presented and analyzed.

    Studying the energy consumption of data transfers in Clouds: the Ecofen approach

    Energy consumption is one of the main limiting factors in designing large-scale Clouds. Evaluating the energy consumption of Cloud networking architectures and providing the multi-level views required by providers and users is a challenging issue. In this paper, we show how to evaluate and understand networking choices (protocols, topologies) in terms of their contribution to the energy consumption of the global Cloud infrastructure. By applying the ECOFEN model (Energy Consumption mOdel For End-to-end Networks) and the corresponding simulation framework, we profile and analyze the energy consumption of data transfers in Clouds.
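
    The sketch below gives a hedged idea of what such a per-transfer energy evaluation can look like: each crossed device contributes a share of its static power plus a per-byte dynamic cost. The coefficients and function names are assumptions for illustration, not the actual ECOFEN model or API.

    ```python
    def transfer_energy_j(bytes_sent, devices):
        """Energy attributed to one data transfer across the devices it crosses.

        Each device is a dict with:
          idle_w     : idle power of the device (watts)
          per_byte_j : incremental energy per byte forwarded (joules/byte)
          share      : fraction of the idle power attributed to this transfer
          duration_s : time the transfer occupies the device (seconds)
        """
        total = 0.0
        for d in devices:
            static = d["idle_w"] * d["share"] * d["duration_s"]
            dynamic = d["per_byte_j"] * bytes_sent
            total += static + dynamic
        return total

    # Example: a 1 GB transfer crossing two switches and one router (illustrative values)
    path = [
        {"idle_w": 150, "per_byte_j": 10e-9, "share": 0.05, "duration_s": 8},
        {"idle_w": 150, "per_byte_j": 10e-9, "share": 0.05, "duration_s": 8},
        {"idle_w": 300, "per_byte_j": 20e-9, "share": 0.02, "duration_s": 8},
    ]
    print(transfer_energy_j(1e9, path), "J")
    ```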

    Energy-efficient bandwidth reservation for bulk data transfers in dedicated wired networks

    The ever-increasing number of Internet-connected end-hosts calls for high-performance end-to-end networks, which in turn increases the energy consumed by those networks. Our work deals with the energy consumption of dedicated networks with bandwidth provisioning and in-advance reservation of network equipment and bandwidth for bulk data transfers. First, we propose an end-to-end energy cost model for such networks that describes the energy consumed by a transfer across all crossed equipment. This model is then used to develop a new energy-aware framework adapted to bulk data transfers over dedicated networks. The framework switches off unused network portions during certain periods of time to save energy. It also includes prediction algorithms to avoid useless switching off and adaptive scheduling management to optimize the energy used by the transfers.
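
    A minimal sketch of the switch-off reasoning described above, assuming a single link with in-advance reservations: gaps between reservations are slept through only when they are long enough to be worth the switching overhead (the role played by the prediction algorithms mentioned in the abstract). All parameter names and values are illustrative.

    ```python
    def link_energy_with_sleep(reservations, horizon_s, p_on_w, p_sleep_w,
                               switch_cost_j, min_gap_s):
        """Energy of one network link over a planning horizon when idle gaps are slept.

        reservations  : list of (start_s, end_s) intervals during which the link is used
        horizon_s     : total planning horizon (seconds)
        p_on_w        : power of the equipment while on (watts)
        p_sleep_w     : power while switched off / asleep (watts)
        switch_cost_j : energy overhead of one off/on cycle (joules)
        min_gap_s     : shortest gap worth sleeping (avoids useless switching)
        """
        energy = 0.0
        t = 0.0
        for start, end in sorted(reservations) + [(horizon_s, horizon_s)]:
            gap = start - t
            if gap >= min_gap_s:
                energy += switch_cost_j + p_sleep_w * gap   # sleep through the gap
            else:
                energy += p_on_w * gap                      # too short: stay on
            energy += p_on_w * (end - start)                # active transfer
            t = end
        return energy

    # Example: two bulk transfers on a 200 W link over one hour
    print(link_energy_with_sleep([(0, 600), (2400, 3000)], 3600, 200, 10, 5000, 300))
    ```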

    On the energy footprint of I/O management in Exascale HPC systems

    The advent of unprecedentedly scalable yet energy-hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. However, how these approaches affect energy consumption has not been studied in detail. This paper therefore explores how much energy a supercomputer consumes while running scientific simulations under various I/O management approaches. In particular, we closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. To do so, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results, obtained on the French Grid'5000 platform, highlight the differences among these three approaches and illustrate how various configurations of the application and of the system impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of an HPC simulation under different I/O approaches. The model gives hints for pre-selecting the most energy-efficient I/O approach for a particular simulation on a particular HPC system and is therefore a step towards energy-efficient HPC simulations on Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.
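
    As a hedged illustration of the kind of estimate such a model can produce (this is not the paper's actual model), the sketch below compares the energy of a simulation whose I/O blocks computation, as in time partitioning, with one whose I/O is overlapped with computation, as with dedicated cores or nodes. All names and values are assumptions.

    ```python
    def simulation_energy_j(n_nodes, n_iterations, t_compute_s, t_io_s,
                            p_busy_w, io_overlaps_compute):
        """Rough energy estimate: nodes draw p_busy_w whenever they are doing work.

        io_overlaps_compute=True  models dedicated-core/node schemes (I/O hidden
        behind computation, iteration time = max of the two phases).
        io_overlaps_compute=False models time partitioning (computation blocks
        during I/O, iteration time = sum of the two phases).
        """
        if io_overlaps_compute:
            t_iter = max(t_compute_s, t_io_s)
        else:
            t_iter = t_compute_s + t_io_s
        return n_nodes * n_iterations * p_busy_w * t_iter

    # Example: 64 nodes, 100 iterations of 30 s compute + 5 s I/O, 250 W per busy node
    print(simulation_energy_j(64, 100, 30, 5, 250, False))  # time partitioning
    print(simulation_energy_j(64, 100, 30, 5, 250, True))   # overlapped I/O
    ```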

    Energy-Aware Massively Distributed Cloud Facilities: The DISCOVERY Initiative

    Instead of following the current trend of building ever-larger data centers (DCs) in a few strategic locations, the DISCOVERY initiative proposes to leverage any network point of presence (PoP, i.e., a small or medium-sized network center) available through the Internet. The key idea is to demonstrate a widely distributed Cloud platform that can better match the geographical dispersal of users and of renewable energy sources. This involves radical changes in the way resources are managed, but leveraging computing resources close to the end-users will make it possible to deliver a new generation of highly efficient and sustainable Utility Computing (UC) platforms, providing a strong alternative to the current Cloud model based on mega-DCs (i.e., DCs composed of tens of thousands of resources). This poster presents the DISCOVERY initiative's efforts towards achieving energy-aware, massively distributed cloud facilities. To satisfy the escalating demand for Cloud Computing (CC) resources while realizing economies of scale, the production of computing resources is concentrated in mega data centers of ever-increasing size, where the number of physical resources one DC can host is limited by the capacity of its energy supply and its cooling system. To meet these critical needs in terms of energy supply and cooling, the current trend is to build DCs in regions with abundant and affordable electricity, or close to the polar circle to leverage free cooling techniques [1]. However, concentrating mega-DCs in only a few attractive places raises several issues. First, a disaster in these areas would be dramatic for the IT services the DCs host, as connectivity to CC resources would not be guaranteed. Second, in addition to jurisdiction concerns, hosting computing resources in a few locations leads to unnecessary network overhead to reach each DC. Such overhead can prevent the adoption of the UC paradigm by several kinds of applications, such as mobile computing or big data applications.

    Experimental analysis of vectorized instructions impact on energy and power consumption under thermal design power constraints

    Vectorized instructions were introduced to improve the performance of applications, but they come at the cost of increased power consumption. As a consequence, processors are designed to limit their frequency when such instructions are used in order to stay within the thermal design power. In this paper, we study and compare the impact of thermal design power and SIMD instructions on the performance, power, and energy consumption of processors and memory. The study is performed on three architectures with different characteristics and four applications with different profiles (including one application with several phases, each phase having a different profile). The study shows that, because of processor frequency scaling, performance and power consumption are strongly related under thermal design power constraints. It also shows that AVX512 has unexpected behavior regarding processor power consumption, while DRAM power consumption is impacted by SIMD instructions because of the memory throughput they generate.
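
    On Linux, package and DRAM energy for this kind of experiment can be sampled through the RAPL powercap sysfs interface. The sketch below, assuming an Intel system exposing intel-rapl domains (reading the counters may require elevated privileges), reads the counters before and after a workload; the workload shown is a placeholder, not one of the paper's applications.

    ```python
    import time
    from pathlib import Path

    POWERCAP = Path("/sys/class/powercap")

    def read_energy_uj():
        """Read all RAPL energy counters (microjoules), keyed by domain name."""
        counters = {}
        for dom in POWERCAP.glob("intel-rapl:*"):
            name = (dom / "name").read_text().strip()
            counters[f"{dom.name}:{name}"] = int((dom / "energy_uj").read_text())
        return counters

    def measure(workload):
        """Return elapsed time and per-domain energy (joules) for workload()."""
        before = read_energy_uj()
        t0 = time.time()
        workload()
        elapsed = time.time() - t0
        after = read_energy_uj()
        # Note: counters wrap around max_energy_range_uj; ignored in this sketch.
        joules = {k: (after[k] - before[k]) / 1e6 for k in before}
        return elapsed, joules

    # Example with a placeholder CPU-bound workload
    elapsed, joules = measure(lambda: sum(i * i for i in range(10_000_000)))
    print(elapsed, joules)
    ```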

    [Decision process in oncology: the importance of multidisciplinary meeting]

    Multidisciplinary meetings (MDM) in oncology have been institutionalised in France by the Cancer Plan. This study aims to determine the place of the MDM in the decision process. From November 2004 to July 2005, we observed 29 meetings at the Tours Hospital and 324 case presentations: 80 in orthopaedics, 151 in gastroenterology, and 93 in chest medicine. Forty physicians attending the meetings answered a questionnaire exploring their opinions on the MDM and collegial decision-making. We found that the MDM is mostly a place for technical discussions and that patients' wishes are rarely addressed. The different medical specialities are well represented, but only physicians attend the MDM. Decisions for straightforward cases are rapidly validated. For more complex clinical situations (25 to 40% of case presentations), the multidisciplinary approach makes it possible to adapt guidelines or to choose alternative treatments. All the physicians interviewed stated that the MDM legitimates the medical decision, although they sometimes disagree with its conclusions. We discuss how the MDM impacts the medical decision, as well as the shift from individual to collective decision-making, particularly in terms of responsibility.

    Opportunistic Scheduling in Clouds Partially Powered by Green Energy

    The rapid growth in demand for computing and storage resources in data centers has considerably increased their energy consumption. Improving the utilization of data center resources and integrating renewable energy sources, such as solar and wind, have been proposed to reduce both the brown energy consumption and the carbon footprint of data centers. In this paper, we propose PIKA (oPportunistic schedulIng broKer infrAstructure), a novel framework to save energy in small single-site data centers. To reduce brown energy consumption, PIKA integrates resource overcommitment techniques that help minimize the number of powered-on Physical Machines (PMs). In addition, PIKA dynamically schedules the jobs and adjusts the number of powered-on PMs to match the variable renewable energy supply. Our simulations with a real-world job workload and solar power traces demonstrate that PIKA reduces brown energy consumption by up to 44.9% compared to a typical scheduling algorithm.
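
    The sketch below illustrates the opportunistic principle in a simplified form (it is not PIKA's actual algorithm): delay-tolerant jobs are started in a time slot only when the forecast renewable supply can power the physical machines they need; otherwise they are postponed. Job sizes, slot lengths, and power values are illustrative assumptions.

    ```python
    def opportunistic_schedule(jobs, solar_forecast_w, pm_power_w, pm_capacity):
        """Greedy, slot-by-slot opportunistic placement of delay-tolerant jobs.

        jobs             : list of dicts {"id": str, "slots": int, "vms": int}
        solar_forecast_w : predicted renewable power per time slot (watts)
        pm_power_w       : power of one powered-on physical machine (watts)
        pm_capacity      : VMs one PM can host (overcommitment already factored in)
        Returns {slot_index: [job ids started in that slot]}.
        """
        pending = list(jobs)
        schedule = {}
        running = []  # (end_slot, pms_used) for jobs already started
        for slot, green_w in enumerate(solar_forecast_w):
            running = [(end, pms) for end, pms in running if end > slot]
            pms_in_use = sum(pms for _, pms in running)
            budget_pms = int(green_w // pm_power_w)  # PMs the green supply can power
            started = []
            for job in list(pending):
                pms_needed = -(-job["vms"] // pm_capacity)  # ceiling division
                if pms_in_use + pms_needed <= budget_pms:
                    running.append((slot + job["slots"], pms_needed))
                    pms_in_use += pms_needed
                    pending.remove(job)
                    started.append(job["id"])
            if started:
                schedule[slot] = started
        return schedule

    # Example: three jobs against a rising-then-falling solar forecast (one slot = 1 h)
    jobs = [{"id": "A", "slots": 2, "vms": 8}, {"id": "B", "slots": 1, "vms": 4},
            {"id": "C", "slots": 3, "vms": 12}]
    print(opportunistic_schedule(jobs, [0, 300, 900, 1200, 600, 100], 200, 4))
    ```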