
    DTN based Management Framework for Green On/Off Networks

    The increasing cost of powering high-performance networking infrastructure has led to the proposal of various energy-saving schemes. The On/Off technique, the most common of these schemes, consists of powering down a network infrastructure partially or entirely to save energy. Despite achieving substantial energy savings, On/Off networks experience high packet-loss rates because packet delivery is not reliable, and they cannot guarantee any response time to user applications. This paper presents the design and implementation of MFO2N, a DTN-based management framework for green On/Off networks. Experimental results show a correlation between the offered quality of service and the overall network power consumption, revealing that a trade-off must be made between the two.
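    The trade-off the abstract points to can be illustrated with a toy duty-cycle model (my own sketch, not part of MFO2N): a DTN-style agent buffers packets while a link is powered off and forwards them at the next on period, so longer off periods save more energy at the cost of higher delivery delay.

```python
# Toy model of an On/Off link managed DTN-style: packets arriving while the
# link is off are buffered and forwarded at the next on period.
# All numbers are illustrative only, not taken from the paper.

def delivery_delays(arrivals, on_period, off_period):
    """Per-packet delay when the link repeats on_period seconds up, off_period seconds down."""
    cycle = on_period + off_period
    delays = []
    for t in arrivals:
        phase = t % cycle
        # The link is up during [0, on_period) of each cycle; otherwise wait for the next cycle.
        delays.append(0.0 if phase < on_period else cycle - phase)
    return delays

if __name__ == "__main__":
    arrivals = [0.5, 2.0, 7.5, 9.9, 14.0]           # packet arrival times (s)
    for off in (2, 5, 10):                          # longer off periods save more energy...
        d = delivery_delays(arrivals, on_period=2, off_period=off)
        print(f"off={off:2d}s  mean delay={sum(d)/len(d):.2f}s  max={max(d):.2f}s")
```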

    On Applying DTNs to a Delay Constrained Scenario in Wired Networks

    The Delay/Disruption Tolerant Networking (DTN) architecture has been successful in addressing communication issues such as disruption, variable delay, and network partitioning. DTN uses intermittently available links to communicate opportunistically, regardless of delivery delay. In the literature, much work has focused on improving message delivery rates and routing algorithms; however, previous work has not addressed guaranteeing the message delivery delay in a DTN scenario. In addition, real deployments of DTN systems have so far been mostly proofs of concept in research projects. We address the problem of delivery delay in a wired DTN scenario where messages are moved across a time-varying graph topology whose dynamics are known in advance and can be modified. We propose a framework that guarantees bounded delivery delay for users' data. To demonstrate the feasibility of our network management approach, we evaluate the framework on a 10-node wired DTN topology deployed on the Grid'5000 platform.
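    The central object here is a time-varying topology whose dynamics are known in advance. A generic way to reason about delivery delay on such a topology is to compute the earliest possible arrival time over a contact plan. The sketch below does that under simplifying assumptions (instantaneous transmission, unlimited buffering) and is not the paper's framework, which additionally modifies the topology to meet a delay bound.

```python
# Earliest possible delivery time over a known time-varying (DTN) topology,
# given a contact plan: (src, dst, t_start, t_end) intervals when a link is up.

def earliest_arrival(contacts, source, target, t0=0.0):
    best = {source: t0}                       # earliest known time data can be at a node
    changed = True
    while changed:                            # Bellman-Ford-style relaxation over contacts
        changed = False
        for u, v, start, end in contacts:
            if u in best and best[u] <= end:  # this contact is still usable
                arrival = max(best[u], start) # wait for the contact to open if needed
                if arrival < best.get(v, float("inf")):
                    best[v] = arrival
                    changed = True
    return best.get(target)                   # None if the target is unreachable

contacts = [("A", "B", 0, 5), ("B", "C", 10, 12), ("A", "C", 30, 40)]
print(earliest_arrival(contacts, "A", "C"))   # 10: A->B early, then wait for the B->C contact
```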

    A User Friendly Phase Detection Methodology for HPC Systems' Analysis

    A wide array of today's high-performance computing (HPC) applications exhibits recurring behaviours, or execution phases, throughout their run time. Accurate detection of program phases allows the system to be reconfigured for a better power/performance trade-off, and can reduce program simulation time by identifying regions of code whose performance is critical to the entire program. Program phases are also reflected in the different behaviours the system goes through, or system phases, which can serve as an alternative means of program phase detection for users lacking expertise. In this paper, we present execution-vector-based (EV-based) phase detection, an on-line methodology for detecting phases in the behaviour of an HPC system and determining the execution points that correspond to these phases. We also present a methodology for defining a small set of EVs representative of the system's behaviour over a fixed period of time, and show that EV-based phase detection identifies recurring phases. Our methodology is illustrated with benchmarks and a real-life application.
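    A minimal sketch of the EV idea, assuming an execution vector is simply a fixed-length vector of normalised system metrics sampled at a regular interval: a new phase is reported whenever consecutive vectors drift apart by more than a threshold. The metric set and threshold are illustrative, not taken from the paper.

```python
import math

def distance(a, b):
    """Euclidean distance between two execution vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def detect_phases(samples, threshold=0.3):
    """samples: list of equal-length metric vectors; returns indices where a new phase starts."""
    boundaries = [0]
    for i in range(1, len(samples)):
        if distance(samples[i], samples[i - 1]) > threshold:
            boundaries.append(i)
    return boundaries

# e.g. vectors of (cpu_busy, mem_bandwidth, disk_io, net_io), already scaled to [0, 1]
trace = [(0.9, 0.20, 0.00, 0.1), (0.90, 0.25, 0.00, 0.1),   # compute-bound phase
         (0.2, 0.10, 0.80, 0.0), (0.25, 0.10, 0.85, 0.0)]   # IO-bound phase
print(detect_phases(trace))   # [0, 2]
```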

    Exploiting Performance Counters to Predict and Improve Energy Performance of HPC Systems

    Hardware monitoring through performance counters is available on almost all modern processors. Although these counters were originally designed for performance tuning, they have also been used to evaluate power consumption. We propose two approaches for modelling and understanding the behaviour of high-performance computing (HPC) systems that rely on hardware monitoring counters. We evaluate the effectiveness of our system modelling approach against two target objectives: optimising the energy usage of HPC systems and predicting the energy consumption of HPC applications. Although hardware monitoring counters are used to model the system, other methods, including partial phase recognition and cross-platform energy prediction, are used for energy optimisation and prediction. Experimental results for energy prediction demonstrate that we can accurately predict the peak energy consumption of an application on a target platform, whereas results for energy optimisation indicate that, with no a priori knowledge of the workloads sharing the platform, we can save up to 24% of the overall HPC system's energy consumption under benchmarks and real-life workloads.
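    A common way to turn hardware counters into an energy estimate, consistent in spirit with the approach described here, is to fit a linear power model on runs where wall power was measured and then integrate the predicted power over time. The counters, coefficients, and figures below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Fit P ~ w0 + w1*instructions + w2*cache_misses + w3*mem_accesses from training
# runs with measured wall power, then integrate predicted power to estimate energy.

# training samples: per-interval counter rates (normalised) and measured power (W)
X = np.array([[0.9, 0.1, 0.2],
              [0.8, 0.2, 0.3],
              [0.3, 0.6, 0.7],
              [0.2, 0.7, 0.9]])
P = np.array([180.0, 175.0, 140.0, 135.0])

A = np.hstack([np.ones((len(X), 1)), X])          # add an intercept column
w, *_ = np.linalg.lstsq(A, P, rcond=None)         # least-squares fit of the weights

def predict_energy(counter_trace, dt=1.0):
    """Energy (J) of a run given per-interval counter rates sampled every dt seconds."""
    trace = np.hstack([np.ones((len(counter_trace), 1)),
                       np.asarray(counter_trace, dtype=float)])
    return float((trace @ w).sum()) * dt           # sum of predicted power * interval length

print(predict_energy([[0.85, 0.15, 0.25], [0.4, 0.5, 0.6]]))  # predicted joules over 2 s
```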

    A Runtime Framework for Energy Efficient HPC Systems Without a Priori Knowledge of Applications

    The rising computing demands of scientific endeavours often require the creation and management of High Performance Computing (HPC) systems for running experiments and processing vast amounts of data. These HPC systems generally operate at peak performance, consuming a large quantity of electricity, even though their workload varies over time. Understanding the behavioural patterns (i.e., phases) of HPC systems during their use is key to adjusting performance to resource demand and hence improving energy efficiency. In this paper, we describe (i) a method to detect phases of an HPC system based on its workload, and (ii) a partial phase recognition technique that works cooperatively with on-the-fly dynamic management. We implement a prototype that guides the use of energy-saving capabilities to demonstrate the benefits of our approach. Experimental results reveal the effectiveness of the phase detection method under real-life workloads and benchmarks. A comparison with baseline unmanaged execution shows that the partial phase recognition technique saves up to 15% of energy with less than 1% performance degradation.
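    Partial phase recognition can be pictured as matching the prefix of the phase currently being observed against phases seen before, so that a saved configuration can be applied before the phase completes. The sketch below illustrates this with made-up phase traces and configurations; the distance measure and threshold are assumptions.

```python
# Compare the prefix of the current phase against already-characterised phases
# and, on a close enough match, reuse the power configuration remembered for it.

def prefix_distance(prefix, phase):
    n = min(len(prefix), len(phase))
    return sum(abs(a - b) for a, b in zip(prefix[:n], phase[:n])) / n

def recognise(prefix, known_phases, threshold=0.1):
    """known_phases: {name: (activity_trace, saved_config)}; returns (name, config) or None."""
    name, (trace, config) = min(known_phases.items(),
                                key=lambda kv: prefix_distance(prefix, kv[1][0]))
    return (name, config) if prefix_distance(prefix, trace) <= threshold else None

known = {"compute":    ([0.9, 0.9, 0.85, 0.9],  {"cpu_governor": "performance"}),
         "checkpoint": ([0.2, 0.1, 0.15, 0.1],  {"cpu_governor": "powersave", "disk": "active"})}
print(recognise([0.18, 0.12], known))   # matches "checkpoint" early in the phase
```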

    Beyond CPU Frequency Scaling for a Fine-grained Energy Control of HPC Systems

    Modern high-performance computing (HPC) subsystems, including the processor, network, memory, and I/O, are provided with power management mechanisms such as dynamic speed scaling and dynamic resource sleeping. Understanding the behavioural patterns of HPC systems at runtime can lead to a multitude of optimisation opportunities, including controlling and limiting their energy usage. In this paper, we present a general-purpose methodology for optimising the energy performance of HPC systems, considering the processor, disk, and network. We rely on the concept of an execution vector along with a partial phase recognition technique for on-the-fly dynamic management without any a priori knowledge of the workload. We demonstrate the effectiveness of our management policy under two real-life workloads. Experimental results show that, in comparison with baseline unmanaged execution, our management policy saves up to 24% of energy with less than 4% performance overhead for our real-life workloads.
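    On a Linux node, the processor, disk, and network levers mentioned above typically map to the cpufreq governor, drive spin-down timeouts, and SATA link power policies. The sketch below shows how such knobs might be driven per phase; paths and commands vary across kernels and hardware, usually require root, and are not the paper's actual implementation.

```python
import glob
import subprocess

def set_cpu_governor(governor):
    # cpufreq governor, e.g. "powersave" or "performance"
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

def set_disk_spindown(device, timeout_code):
    # hdparm -S sets the drive's standby (spin-down) timeout
    subprocess.run(["hdparm", "-S", str(timeout_code), device], check=True)

def set_sata_link_policy(policy):
    # e.g. "min_power" or "max_performance"
    for path in glob.glob("/sys/class/scsi_host/host*/link_power_management_policy"):
        with open(path, "w") as f:
            f.write(policy)

def apply_phase_config(phase):
    # map a characterised phase to a joint CPU/disk/link configuration
    if phase == "io_bound":
        set_cpu_governor("powersave")
    elif phase == "compute_bound":
        set_cpu_governor("performance")
        set_disk_spindown("/dev/sda", 120)          # spin down after ~10 min idle
        set_sata_link_policy("min_power")
```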

    DNA-inspired Scheme for Building the Energy Profile of HPC Systems

    Energy usage is becoming a challenge for the design of next-generation large-scale distributed systems. This paper explores an innovative approach to profiling such systems. It proposes a DNA-like solution that makes no assumptions about the running applications or the underlying hardware. This profiling, based on internal counter usage and energy monitoring, makes it possible to isolate specific phases during execution and enables some control over energy consumption as well as energy usage prediction. First experimental validations of the system modelling are presented and analysed.
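    One possible reading of the DNA analogy (an assumption on my part, not necessarily the paper's exact scheme) is a symbolic encoding: each monitoring sample is quantised into a letter according to its dominant resource, so an execution becomes a string in which repeated substrings reveal recurring phases.

```python
LETTERS = {"cpu": "C", "memory": "M", "disk": "D", "network": "N"}

def encode(samples):
    """samples: list of dicts of normalised resource activity; returns a letter string."""
    return "".join(LETTERS[max(s, key=s.get)] for s in samples)

def repeated_substrings(seq, length):
    """Substrings of a given length that occur more than once in the profile."""
    seen, repeats = set(), set()
    for i in range(len(seq) - length + 1):
        sub = seq[i:i + length]
        (repeats if sub in seen else seen).add(sub)
    return repeats

trace = ([{"cpu": 0.9, "memory": 0.2, "disk": 0.1, "network": 0.0}] * 3
         + [{"cpu": 0.1, "memory": 0.2, "disk": 0.9, "network": 0.0}] * 2
         + [{"cpu": 0.9, "memory": 0.2, "disk": 0.1, "network": 0.0}] * 3)
profile = encode(trace)
print(profile, repeated_substrings(profile, 3))   # CCCDDCCC {'CCC'}
```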

    Energy efficiency in HPC with and without knowledge of applications and services

    The constant demand for raw performance in high-performance computing often leads to the over-provisioning of HPC systems, which in turn can result in colossal energy waste as workloads and applications vary over time. Proposing energy-efficient solutions for large-scale HPC is therefore an unavoidable challenge. This paper explores two alternative approaches, with and without knowledge of applications and services, that share the same goal: reducing the energy usage of the large-scale infrastructures that support HPC applications. The first approach, "with knowledge of applications and services", enables users to choose the least consuming implementation of a service. Based on energy consumption estimates for the different implementations (protocols) of each service, this approach is validated on a fault tolerance service for HPC. The "without knowledge" approach relies on an intelligent framework that observes the HPC system during its lifetime and proposes energy reduction schemes; it automatically estimates the energy consumption of the HPC system in order to apply power-saving schemes. Both approaches are experimentally evaluated and analysed in terms of energy efficiency.
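    The "with knowledge" approach can be summarised as a selection problem: given per-protocol energy estimates for a service, pick the least consuming implementation that still meets the user's constraints. The sketch below uses fault tolerance protocols, as in the paper's validation, but the names, figures, and the overhead constraint are illustrative.

```python
def pick_implementation(estimates, max_runtime_overhead=0.10):
    """estimates: {name: {"energy_j": ..., "runtime_overhead": ...}} -> least consuming name."""
    eligible = {n: e for n, e in estimates.items()
                if e["runtime_overhead"] <= max_runtime_overhead}
    return min(eligible, key=lambda n: eligible[n]["energy_j"]) if eligible else None

fault_tolerance = {
    "coordinated_checkpointing":   {"energy_j": 5.2e6, "runtime_overhead": 0.04},
    "uncoordinated_checkpointing": {"energy_j": 4.8e6, "runtime_overhead": 0.12},
    "message_logging":             {"energy_j": 5.0e6, "runtime_overhead": 0.06},
}
print(pick_implementation(fault_tolerance))   # "message_logging" under a 10% overhead cap
```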

    Profilage système et leviers verts pour les infrastructures distribuées à grande échelle (System Profiling and Green Levers for Large-Scale Distributed Infrastructures)

    Nowadays, reducing the energy consumption of large-scale and distributed infrastructures has become a real challenge for both industry and academia, as witnessed by the many efforts aiming at reducing the energy consumption of these systems. Such initiatives can, without loss of generality, be broken into hardware and software initiatives. Unlike their hardware counterparts, software solutions to the energy reduction problem in large-scale and distributed infrastructures rarely result in real deployments. On the one hand, this can be explained by the fact that they are application oriented; on the other hand, their limited adoption can be attributed to their complexity, which often requires vast technical knowledge of the proposed solutions and/or a thorough understanding of the applications at hand. This restricts their use to a limited number of experts, because users usually lack the necessary skills. In addition, although subsystems such as the memory are becoming more and more power hungry, current software energy reduction techniques fail to take them into account. This thesis proposes a methodology for reducing the energy consumption of large-scale and distributed infrastructures. Broken into three steps, namely (i) phase detection, (ii) phase characterisation, and (iii) phase identification and system reconfiguration, our methodology abstracts away from any individual application: it focuses on the infrastructure, whose runtime behaviour it analyses in order to take reconfiguration decisions accordingly. The proposed methodology is implemented and evaluated on high-performance computing (HPC) clusters of varied sizes through MREEF, a Multi-Resource Energy Efficient Framework. MREEF implements the proposed energy reduction methodology so as to leave users the choice of implementing their own system reconfiguration decisions depending on their needs. Experimental results show that our methodology reduces the energy consumption of the overall infrastructure by up to 24% with less than 7% performance degradation. By taking all subsystems into account, our experiments demonstrate that the energy reduction problem in large-scale and distributed infrastructures can benefit from more than the traditional processor frequency scaling. Experiments on clusters of varied sizes demonstrate that MREEF, and therefore our methodology, can easily be extended to a large number of energy-aware clusters. The extension of MREEF to virtualised environments such as clouds shows that the proposed methodology goes beyond HPC systems and can be used in many other computing environments.
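    The three steps of the methodology suggest a simple control loop: detect a phase boundary, characterise the new phase, and, when a known phase recurs, reconfigure the system using what was learned about it. The skeleton below is a deliberately naive, runnable illustration of that loop, not MREEF itself; the detection rule, labels, and reconfiguration action are placeholders.

```python
def detect(sample, previous, threshold=0.3):
    """Step (i): a new phase starts when the sample differs enough from the previous one."""
    return previous is None or abs(sample - previous) > threshold

def characterise(sample):
    """Step (ii): very coarse characterisation of the phase by its dominant behaviour."""
    return "compute_bound" if sample > 0.5 else "io_bound"

def reconfigure(label):
    """Step (iii): users would plug in their own reconfiguration decisions here."""
    print(f"apply saved configuration for {label} phase")

known = set()
previous = None
for sample in [0.9, 0.88, 0.2, 0.22, 0.9, 0.91]:      # e.g. normalised CPU activity
    if detect(sample, previous):
        label = characterise(sample)
        if label in known:
            reconfigure(label)                         # recurring phase: reuse its configuration
        else:
            known.add(label)                           # first occurrence: just characterise it
    previous = sample
```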

    Application-Agnostic Framework for Improving the Energy Efficiency of Multiple HPC Subsystems

    The subsystems that compose an HPC platform (e.g., CPU, memory, storage, and network) are often designed and configured to deliver exceptional performance for a wide range of workloads. As a result, a large part of the power that these subsystems consume is dissipated as heat, even when executing workloads that do not require maximum performance. Attempts to tackle this problem include technologies whereby operating systems and applications can reconfigure subsystems dynamically, such as DVFS for CPUs, LPI for network components, and variable disk spinning for HDDs. Most previous work has explored these technologies individually to optimise workload execution and reduce energy consumption. We propose a framework that performs on-line analysis of an HPC system in order to identify application execution patterns without a priori information about their workload. The framework takes advantage of recurring patterns to reconfigure multiple subsystems dynamically and reduce overall energy consumption. Performance evaluation was carried out on Grid'5000 considering both traditional HPC benchmarks and real-life applications.
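    The reconfiguration step the abstract describes can be sketched as a mapping from a characterised pattern (per-subsystem activity) to a joint configuration in which each lightly used subsystem is switched to its low-power state: DVFS for the CPU, LPI for the NIC, spin-down for disks. Thresholds and state names below are illustrative.

```python
LOW_POWER  = {"cpu": "powersave_governor",   "network": "lpi_enabled",  "disk": "spun_down"}
FULL_POWER = {"cpu": "performance_governor", "network": "lpi_disabled", "disk": "spinning"}

def joint_configuration(pattern, threshold=0.2):
    """pattern: {subsystem: mean activity in [0, 1]} -> {subsystem: target power state}."""
    return {sub: (LOW_POWER[sub] if activity < threshold else FULL_POWER[sub])
            for sub, activity in pattern.items()}

print(joint_configuration({"cpu": 0.85, "network": 0.05, "disk": 0.1}))
# {'cpu': 'performance_governor', 'network': 'lpi_enabled', 'disk': 'spun_down'}
```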