6 research outputs found

    Energy-efficient and thermal-aware resource management for heterogeneous datacenters

    In this paper we study energy-, thermal- and performance-aware resource management in heterogeneous datacenters. Witnessing the continuous development of heterogeneity in datacenters, we are confronted with their different behaviors in terms of performance, power consumption and thermal dissipation: indeed, heterogeneity at the server level lies both in the computing infrastructure (computing power, electrical power consumption) and in the heat-removal systems (different enclosures, fans, heat sinks). The physical locations of the servers also become important with heterogeneity, since some servers can (over)heat others. While many studies address these parameters independently (most of the time performance and power or energy), we show in this paper the necessity of tackling all these aspects for an optimal management of the computing resources. This leads to improved energy usage in a heterogeneous datacenter, including the cooling of the computer rooms. We build our approach on the concept of a heat distribution matrix to handle the mutual influence of the servers in heterogeneous environments, which is novel in this context. We propose a heuristic to solve the server placement problem and design a generic greedy framework for the online scheduling problem. We derive several single-objective heuristics (for performance, energy, cooling) and a novel fuzzy-based priority mechanism to handle their tradeoffs. Finally, we show results using extensive simulations fed with actual measurements on heterogeneous servers.
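The abstract above does not include the algorithms themselves; as a rough illustration only, the sketch below shows how a heat distribution matrix could drive a greedy online placement decision. The matrix values, power figures and scoring rule are hypothetical assumptions, not the authors' actual model.

```python
import numpy as np

# Hypothetical heat distribution matrix D: D[i, j] is the temperature increase
# induced on server j per watt dissipated by server i (values are illustrative).
D = np.array([
    [0.08, 0.03, 0.01],
    [0.02, 0.07, 0.03],
    [0.01, 0.02, 0.09],
])

idle_power = np.array([60.0, 90.0, 70.0])      # W, idle power per server (assumed)
power_per_job = np.array([35.0, 50.0, 40.0])   # W, extra power per job (assumed)
inlet_temp = 22.0                              # degC, cooling supply temperature
temp_limit = 70.0                              # degC, safe temperature threshold

def predicted_temps(jobs_per_server):
    """Predict server temperatures from the per-server job counts."""
    power = idle_power + power_per_job * jobs_per_server
    return inlet_temp + power @ D              # each server heats itself and its neighbours

def greedy_place(jobs_per_server):
    """Pick the server whose selection yields the lowest hottest-spot temperature."""
    best_server, best_peak = None, float("inf")
    for s in range(len(jobs_per_server)):
        trial = jobs_per_server.copy()
        trial[s] += 1
        peak = predicted_temps(trial).max()
        if peak <= temp_limit and peak < best_peak:
            best_server, best_peak = s, peak
    return best_server                          # None if no feasible placement

load = np.zeros(3, dtype=int)
for _ in range(5):                              # place five incoming jobs online
    target = greedy_place(load)
    if target is None:
        break
    load[target] += 1
print(load, predicted_temps(load).round(1))
```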

    Optimized Thermal-Aware Job Scheduling and Control of Data Centers

    Analyzing data centers with thermal-aware optimization techniques is a viable approach to reducing their energy consumption. By taking into account the thermal consequences of job placements among the servers of a data center, it is possible to reduce the amount of cooling necessary to keep the servers below a given safe temperature threshold. We set up an optimization problem to analyze and characterize the optimal set points for the workload distribution and the supply temperature of the cooling equipment. Furthermore, under mild assumptions, we design and analyze controllers that regulate the system to the optimal state without knowledge of the current total workload to be handled by the data center. The response of our controller is validated by simulations, and convergence to the optimal set points is achieved under varying workload conditions.
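The paper's controller is not reproduced here; as a loose illustration of the idea of regulating toward a set point without knowing the total workload, the sketch below uses a simple integral-style update of the cooling supply temperature against a toy thermal model. The plant dynamics, gain and threshold are assumptions for the example only.

```python
# Drive the hottest server temperature toward a safe threshold by adjusting the
# cooling supply temperature; the controller never reads the workload directly.
THRESHOLD = 70.0        # degC, safe server temperature
GAIN = 0.05             # integrator gain (assumed)

def step_plant(supply_temp, utilisation):
    """Toy thermal model: server temperature rises with load and supply temperature."""
    return supply_temp + 30.0 + 25.0 * utilisation

supply_temp = 18.0
for t in range(200):
    utilisation = 0.4 if t < 100 else 0.9          # workload changes, unknown to the controller
    hottest = step_plant(supply_temp, utilisation)
    supply_temp += GAIN * (THRESHOLD - hottest)    # raise supply temp when running cold, lower it when hot

print(round(supply_temp, 1), round(step_plant(supply_temp, 0.9), 1))
```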

    La gestion des ressources pour des infrastructures vertes par la reconfiguration

    This HDR (habilitation) presents work in the context of large-scale computing systems such as computing grids or cloud platforms. Starting from two observations, namely that the energy consumption of these systems is too high and that they are increasingly complex, this manuscript addresses the following question: how can resources be managed optimally in order to obtain "green", that is, energy-efficient, hardware and software infrastructures? The work proposes three research directions: the first considers the complete system and its associated green levers; the second studies resource allocation policies under energy and heat constraints; the third studies autonomic reconfigurations of applications. Finally, a description of an autonomous decision center for green infrastructures is proposed.
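The manuscript's decision center is only named in the abstract; as a purely hypothetical sketch of what an autonomic decision loop (monitor, analyse, plan, execute) applying green levers could look like, the thresholds, metrics and actions below are illustrative and not the author's actual design.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    utilisation: float   # average CPU utilisation of the cluster (0..1)
    power_watts: float   # measured cluster power draw

def monitor() -> Metrics:
    return Metrics(utilisation=0.22, power_watts=4200.0)   # stub: would query real sensors

def analyse(m: Metrics) -> str:
    if m.utilisation < 0.3:
        return "consolidate"          # too many lightly loaded servers
    if m.utilisation > 0.85:
        return "scale_out"            # risk of performance degradation
    return "steady"

def plan_and_execute(decision: str) -> None:
    actions = {
        "consolidate": "migrate VMs and power off idle servers",
        "scale_out": "power on spare servers and rebalance load",
        "steady": "no reconfiguration",
    }
    print(f"decision: {decision} -> {actions[decision]}")

plan_and_execute(analyse(monitor()))
```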

    Modeling the power consumption of computing systems and applications through machine learning techniques

    The number of computing systems has increased continuously over the last years, and data centers have become some of the most power-demanding facilities. The use of data centers is divided between high performance computing (HPC) and Internet services, or Clouds. Computing speed is crucial in HPC environments, while on Cloud systems it may vary according to their service-level agreements. Some data centers even offer hybrid environments; all of them are energy hungry. The present work is a study of power models for computing systems. These models allow a better understanding of the energy consumption of computers, and can be used as a first step towards better monitoring and management policies for such systems, either to enhance their energy savings or to account for the energy when charging end-users. Energy management and control policies are subject to many limitations: most energy-aware scheduling algorithms use restricted power models which have a number of open problems. Previous works in power modeling of computing systems proposed the use of system information to monitor the power consumption of applications. However, these models are either too specific to a given kind of application, or they lack accuracy. This work presents techniques to enhance the accuracy of power models by tackling issues ranging from the acquisition of power measurements to the definition of a generic workload that enables the creation of a generic model, i.e. a model that can be used for heterogeneous workloads. To achieve such models, the use of machine learning techniques is proposed. Machine learning models adapt easily to the target architecture and are the core of this research. More specifically, this work evaluates the use of artificial neural networks (ANN) and linear regression (LR) as machine learning techniques to perform non-linear statistical modeling. Such models are created through a data-driven approach, enabling the adaptation of their parameters based on information collected while running synthetic workloads. The use of machine learning techniques aims to achieve highly accurate application- and system-level estimators. The proposed methodology is architecture independent and can easily be reproduced in new environments. The results show that the use of artificial neural networks enables the creation of highly accurate estimators. However, due to modeling constraints, this technique cannot be applied at the process level; for that case, predefined models must be calibrated to achieve fair results. Process-level models, in turn, enable the estimation of virtual machines' power consumption, which can be used for Cloud provisioning.
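As a minimal sketch of the modelling idea described above, the example below fits both a linear regression and a small neural network that estimate power from system activity metrics. The synthetic "counters" and the assumed power relation are illustrative stand-ins for the thesis's real measurements and workloads.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
cpu = rng.uniform(0, 1, n)            # CPU utilisation
mem = rng.uniform(0, 1, n)            # memory bandwidth utilisation
disk = rng.uniform(0, 1, n)           # disk activity
X = np.column_stack([cpu, mem, disk])
# Assumed non-linear power behaviour: idle power + load-dependent terms + noise.
y = 80 + 120 * cpu**1.5 + 25 * mem + 10 * disk + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("LR  MAE (W):", round(mean_absolute_error(y_te, lr.predict(X_te)), 2))
print("ANN MAE (W):", round(mean_absolute_error(y_te, ann.predict(X_te)), 2))
```

On data generated this way the neural network typically captures the non-linear CPU term better than the linear model, which mirrors the abstract's point about non-linear statistical modeling.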

    Energy Efficiency in Data Centres and the Barriers to Further Improvements: An Interdisciplinary Investigation

    Creation, storage and sharing of data throughout the world is rapidly increasing alongside rising demands for access to the internet, communications and digital services, leading to increasing levels of energy consumption in data centres. Steps have already been taken towards lower energy consumption, however there is still some way to go. To gain a better understanding of the barriers to further energy saving, a cross-section of industry representatives were interviewed. Generally, it was found that efforts are being made to reduce energy consumption, albeit to varying degrees. Those interviewed face various problems when attempting to improve their energy consumption, including financial difficulties, lack of communication, tenant/landlord type relationships and physical restrictions. The findings show that the data centre industry would benefit from better access to information, such as which technologies or management methods to invest in and how other facilities have reduced energy use, along with a greater knowledge of the problem of energy consumption. Metrics commonly used in the industry are not necessarily helping facilities to reach higher levels of energy efficiency, and are not suited to their purpose. A case study was conducted to critically assess the Power Usage Effectiveness (PUE) metric, the most commonly used metric, using open-source information. The work highlights the fact that whilst the metric is valuable to the industry in terms of creating awareness and competition between companies regarding energy use, it does not give a complete representation of energy efficiency. Crucially, the metric does not consider the energy use of the server, which forms the functional component of the data centre. By taking a closer look at the fans within a server and by focussing on this hidden parameter within the PUE measurement, experimental work in this thesis has also considered one technological way in which a data centre may save energy. Barriers such as those found in the interviews may, however, restrict such potential energy-saving interventions. Overall, this thesis has provided evidence of barriers that may be preventing further energy savings in data centres and provided recommendations for improvement. The industry would benefit from a change in the way that metrics are employed to assess energy efficiency, and from new tools to encourage better choices of which technologies and methodologies to employ. The PUE metric is useful for assessing supporting infrastructure energy use during design and operation. However, when assessing the overall impacts of IT energy use, businesses need additional indicators, such as life-cycle carbon emissions, to be integrated into the overall energy assessment.
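For reference, the PUE metric criticised above is simply total facility energy divided by IT equipment energy; the tiny example below illustrates why it says nothing about how the IT equipment itself spends its energy. The figures are made up for illustration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures: 1,500 MWh consumed by the whole facility over a year,
# of which 1,000 MWh is drawn by the IT equipment itself.
print(pue(1_500_000, 1_000_000))   # 1.5; unchanged even if the servers' internal
                                   # fans waste a large share of that IT energy
```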