21 research outputs found

    Smart Datacenter Electrical Load Model for Renewable Sources Management.

    Get PDF
    Nowadays, datacenters are among the largest electricity consumers, driven by the worldwide growth of cloud, web-service and high-performance computing demands. To be clean and operate without a grid connection, datacenter projects aim to supply their electricity from renewable energy sources and storage elements. Such power production requires an energy management system that provides power envelopes as a constraint to the datacenter management system. This paper presents an optimization module that optimizes the IT load under renewable energy constraints and outputs the power consumed by the computing resources of a datacenter. We obtain a reduction of up to 73% in task violations while respecting a given power envelope.
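    The abstract describes the optimization informally and the paper's actual formulation is not reproduced here; as a hedged illustration of the core constraint (never letting the scheduled IT load exceed the renewable power envelope), a minimal greedy sketch might look like the following, where every name and the placement rule are assumptions rather than the authors' module:

```python
# Hypothetical sketch: greedily place tasks so that total draw never
# exceeds a renewable-only power envelope (values in watts per time slot).
from typing import List, Optional

def fits(envelope: List[float], load: List[float], task_power: float,
         start: int, duration: int) -> bool:
    """True if adding `task_power` for `duration` slots from `start` stays under the envelope."""
    if start + duration > len(envelope):
        return False
    return all(load[t] + task_power <= envelope[t] for t in range(start, start + duration))

def earliest_start(envelope: List[float], load: List[float],
                   task_power: float, duration: int) -> Optional[int]:
    """Earliest slot where the task fits entirely under the envelope, or None (a violation)."""
    for start in range(len(envelope) - duration + 1):
        if fits(envelope, load, task_power, start, duration):
            for t in range(start, start + duration):
                load[t] += task_power
            return start
    return None

envelope = [300.0, 450.0, 500.0, 420.0, 250.0]   # forecast solar/wind production per slot
load = [0.0] * len(envelope)                      # power already committed to tasks
print(earliest_start(envelope, load, task_power=200.0, duration=2))  # -> 0
```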

    IDAC: A Sensor-Based Model for Presence Control and Idleness Detection in Brazilian Companies

    No full text
    This article proposes a new model named IDAC for idleness detection and automatic clocking in Brazilian companies. Based on the studies and gaps identified in related work, we highlight the model's features and how it interacts with sensors, providing idleness detection based on the historical movement of employees. We developed a prototype that was evaluated through simulation, taking into account the architectural floor plans and employee behavior of five real Brazilian companies. The results reveal the benefits of using IDAC both for owners (control and productivity) and for employees (clocking actions occur automatically).
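    The article does not include an implementation; a minimal sketch of the general idea, assuming idleness is flagged when the time since an employee's last movement event exceeds a threshold learned from their movement history (the event format and threshold rule are assumptions, not IDAC's actual logic), could be:

```python
# Hypothetical sketch: flag an employee as idle when the gap since the last
# movement event exceeds a threshold learned from their historical gaps.
from statistics import mean, pstdev
from typing import List

def idle_threshold(historical_gaps_s: List[float], k: float = 2.0) -> float:
    """Threshold = mean historical gap between movements plus k standard deviations."""
    return mean(historical_gaps_s) + k * pstdev(historical_gaps_s)

def is_idle(last_event_ts: float, now_ts: float, historical_gaps_s: List[float]) -> bool:
    return (now_ts - last_event_ts) > idle_threshold(historical_gaps_s)

history = [30.0, 45.0, 40.0, 60.0, 35.0]   # seconds between past movement events
print(is_idle(last_event_ts=1000.0, now_ts=1200.0, historical_gaps_s=history))  # -> True
```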

    Performance and energy consumption analysis of a cluster based on single-board computers

    Get PDF
    The race for ever higher supercomputer performance has largely disregarded the high energy consumption of these machines. High-performance computing (HPC) systems currently in operation consume more than 200 billion kilowatt-hours (kWh), making energy the most expensive resource in HPC. Reaching exascale will require different technologies focused on energy savings, since the DARPA report recommends that these new systems consume no more than 20 MW. This work analyzes the performance and energy consumption of BeagleBone Black development boards, which use ARM processors designed for low power consumption, one of the promising directions for future high-performance computing. To run the performance tests and consumption measurements, a homogeneous cluster was built from 10 BeagleBone Black boards running Linux, communicating through message passing with MPI, together with a power measurement board. The results show a performance of 27.33 Mflops/Watt and energy savings of up to 85.91% when idle nodes are switched off.
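    For reference, Mflops/Watt is simply the achieved floating-point rate divided by the average power draw; the one-liner below checks the arithmetic with illustrative inputs (the paper reports only the resulting ratio, not these raw numbers):

```python
# Energy efficiency of an HPC run: Mflops/Watt = achieved Mflops / average power (W).
# Illustrative numbers only; the paper reports the ratio (27.33 Mflops/Watt), not these inputs.
achieved_mflops = 1366.5      # hypothetical cluster-wide benchmark throughput
average_power_w = 50.0        # hypothetical average draw measured by the power board
print(achieved_mflops / average_power_w)  # 27.33
```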

    Scheduling in a datacenter powered by renewable energy sources without grid connection, with a mixed phase-based workload

    Get PDF
    Due to the worldwide increase in cloud, web-service and high-performance computing demands, datacenters are now known to be one of the biggest actors in energy consumption. In 2006 alone, datacenters were responsible for consuming 61.4 billion kWh in the United States. Globally, datacenters currently consume more energy than the entire United Kingdom, representing about 1.3% of the world's electricity consumption, and are even called the factories of the digital age. Supplying datacenters with clean renewable energy is therefore essential to help mitigate climate change. The vast majority of cloud providers that claim to use a green energy supply for their datacenters rely on the classical grid: they deploy solar panels or wind turbines elsewhere and sell the energy to electricity companies, which incurs energy losses as the electricity travels through the grid. Even though several efforts have been conducted at the computing level in datacenters partially powered by renewable energy sources, scheduling that considers on-site renewable energy sources and their variations, without connection to the grid, remains largely unexplored. This is the goal of the ANR DataZERO project, within which this thesis was carried out.

    Since energy efficiency in datacenters is directly related to the resource consumption of the computing nodes, performance optimization and efficient load scheduling are essential for energy saving. Today, cloud computing is the basis of datacenters, in either a public or a private fashion. The main particularity of our approach is that we consider a power envelope composed only of renewable energy as a constraint, hence with a variable amount of power available at each moment. Scheduling under this kind of constraint becomes more complex: without further checks, we are not assured that a running task will run until completion. We start by addressing the IT load scheduling of batch tasks, which are characterized by their release time, due date and resource demand, in a cloud datacenter while respecting the aforementioned power envelope. The data used for the batch tasks comes from datacenter traces containing CPU, memory and network values. The power envelopes considered represent an estimation, as would be provided by a power decision module, of the expected power production based on weather forecasts. The aim is to maximize the Quality of Service under a variable constraint on electrical power.

    Furthermore, we explore a workload composed of batch tasks and services, where resource consumption varies over time. The traces used for the service tasks originate from a business-critical datacenter. In this case we rely on the concept of phases, where each significant change in resource consumption constitutes a new phase of the given task. In this task model, phases can also receive fewer resources than requested; this reduction can impact the QoS and consequently the datacenter profit. We also include the concept of cross-correlation to evaluate where to place a task under a power curve, and which node is best suited for placing tasks together (i.e. sharing resources). Finally, considering the previous workload of batch tasks and services, we present an approach for handling unexpected events in the datacenter. More specifically, we focus on IT-related events such as tasks arriving at any time, demanding more or fewer resources than expected, or finishing at a different time than initially expected. We adapt the proposed algorithms to take actions depending on which event occurs, e.g. task degradation to reduce the impact on the datacenter profit.
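    No code accompanies this listing; as a hedged sketch of the cross-correlation idea mentioned above (sliding a task's expected power profile along the remaining power envelope and scoring each feasible start slot), one might write something like the following, where the scoring rule and all identifiers are assumptions rather than the thesis algorithms:

```python
# Hypothetical sketch: score each candidate start slot by correlating the task's
# power profile with the residual power envelope, keeping only feasible placements.
from typing import List, Optional

def best_start(residual_envelope: List[float], task_profile: List[float]) -> Optional[int]:
    """Start slot maximizing correlation with the remaining power, or None if no feasible slot."""
    best, best_score = None, float("-inf")
    for start in range(len(residual_envelope) - len(task_profile) + 1):
        window = residual_envelope[start:start + len(task_profile)]
        if any(w < p for w, p in zip(window, task_profile)):
            continue  # placing the task here would violate the envelope
        score = sum(w * p for w, p in zip(window, task_profile))  # cross-correlation at this lag
        if score > best_score:
            best, best_score = start, score
    return best

residual = [120.0, 300.0, 500.0, 480.0, 200.0]   # forecast power minus already-placed tasks
profile = [150.0, 150.0]                          # task's expected draw per slot
print(best_start(residual, profile))  # -> 2 (the best-supplied part of the envelope)
```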

    Scheduling in cloud data center powered by renewable energy only with mixed phases-based workload

    No full text
    Due to the worldwide increase in cloud, web-service and high-performance computing demands, datacenters are now known to be one of the biggest actors in energy consumption. In 2006 alone, datacenters were responsible for consuming 61.4 billion kWh in the United States. Globally, datacenters currently consume more energy than the entire United Kingdom, representing about 1.3% of the world's electricity consumption, and are even called the factories of the digital age. Supplying datacenters with clean renewable energy is therefore essential to help mitigate climate change. The vast majority of cloud providers that claim to use a green energy supply for their datacenters rely on the classical grid: they deploy solar panels or wind turbines elsewhere and sell the energy to electricity companies, which incurs energy losses as the electricity travels through the grid. Even though several efforts have been conducted at the computing level in datacenters partially powered by renewable energy sources, scheduling that considers on-site renewable energy sources and their variations, without connection to the grid, remains largely unexplored. This is the goal of the ANR DataZERO project, within which this thesis was carried out.

    Since energy efficiency in datacenters is directly related to the resource consumption of the computing nodes, performance optimization and efficient load scheduling are essential for energy saving. Today, cloud computing is the basis of datacenters, in either a public or a private fashion. The main particularity of our approach is that we consider a power envelope composed only of renewable energy as a constraint, hence with a variable amount of power available at each moment. Scheduling under this kind of constraint becomes more complex: without further checks, we are not assured that a running task will run until completion. We start by addressing the IT load scheduling of batch tasks, which are characterized by their release time, due date and resource demand, in a cloud datacenter while respecting the aforementioned power envelope. The data used for the batch tasks comes from datacenter traces containing CPU, memory and network values. The power envelopes considered represent an estimation, as would be provided by a power decision module, of the expected power production based on weather forecasts. The aim is to maximize the Quality of Service under a variable constraint on electrical power.

    Furthermore, we explore a workload composed of batch tasks and services, where resource consumption varies over time. The traces used for the service tasks originate from a business-critical datacenter. In this case we rely on the concept of phases, where each significant change in resource consumption constitutes a new phase of the given task. In this task model, phases can also receive fewer resources than requested; this reduction can impact the QoS and consequently the datacenter profit. We also include the concept of cross-correlation to evaluate where to place a task under a power curve, and which node is best suited for placing tasks together (i.e. sharing resources). Finally, considering the previous workload of batch tasks and services, we present an approach for handling unexpected events in the datacenter. More specifically, we focus on IT-related events such as tasks arriving at any time, demanding more or fewer resources than expected, or finishing at a different time than initially expected. We adapt the proposed algorithms to take actions depending on which event occurs, e.g. task degradation to reduce the impact on the datacenter profit.
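    Likewise, as a hedged sketch of the phase-based task model with degradation described above (a phase granted fewer resources than requested lowers the task's revenue), with an invented linear revenue rule and hypothetical names:

```python
# Hypothetical sketch: a service task as a sequence of phases; granting a phase
# fewer CPUs than requested degrades it, which in turn reduces the task's revenue.
from dataclasses import dataclass
from typing import List

@dataclass
class Phase:
    duration_s: float
    cpus_requested: float
    cpus_granted: float

    @property
    def degradation(self) -> float:
        """0.0 = fully served, 1.0 = no resources at all."""
        return 1.0 - min(self.cpus_granted, self.cpus_requested) / self.cpus_requested

@dataclass
class ServiceTask:
    phases: List[Phase]
    full_revenue: float

    def revenue(self) -> float:
        """Illustrative rule: revenue shrinks with the time-weighted average degradation."""
        total = sum(p.duration_s for p in self.phases)
        avg_deg = sum(p.degradation * p.duration_s for p in self.phases) / total
        return self.full_revenue * (1.0 - avg_deg)

task = ServiceTask(
    phases=[Phase(600, cpus_requested=4, cpus_granted=4),
            Phase(300, cpus_requested=8, cpus_granted=4)],   # this phase runs degraded
    full_revenue=10.0,
)
print(round(task.revenue(), 2))  # 8.33
```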

    Mutida: A Rights Management Protocol for Distributed Storage Systems Without Fully Trusted Nodes

    No full text
    Several distributed storage solutions that do not rely on a central server have been proposed over the last few years. Most of them are deployed on public networks on the Internet. However, these solutions often do not provide an access-rights mechanism that enables users to control who can access a specific file or piece of data. In this article, we propose Mutida (from the Latin word “Aditum” meaning “access”), a protocol that allows the owner of a file to delegate access rights to another user. This access right can then be delegated to a computing node to process the piece of data. The mechanism relies on the encryption of the data, public key/value pair storage to register the access control list, and a function executed locally by the nodes to compute the decryption key. After presenting the mechanism, its advantages and limitations, we show that the proposed mechanism has similar functionalities to Wave, an authorization framework with transitive delegation. However, Wave does not require fully trusted nodes. We implement our approach in a Java software program and evaluate it on the Grid’5000 testbed. We compare our approach to an approach based on a protocol relying on Shamir key reconstruction, which provides similar features.
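    The abstract names the building blocks (encrypted data, a public key/value ACL store, a locally computed decryption key) without giving the construction; the toy below only conveys that flavor of delegation and is emphatically not the Mutida protocol (the masking scheme and all identifiers are invented for illustration):

```python
# Toy illustration only (NOT the Mutida construction): delegation by storing,
# in a public key/value map, a masked copy of the file key that only the
# intended grantee can unmask with a secret it already holds.
import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

public_store = {}                      # stands in for the public key/value ACL storage
file_key = secrets.token_bytes(32)     # symmetric key protecting the file's ciphertext

# Owner grants access to "alice": mask the file key with a value derived from
# a secret only alice knows, and publish the masked key as the ACL entry.
alice_secret = secrets.token_bytes(32)
mask = hashlib.sha256(b"file-42" + alice_secret).digest()
public_store[("file-42", "alice")] = xor(file_key, mask)

# Alice (or a compute node she delegates the derived mask to) recomputes the key locally.
recovered = xor(public_store[("file-42", "alice")],
                hashlib.sha256(b"file-42" + alice_secret).digest())
assert recovered == file_key
print("access granted, key recovered")
```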

    Mutida: A Rights Management Protocol for Distributed Storage Systems Without Fully Trusted Nodes

    No full text
    Several distributed storage solutions that do not rely on a central server have been proposed over the last few years. Most of them are deployed on public networks on the Internet. However, these solutions often do not provide an access-rights mechanism that enables users to control who can access a specific file or piece of data. In this article, we propose Mutida (from the Latin word "Aditum" meaning "access"), a protocol that allows the owner of a file to delegate access rights to another user. This access right can then be delegated to a computing node to process the piece of data. The mechanism relies on the encryption of the data, public key/value pair storage to register the access control list, and a function executed locally by the nodes to compute the decryption key. After presenting the mechanism, its advantages and limitations, we show that the proposed mechanism has similar functionalities to Wave, an authorization framework with transitive delegation. However, Wave does not require fully trusted nodes. We implement our approach in a Java software program and evaluate it on the Grid'5000 testbed. We compare our approach to an approach based on a protocol relying on Shamir key reconstruction, which provides similar features.

    Phase-Based Tasks Scheduling in Data Centers Powered Exclusively by Renewable Energy

    No full text
    Data centers are nowadays considered the factories of the digital age, currently responsible for consuming more energy than the entire United Kingdom. On the other hand, the total global capacity of renewable power increases continuously. The combination of these two factors calls for new approaches to designing data centers powered only by renewable energy sources. Our work focuses on task scheduling optimization under a power envelope, and on how to handle power starvation, i.e. when the available power does not provide sufficient resources to execute a given workload. To do so, we utilize the concepts of task degradation and cross-correlation to decide where to place the tasks, in order to reduce the data center's profit degradation. The results show that our algorithm obtains a more than 34% increase in profit compared to algorithms from the literature, while fulfilling the power profile and resource constraints.
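    No implementation is given in the abstract; purely as a hedged sketch of the power-starvation case it mentions (the forecast power cannot cover the planned load, answered here by degrading the least profitable tasks first), with an invented proportional power model and hypothetical names:

```python
# Hypothetical sketch: when forecast power cannot cover the planned load,
# degrade the least profitable tasks first until the plan fits the envelope.
from typing import Dict, List

def resolve_starvation(tasks: List[Dict], available_power_w: float,
                       min_fraction: float = 0.5) -> float:
    """Scale down tasks' power shares (lowest profit-per-watt first); return the deficit left."""
    demand = sum(t["power_w"] for t in tasks)
    deficit = demand - available_power_w
    for task in sorted(tasks, key=lambda t: t["profit"] / t["power_w"]):
        if deficit <= 0:
            break
        reducible = task["power_w"] * (1.0 - min_fraction)   # never degrade below min_fraction
        cut = min(reducible, deficit)
        task["power_w"] -= cut
        deficit -= cut
    return max(deficit, 0.0)

tasks = [{"name": "batch-1", "power_w": 200.0, "profit": 5.0},
         {"name": "svc-1",   "power_w": 300.0, "profit": 12.0}]
print(resolve_starvation(tasks, available_power_w=420.0))  # 0.0: deficit absorbed by degrading batch-1
```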