5 research outputs found

    Latency, Energy and Carbon Aware Collaborative Resource Allocation with Consolidation and QoS Degradation Strategies in Edge Computing

    No full text
    Outstanding Paper Award. International audience. Edge Computing has emerged from the Cloud to tackle the increasingly stringent latency, reliability and scalability requirements of modern applications, mainly in the Internet of Things arena. To this end, data centers are pushed to the edge of the network to diversify services and bring them closer to the users. This spatial distribution offers a wide range of opportunities for self-consumption from local renewable energy sources, depending on the local weather conditions. However, scheduling the users' tasks so as to meet the service restrictions while consuming the most renewable energy and reducing the carbon footprint remains a challenge. In this paper, we design a nationwide Edge infrastructure and study its behavior under three typical electrical configurations involving a solar power plant, batteries and the grid. We then study a set of techniques that collaboratively allocate resources on the edge data centers to harvest renewable energy and reduce the environmental impact. These strategies also include energy-efficiency optimizations, by means of reasonable quality-of-service degradation and consolidation techniques at each data center, in order to reduce the need for brown energy. The simulation results show that combining these techniques increases the self-consumption of the platform by 7.83% and reduces the carbon footprint by 35.7% compared to the baseline algorithm. The optimizations also outperform classical energy-aware resource management algorithms from the literature. Yet, these techniques do not contribute equally to these performances, consolidation being the most efficient.
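The collaborative, renewable-aware placement idea described in the abstract can be illustrated with a toy greedy heuristic. This is an assumed sketch, not the authors' algorithm: the `DataCenter` class, the latency bound and the kW figures are all illustrative, and the real strategies also involve consolidation and QoS degradation, which are omitted here.

```python
# Toy sketch (not the paper's algorithm): place a task on the edge data
# center that meets the latency bound and has the most unused renewable
# (solar) capacity, resorting to grid "brown" energy only implicitly.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    latency_ms: float        # network latency from the user
    renewable_kw: float      # currently available solar power
    load_kw: float = 0.0     # power drawn by tasks placed so far

    def renewable_headroom(self) -> float:
        return self.renewable_kw - self.load_kw

def place_task(dcs, task_kw, max_latency_ms):
    """Pick the feasible DC (latency bound met) with the most renewable headroom."""
    feasible = [dc for dc in dcs if dc.latency_ms <= max_latency_ms]
    if not feasible:
        return None          # no DC satisfies the latency constraint
    best = max(feasible, key=lambda dc: dc.renewable_headroom())
    best.load_kw += task_kw
    return best

dcs = [DataCenter("edge-A", latency_ms=5,  renewable_kw=2.0),
       DataCenter("edge-B", latency_ms=12, renewable_kw=5.0),
       DataCenter("edge-C", latency_ms=40, renewable_kw=9.0)]

chosen = place_task(dcs, task_kw=1.0, max_latency_ms=20)
print(chosen.name)  # edge-B: within the latency bound with the most solar headroom
```

Here edge-C has the largest solar surplus but violates the 20 ms bound, so the heuristic trades some renewable harvesting for QoS, which is exactly the tension the paper studies.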

    Renewable Energy in Data Centers: the Dilemma of Electrical Grid Dependency and Autonomy Costs

    No full text
    International audience. Integrating larger shares of renewables in data centers' electricity mix is mandatory to reduce their carbon footprint. However, as they are intermittent and fluctuating, renewable energies alone cannot provide a 24/7 supply and must be combined with a secondary source. Finding the infrastructure configuration that is optimal for both renewable production and financial cost remains difficult. In this paper, we examine three scenarios with on-site renewable energy sources combined respectively with the electrical grid, with batteries alone, and with batteries plus a hydrogen storage system. The objectives are, first, to size the electric infrastructure optimally using combinations of standard microgrid approaches; second, to quantify the level of grid utilization when data centers consume/export electricity from/to the grid, so as to determine the effort required from the grid operator; and finally, to analyze the cost of the 100% autonomy provided by the battery-based configurations and to discuss their economic viability. Our results show that in the grid-dependent mode, 63.1% of the generated electricity has to be injected into the grid and retrieved later. Among the autonomous configurations, the cheapest one, which includes hydrogen storage, leads to a unit cost significantly higher than that of electricity supplied from a national power system in many countries.
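The grid-dependent accounting described above (energy self-consumed on-site versus injected into and later retrieved from the grid) can be sketched with a minimal energy-balance model. This is an illustrative assumption, not the paper's simulator: the time slots, the solar profile and the flat load are invented for the example, and no claim is made that they reproduce the reported 63.1% figure.

```python
# Toy energy balance (assumed model): per time slot, production covers the
# local load first; any surplus is injected into the grid and any deficit
# is drawn from it.

def grid_exchange(production_kwh, consumption_kwh):
    self_consumed = injected = drawn = 0.0
    for prod, cons in zip(production_kwh, consumption_kwh):
        onsite = min(prod, cons)        # energy consumed directly on-site
        self_consumed += onsite
        injected += prod - onsite       # surplus exported to the grid
        drawn += cons - onsite          # deficit covered by the grid
    return self_consumed, injected, drawn

prod = [0, 0, 3, 6, 6, 3, 0, 0]   # kWh per slot: solar peaks midday
cons = [2, 2, 2, 2, 2, 2, 2, 2]   # flat data-center load

sc, inj, drawn = grid_exchange(prod, cons)
print(sc, inj, drawn)  # 8.0 10.0 8.0
```

With this made-up profile, 10 of the 18 kWh produced must transit through the grid, which illustrates why the paper quantifies the effort the mismatch between solar production and a round-the-clock load imposes on the grid operator.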