
    Thermal-aware cloud middleware to reduce cooling needs

    Get PDF
    As we live in a data-driven world increasingly oriented toward the internet, the need for data centers is growing. A main limitation for building cloud infrastructures is their energy consumption. Moreover, their design is far from perfect: servers are not the only components consuming power, as cooling systems are responsible for roughly half of the consumption. Cooling costs can be reduced by intelligent scheduling, in our case through virtual machine migrations. In this paper, we propose a dynamic reconfiguration based on the evolution of server temperatures and load. The idea is to spread heat production to reduce cooling costs and to consolidate the workload when possible to reduce server costs. The challenge resides in satisfying these opposing objectives. We tested our algorithm on an experimental test bed and managed to cap the temperature of the data center room while still optimizing server use and without impacting application performance.
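    The sketch below is an illustrative reading of the reconfiguration idea, not the paper's actual middleware: spread VMs away from hot servers when a temperature threshold is exceeded, otherwise consolidate when load is low. Server names, thresholds, and the migrate() stub are assumptions.

```python
# Simplified thermal-aware reconfiguration loop (illustrative only).

HOT_THRESHOLD_C = 30.0      # spread heat above this inlet temperature (assumed)
LOW_LOAD_THRESHOLD = 0.3    # consolidate below this average utilization (assumed)

servers = {
    "node1": {"temp_c": 32.5, "load": 0.8, "vms": ["vm1", "vm2"]},
    "node2": {"temp_c": 24.0, "load": 0.2, "vms": ["vm3"]},
    "node3": {"temp_c": 23.5, "load": 0.1, "vms": []},
}

def migrate(vm, src, dst):
    """Stand-in for a live-migration call to the virtualization layer."""
    servers[src]["vms"].remove(vm)
    servers[dst]["vms"].append(vm)
    print(f"migrating {vm}: {src} -> {dst}")

def reconfigure():
    hot = [s for s, d in servers.items() if d["temp_c"] > HOT_THRESHOLD_C and d["vms"]]
    coolest = min(servers, key=lambda s: servers[s]["temp_c"])
    if hot:
        # Spread heat: move one VM from each hot server to the coolest server.
        for src in hot:
            migrate(servers[src]["vms"][0], src, coolest)
    elif sum(d["load"] for d in servers.values()) / len(servers) < LOW_LOAD_THRESHOLD:
        # Consolidate: pack VMs onto the most loaded server and free the others.
        target = max(servers, key=lambda s: servers[s]["load"])
        for src, d in servers.items():
            if src != target:
                for vm in list(d["vms"]):
                    migrate(vm, src, target)

reconfigure()
```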

    ExaMon-X: a Predictive Maintenance Framework for Automatic Monitoring in Industrial IoT Systems

    Get PDF
    In recent years, the Industrial Internet of Things (IIoT) has led to significant steps forward in many industries, thanks to the exploitation of several technologies, ranging from Big Data processing to Artificial Intelligence (AI). Among the various IIoT scenarios, large-scale data centers can reap significant benefits from adopting Big Data analytics and AI-boosted approaches, since these technologies enable effective predictive maintenance. However, most currently available off-the-shelf solutions are not ideally suited to the HPC context: for example, they do not sufficiently take into account the very heterogeneous data sources and the privacy issues that hinder the adoption of cloud solutions, or they do not fully exploit the computing capabilities available in loco in a supercomputing facility. In this paper, we tackle this issue and propose a holistic, vertical IIoT framework for predictive maintenance in supercomputers. The framework is based on a lightweight Big Data monitoring infrastructure, specialized databases suited for heterogeneous data, and a set of high-level AI-based functionalities tailored to the specific needs of HPC actors. We present the deployment and assess the usage of this framework on several in-production HPC systems.
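    As a minimal sketch of the kind of check such a predictive maintenance pipeline can run on monitoring streams (not the ExaMon-X code), the example below flags anomalous telemetry with a rolling z-score. The sample readings, window size, and threshold are invented.

```python
# Rolling z-score anomaly flagging on a single telemetry stream (illustrative).

from statistics import mean, stdev
from collections import deque

WINDOW = 20        # samples of history kept per metric (assumed)
Z_THRESHOLD = 3.0  # flag readings this many standard deviations from the mean

history = deque(maxlen=WINDOW)

def check(sample: float) -> bool:
    """Return True if the new sample looks anomalous given recent history."""
    anomalous = False
    if len(history) >= 5 and stdev(history) > 0:
        z = abs(sample - mean(history)) / stdev(history)
        anomalous = z > Z_THRESHOLD
    history.append(sample)
    return anomalous

# Example: stable fan-speed readings followed by a sudden spike.
readings = [4200, 4190, 4210, 4205, 4195, 4200, 4198, 4202, 6400]
for t, r in enumerate(readings):
    if check(r):
        print(f"t={t}: reading {r} flagged for inspection")
```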

    Holistic Resource Management for Sustainable and Reliable Cloud Computing:An Innovative Solution to Global Challenge

    Get PDF
    Minimizing the energy consumption of servers within cloud computing systems is of utmost importance to cloud providers for reducing operational costs and enhancing service sustainability by consolidating services onto fewer active servers. Moreover, providers must also provision high levels of availability and reliability, hence cloud services are frequently replicated across servers, which subsequently increases server energy consumption and resource overhead. These two objectives present a potential conflict within cloud resource management decision making, which must balance service consolidation against replication to minimize energy consumption whilst maximizing server availability and reliability, respectively. In this paper, we propose a cuckoo optimization-based energy-reliability aware resource scheduling technique (CRUZE) for holistic management of cloud computing resources, including servers, networks, storage, and cooling systems. CRUZE clusters and executes heterogeneous workloads on provisioned cloud resources, enhancing energy efficiency and reducing the carbon footprint in datacenters without adversely affecting cloud service reliability. We evaluate the effectiveness of CRUZE against existing state-of-the-art solutions using the CloudSim toolkit. Results indicate that our proposed technique reduces energy consumption by 20.1% whilst improving reliability and CPU utilization by 17.1% and 15.7% respectively, without affecting other Quality of Service parameters.
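    The toy sketch below is not CRUZE itself; it only illustrates the shape of the objective such a scheduler explores, scoring candidate workload-to-server mappings by an energy term and a replication/spread penalty, with a plain random search standing in for cuckoo search. All numbers and weights are assumptions.

```python
# Fitness-driven search over workload-to-server mappings (illustrative only).

import random

N_SERVERS = 6
N_TASKS = 10
IDLE_W, PEAK_W = 100.0, 250.0   # assumed per-server power model (watts)
W_ENERGY, W_RELIABILITY = 0.6, 0.4

def fitness(mapping):
    """Lower is better: normalized energy of active servers plus a spread penalty."""
    load = [0] * N_SERVERS
    for server in mapping:
        load[server] += 1
    active = [l for l in load if l > 0]
    energy = sum(IDLE_W + (PEAK_W - IDLE_W) * l / N_TASKS for l in active)
    # Fewer active servers means less redundancy headroom; penalize it.
    reliability_penalty = (N_SERVERS - len(active)) / N_SERVERS
    return W_ENERGY * energy / (N_SERVERS * PEAK_W) + W_RELIABILITY * reliability_penalty

best = [random.randrange(N_SERVERS) for _ in range(N_TASKS)]
for _ in range(500):  # stand-in for cuckoo search's Levy-flight exploration
    candidate = best[:]
    candidate[random.randrange(N_TASKS)] = random.randrange(N_SERVERS)
    if fitness(candidate) < fitness(best):
        best = candidate

print("best mapping:", best, "fitness:", round(fitness(best), 4))
```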

    Enabling stream processing for people-centric IoT based on the fog computing paradigm

    Get PDF
    The world of machine-to-machine (M2M) communication is gradually moving from vertical single-purpose solutions to multi-purpose and collaborative applications interacting across industry verticals, organizations, and people - a world of the Internet of Things (IoT). The dominant approach for delivering IoT applications relies on the development of cloud-based IoT platforms that collect all the data generated by the sensing elements and centrally process the information to create real business value. In this paper, we present a system that follows the Fog Computing paradigm, in which the sensor resources, as well as the intermediate layers between embedded devices and cloud computing datacenters, participate by providing computational, storage, and control resources. We discuss the design aspects of our system and present a pilot deployment for evaluating its performance in a real-world environment. Our findings indicate that Fog Computing can address the ever-increasing amount of data that is inherent in an IoT world through effective communication among all elements of the architecture.
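    A minimal sketch of the fog idea described above, under assumed names: a fog node pre-processes a raw sensor stream locally and forwards only compact summaries upstream instead of shipping every reading to the cloud. The window size and the forward() stub are illustrative, not the system's API.

```python
# Edge-side windowed aggregation before forwarding to the cloud tier (illustrative).

WINDOW_SIZE = 5  # readings aggregated per summary (assumed)

def forward(summary):
    """Stand-in for publishing a summary to the cloud tier (e.g., over MQTT/HTTP)."""
    print("forwarding to cloud:", summary)

def fog_node(stream):
    window = []
    for reading in stream:
        window.append(reading)
        if len(window) == WINDOW_SIZE:
            forward({
                "count": len(window),
                "min": min(window),
                "max": max(window),
                "avg": sum(window) / len(window),
            })
            window.clear()

# Example: a people-centric sensor emitting noise-level readings.
fog_node([52.1, 53.4, 51.9, 60.2, 55.0, 54.3, 53.8, 52.6, 58.1, 57.7])
```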

    MOSAIC: A Multi-Objective Optimization Framework for Sustainable Datacenter Management

    Full text link
    In recent years, cloud service providers have been building and hosting datacenters across multiple geographical locations to provide robust services. However, the geographical distribution of datacenters introduces growing pressure on both local and global environments, particularly when it comes to water usage and carbon emissions. Unfortunately, efforts to reduce the environmental impact of such datacenters often lead to an increase in the cost of datacenter operations. To co-optimize the energy cost, carbon emissions, and water footprint of datacenter operation from a global perspective, we propose a novel framework for multi-objective sustainable datacenter management (MOSAIC) that integrates adaptive local search with a collaborative decomposition-based evolutionary algorithm to intelligently manage geographical workload distribution and datacenter operations. Our framework sustainably allocates workloads to datacenters while taking into account multiple geography- and time-based factors, including renewable energy sources, variable energy costs, power usage efficiency, carbon factors, and water intensity in energy. Our experimental results show that, compared to the best-known prior frameworks, MOSAIC can achieve a 27.45x speedup and a 1.53x improvement in Pareto Hypervolume while reducing the carbon footprint by up to 1.33x, the water footprint by up to 3.09x, and energy costs by up to 1.40x. In the simultaneous three-objective co-optimization scenario, MOSAIC achieves a cumulative improvement across all objectives (carbon, water, cost) of up to 4.61x compared to the state of the art.
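    To make the multi-factor placement decision concrete, the toy sketch below ranks candidate datacenters by a weighted, normalized combination of energy cost, carbon intensity, and water intensity. This scalarized scoring is far simpler than MOSAIC's decomposition-based evolutionary search; the site data and weights are invented.

```python
# Weighted scoring of datacenters for geographical workload placement (illustrative).

sites = {
    "dc_eu":   {"cost_per_kwh": 0.22, "carbon_g_per_kwh": 120, "water_l_per_kwh": 1.1},
    "dc_us":   {"cost_per_kwh": 0.11, "carbon_g_per_kwh": 380, "water_l_per_kwh": 1.9},
    "dc_asia": {"cost_per_kwh": 0.15, "carbon_g_per_kwh": 520, "water_l_per_kwh": 2.4},
}
WEIGHTS = {"cost_per_kwh": 0.4, "carbon_g_per_kwh": 0.3, "water_l_per_kwh": 0.3}

def normalized(metric, value):
    """Min-max normalize one objective across all candidate sites."""
    values = [s[metric] for s in sites.values()]
    lo, hi = min(values), max(values)
    return 0.0 if hi == lo else (value - lo) / (hi - lo)

def score(site):
    """Lower is better: weighted sum of normalized cost, carbon, and water terms."""
    return sum(WEIGHTS[m] * normalized(m, sites[site][m]) for m in WEIGHTS)

ranking = sorted(sites, key=score)
print("placement preference:", ranking)
```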

    Optimization of Sensor Location in Data Center

    Get PDF
    Demand for data centers has increased significantly due to the rapid growth of ICT technology. This brings along "green" issues in data centers, such as energy consumption, heat generation, and cooling requirements. These issues can be addressed by "Green of/by IT", in the context of operating costs as well as environmental impacts. Accommodating a temperature monitoring system in every corner of a data center is cost inefficient, so optimized sensor placement locations need to be determined to reduce the monitoring cost: we need to decide which locations to observe in order to obtain the most effective results at minimum cost. Furthermore, it is argued that in-depth knowledge of the historical data of the data center’s highly dynamic operating conditions will lead to better management of data center resources. Thus, this project aims to create a wireless temperature monitoring system with a location optimization algorithm to optimize temperature sensor deployment locations. Furthermore, real-time temperature data collection and monitoring can be used to predict the next state of the temperature and detect potential anomalies in heat generation in the data center, so that a quick cooling response can be invoked – Green by IT.
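    One simple way to frame the "which locations to observe at minimum cost" question is greedy set cover, sketched below under assumed data: pick, within a sensor budget, the candidate spots that cover the most not-yet-covered rack positions. The coverage sets and budget are hypothetical, and the project's actual optimization algorithm may differ.

```python
# Greedy coverage-based sensor placement (illustrative only).

# Candidate sensor spots mapped to the rack positions each one can observe.
coverage = {
    "spot_a": {"rack1", "rack2", "rack3"},
    "spot_b": {"rack3", "rack4"},
    "spot_c": {"rack4", "rack5", "rack6"},
    "spot_d": {"rack2", "rack6"},
}
BUDGET = 2  # number of sensors we can afford (assumed)

covered, chosen = set(), []
for _ in range(BUDGET):
    # Pick the spot adding the most not-yet-covered racks.
    best = max(coverage, key=lambda s: len(coverage[s] - covered))
    chosen.append(best)
    covered |= coverage[best]
    del coverage[best]

print("place sensors at:", chosen, "covering:", sorted(covered))
```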

    Cloud Computing cost and energy optimization through Federated Cloud SoS

    Get PDF
    The two most significant differentiators amongst contemporary Cloud Computing service providers are increased green energy use and datacenter resource utilization. This work addresses these two issues from a systems architectural optimization viewpoint. The approach proposed herein allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, our proposed approach creates an alternative paradigm for a Federated Cloud SoS. The proposed paradigm employs a novel control methodology that is tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities for handling sudden variations in service demand as well as for maximizing usage of time-varying green energy supplies. We analyze the core SoS requirements, concept synthesis, and functional architecture with an eye toward avoiding inadvertent cascading conditions, and we suggest a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. In our approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means. The report also analyzes optimal computing generation methods, optimal energy utilization for computing generation, and a procedure for building optimal datacenters using a unique hardware computing system design based on the openCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.
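    As a rough illustration of the energy-aware dispatch idea (a sketch under assumed data, not the work's control methodology): a federation-level dispatcher shifts deferrable work toward the member cloud reporting the largest renewable surplus, falling back to grid price when no surplus exists. Member data and the dispatch() stub are invented.

```python
# Federation-level placement of deferrable work by green-energy availability (illustrative).

members = {
    "cloud_a": {"renewable_surplus_kw": 0.0,  "grid_price": 0.14},
    "cloud_b": {"renewable_surplus_kw": 35.0, "grid_price": 0.19},
    "cloud_c": {"renewable_surplus_kw": 12.0, "grid_price": 0.10},
}

def dispatch(job, member):
    """Stand-in for handing the job to the chosen federation member."""
    print(f"dispatching {job} to {member}")

def place(job):
    green = {m: d for m, d in members.items() if d["renewable_surplus_kw"] > 0}
    if green:
        # Prefer the member with the largest renewable surplus right now.
        target = max(green, key=lambda m: green[m]["renewable_surplus_kw"])
    else:
        # Otherwise, minimize grid energy cost.
        target = min(members, key=lambda m: members[m]["grid_price"])
    dispatch(job, target)

place("batch-analytics-job")
```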

    WattScope: Non-intrusive Application-level Power Disaggregation in Datacenters

    Full text link
    Datacenter capacity is growing exponentially to satisfy the increasing demand for emerging computationally-intensive applications, such as deep learning. This trend has led to concerns over datacenters' increasing energy consumption and carbon footprint. The basic prerequisite for optimizing a datacenter's energy- and carbon-efficiency is accurately monitoring and attributing energy consumption to specific users and applications. Since datacenter servers tend to be multi-tenant, i.e., they host many applications, server- and rack-level power monitoring alone does not provide insight into their resident applications' energy usage and carbon emissions. At the same time, current application-level energy monitoring and attribution techniques are intrusive: they require privileged access to servers and coordinated support in hardware and software, which is not always possible in the cloud. To address the problem, we design WattScope, a system for non-intrusively estimating the power consumption of individual applications using external measurements of a server's aggregate power usage, without requiring direct access to the server's operating system or applications. Our key insight is that, based on an analysis of production traces, the power characteristics of datacenter workloads, e.g., low variability, low magnitude, and high periodicity, are highly amenable to disaggregation of a server's total power consumption into application-specific values. WattScope adapts and extends a machine learning-based technique for disaggregating building power and applies it to server- and rack-level power meter measurements in data centers. We evaluate WattScope's accuracy on a production workload and show that it yields high accuracy, e.g., often <10% normalized mean absolute error, and is thus a potentially useful tool for datacenters in externally monitoring application-level power usage.
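    The sketch below is a deliberately simplified take on the disaggregation idea, not WattScope's model: assuming known per-application power signatures over time, the aggregate power trace is split into application shares with a least-squares fit. The signatures and trace are synthetic.

```python
# Least-squares disaggregation of an aggregate power trace (illustrative only).

import numpy as np

# Rows: time steps; columns: candidate applications' unit power signatures (watts).
signatures = np.array([
    [40.0, 10.0],
    [42.0, 30.0],
    [41.0, 55.0],
    [40.0, 32.0],
    [39.0, 12.0],
])
# Synthetic aggregate trace: app_a present once, app_b twice, plus meter noise.
aggregate = signatures @ np.array([1.0, 2.0]) + np.random.normal(0, 0.5, 5)

# Estimate how much of each signature is present in the aggregate trace.
weights, *_ = np.linalg.lstsq(signatures, aggregate, rcond=None)
weights = np.clip(weights, 0, None)  # power contributions cannot be negative

for app, w in zip(["app_a", "app_b"], weights):
    print(f"{app}: estimated contribution {w:.2f}x its signature")
```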

    Self-Organizing maps for detecting abnormal thermal behavior in data centers

    Get PDF
    The increasing success of Cloud Computing applications and online services has contributed to the unsustainability of data center facilities in terms of energy consumption. Higher resource demand has increased the electricity required by computation and cooling resources, leading to power shortages and outages, especially in urban infrastructures. Current energy reduction strategies for Cloud facilities usually disregard the data center topology, the contribution of cooling consumption, and the scalability of optimization strategies. Our work tackles the energy challenge by proposing a temperature-aware VM allocation policy based on a Trust-and-Reputation System (TRS). A TRS meets the requirements of inherently distributed environments such as data centers and allows the implementation of autonomous and scalable VM allocation techniques. For this purpose, we model the relationships between the different computational entities, synthesizing this information in one single metric. This metric, called reputation, is used to optimize the allocation of VMs in order to reduce energy consumption. We validate our approach with a state-of-the-art Cloud simulator using real Cloud traces. Our results show a considerable reduction in energy consumption, reaching up to 46.16% savings in computing power and 17.38% savings in cooling, without QoS degradation and while keeping servers below thermal redlining. Moreover, our results show the limitations of the PUE ratio as a metric for energy efficiency. To the best of our knowledge, this paper is the first approach combining Trust-and-Reputation systems with Cloud Computing VM allocation.
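    As a minimal sketch of the single-metric idea (not the paper's TRS), the example below collapses each server's thermal and power state into one reputation-like score and allocates the next VM to the highest-scoring host. The metrics, weights, and redline value are assumptions.

```python
# Reputation-style scoring of hosts for temperature-aware VM allocation (illustrative).

REDLINE_C = 35.0  # assumed thermal redline; hosts at or above it receive no work

hosts = {
    "host1": {"inlet_temp_c": 26.0, "power_w": 210.0, "max_power_w": 300.0},
    "host2": {"inlet_temp_c": 33.5, "power_w": 150.0, "max_power_w": 300.0},
    "host3": {"inlet_temp_c": 24.0, "power_w": 120.0, "max_power_w": 300.0},
}

def reputation(h):
    """Higher is better: thermal headroom and power headroom, equally weighted."""
    d = hosts[h]
    if d["inlet_temp_c"] >= REDLINE_C:
        return 0.0
    thermal_headroom = (REDLINE_C - d["inlet_temp_c"]) / REDLINE_C
    power_headroom = 1.0 - d["power_w"] / d["max_power_w"]
    return 0.5 * thermal_headroom + 0.5 * power_headroom

target = max(hosts, key=reputation)
print("allocate next VM to:", target, "reputation:", round(reputation(target), 3))
```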