
    GreenHDFS: data-centric and cyber-physical energy management system for big data clouds

    The explosion of Big Data has led to a rapid increase in the popularity of Big Data analytics. With the increase in the sheer volume of data that needs to be stored and processed, the storage and computing demands of Big Data analytics workloads are growing exponentially, leading to a surge in extremely large-scale Big Data cloud platforms and resulting in burgeoning energy costs and environmental impact. The sheer size of Big Data lends it significant data-movement inertia, which, coupled with the network bandwidth constraints inherent in the cloud's cost-efficient, scale-out economic paradigm, makes data locality a necessity for high performance in Big Data environments. Instead of sending data to the computations, as has been the norm, computations are sent to the data to take advantage of the higher data-local performance. State-of-the-art run-time energy management techniques are job-centric in nature and rely on thermal- and energy-aware job placement, job consolidation, or job migration to derive energy cost savings. Unfortunately, the data-locality requirement of the compute model limits the applicability of these techniques, as they are inherently data-placement-agnostic and provide energy savings only at a significant performance cost in Big Data environments.

    Big Data analytics clusters have moved away from the shared network-attached storage (NAS) or storage area network (SAN) model to a fully clustered, commodity storage model that allows a direct access path between the storage servers and the clients in the interest of high scalability and performance. The underlying storage system distributes file chunks and replicas across the servers for high performance, load balancing, and resiliency. However, with files distributed across all servers, any server may be participating in the reading, writing, or computation of a file chunk at any time. Such a storage model complicates scale-down-based power management by making it hard to generate significant periods of idleness in Big Data analytics clusters.

    GreenHDFS is based on the observation that data needs to be a first-class object in energy management in Big Data environments to allow high data access performance. GreenHDFS takes a novel data-centric, cyber-physical approach to reduce compute (i.e., server) and cooling operating energy costs. On the physical side, GreenHDFS recognizes that all servers are not alike in the Big Data analytics cloud and is aware of the variations in the thermal profiles of the servers. On the cyber side, GreenHDFS is aware that all data is not alike and knows the differences in the data semantics (i.e., computational job arrival rate, size, popularity, and evolution life spans) of the Big Data placed in the Big Data analytics cloud. Armed with this cyber-physical knowledge, coupled with its insights, predictive data models, and run-time information, GreenHDFS performs proactive, cyber-physical, thermal- and energy-aware file placement and data-classification-driven scale-down, which implicitly results in thermal- and energy-aware job placement in the Big Data analytics cloud compute model. GreenHDFS's data-centric energy- and thermal-management approach reduces energy costs without any associated performance impact, allows scale-down of a subset of servers in spite of the unique challenges that the Big Data analytics cloud poses to scale-down, and ensures the thermal reliability of the servers in the cluster.

    GreenHDFS evaluation results with month-long real-world traces from a production Big Data analytics cluster at Yahoo! show up to a 59% reduction in cooling energy costs, while performing 9x better than state-of-the-art data-agnostic cooling techniques, up to a 26% reduction in server operating energy costs, and a significant reduction in the total cost of ownership (TCO) of the Big Data analytics cluster. GreenHDFS provides a software-based mechanism to increase energy proportionality even with non-energy-proportional server components.

    Free cooling, or air- and water-side economization (i.e., using outside air or natural water resources to cool the data center), is gaining popularity as it can result in significant cooling energy cost savings. There is also a drive towards increasing the cooling set point of the cooling systems to make them more efficient. If the ambient temperature of the outside air or the cooling set point temperature is high, the inlet temperatures of the servers rise, which reduces their ability to dissipate computational heat and results in an increase in server temperatures. Servers are rated to operate safely only within a certain temperature range, beyond which failure rates increase. GreenHDFS considers the differences in the thermal-reliability-driven load-tolerance upper bounds of the servers in its predictive thermal-aware file placement and places file chunks in a manner that ensures that server temperatures do not exceed the temperature upper bound. Thus, by ensuring thermal reliability at all times and by lowering the overall temperature of the servers, GreenHDFS enables data centers to enjoy the energy-saving economizer mode for longer periods of time and also enables an increase in the cooling set point.

    There are a substantial number of data centers that still rely fully on traditional air conditioning. These data centers cannot always be retrofitted with economizer modes or hot- and cold-aisle air containment, as incorporating economizers and air containment may require space for ductwork and heat exchangers that may not be available in the data center. Existing data centers may also not be favorably located geographically; air-side economization is more viable in locations where ambient air temperatures are low for most of the year and humidity is in the tolerable range. GreenHDFS provides a software-based approach to enhance the cooling efficiency of such traditional data centers, as it lowers the overall temperature in the cluster, makes the thermal profile much more uniform, and reduces hot-air recirculation, resulting in lowered cooling energy costs.
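    The placement rule described above can be illustrated with a small sketch. The code below is an illustrative approximation, not GreenHDFS's actual implementation: the server names, thermal parameters, and hot/cold split are assumptions, and the temperature prediction is reduced to a single linear term.

```python
# Hypothetical sketch of thermal- and data-classification-aware chunk placement.
# Names (Server, place_chunk, HOT/COLD) are illustrative, not GreenHDFS's actual API.
from dataclasses import dataclass, field

HOT, COLD = "hot", "cold"

@dataclass
class Server:
    name: str
    inlet_temp_c: float          # current inlet temperature (physical side)
    temp_upper_bound_c: float    # thermal-reliability limit for this server
    temp_per_load_c: float       # assumed temperature rise per unit of load
    chunks: list = field(default_factory=list)

def predicted_temp(server: Server, extra_load: float) -> float:
    """Predict server temperature after accepting extra load."""
    return server.inlet_temp_c + server.temp_per_load_c * (len(server.chunks) + extra_load)

def place_chunk(servers, chunk_id: str, data_class: str, load: float = 1.0) -> Server:
    """Place a chunk on a server without exceeding its thermal upper bound.

    Hot data goes to the server with the most thermal headroom (best data-local
    performance); cold data is packed onto already-loaded servers so the rest
    can be scaled down.
    """
    safe = [s for s in servers if predicted_temp(s, load) <= s.temp_upper_bound_c]
    if not safe:
        raise RuntimeError("no thermally safe server for chunk " + chunk_id)
    if data_class == HOT:
        target = max(safe, key=lambda s: s.temp_upper_bound_c - predicted_temp(s, load))
    else:  # COLD: consolidate to create idle, scale-down-ready servers
        target = max(safe, key=lambda s: len(s.chunks))
    target.chunks.append(chunk_id)
    return target

if __name__ == "__main__":
    cluster = [Server("s1", 24.0, 35.0, 0.5), Server("s2", 28.0, 35.0, 0.5)]
    print(place_chunk(cluster, "blk_0001", HOT).name)
```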

    Evolutionary Neural Network Based Energy Consumption Forecast for Cloud Computing

    The success of Hadoop, an open-source framework for massively parallel and distributed computing, is expected to drive the energy consumption of cloud data centers to new highs as service providers continue to add new infrastructure, services, and capabilities to meet market demands. While current research on data center airflow management, HVAC (Heating, Ventilation and Air Conditioning) system design, workload distribution and optimization, and energy-efficient computing hardware and software is contributing to improved energy efficiency, energy forecasting in cloud computing remains a challenge. This paper reports an evolutionary-computation-based modeling and forecasting approach to this problem. In particular, an evolutionary neural network is developed and structurally optimized to forecast the energy load of a cloud data center. The results, both in terms of forecasting speed and accuracy, suggest that the evolutionary neural network approach to energy consumption forecasting for cloud computing is highly promising.
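    As a rough illustration of the structural-optimization idea (not the paper's actual network or data), the sketch below evolves the hidden-layer size of a small forecasting network with a toy genetic loop; the synthetic load trace, the use of scikit-learn's MLPRegressor, and the GA parameters are all assumptions.

```python
# Minimal sketch: evolve a neural network's structure to forecast energy load.
import random
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
load = 50 + 10 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 1, t.size)  # synthetic kW trace

LAGS = 8                                   # forecast from the previous 8 readings
X = np.column_stack([load[i:-(LAGS - i)] for i in range(LAGS)])
y = load[LAGS:]
split = int(0.8 * len(y))

def fitness(hidden_sizes):
    """Lower validation MSE = fitter structure."""
    model = MLPRegressor(hidden_layer_sizes=hidden_sizes, max_iter=300, random_state=0)
    model.fit(X[:split], y[:split])
    return np.mean((model.predict(X[split:]) - y[split:]) ** 2)

def mutate(sizes):
    """Perturb one layer's width; structure (not weights) is what evolves here."""
    sizes = list(sizes)
    i = random.randrange(len(sizes))
    sizes[i] = max(2, sizes[i] + random.choice([-4, -2, 2, 4]))
    return tuple(sizes)

population = [(random.randint(4, 32),) for _ in range(6)]
for generation in range(5):
    scored = sorted(population, key=fitness)
    parents = scored[:3]                                   # keep the fittest structures
    population = parents + [mutate(random.choice(parents)) for _ in range(3)]

print("best structure:", sorted(population, key=fitness)[0])
```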

    Energy consumption in big data environments – a systematic mapping study

    Big Data is a term that describes a large volume of structured and unstructured data. Big Data must be acquired, stored, analyzed, and visualized by means of non-conventional methods that normally require a large set of resources, which includes energy consumption. Although Big Data is not new as a phenomenon, the explosion of interest in the literature is recent, and its study in new scenarios presents several gaps. On the other hand, Green IT is also a growing field in computing, given the increasing role of IT in the world's energy consumption. Green IT aims to reduce IT-related energy consumption and the overall environmental impact of IT. In order to investigate the reported initiatives regarding Big Data and Green IT with a focus on energy consumption, the authors conducted a systematic mapping study on the topic. The search strategy used resulted in 28 studies relevant to the topic. We found that the majority of the studies present algorithms designed to reduce the energy consumption of data centres. The rest of the studies present benchmarks and energy measurements, reviews, proposals of hardware-based solutions, as well as studies that give an overview of one or more aspects of Big Data.

    Efficient Mapping of Large-scale Data under Heterogeneous Big Data Computing Systems

    As Hadoop ecosystems become increasingly important for practitioners of large-scale data analysis, they also incur significant energy costs. This trend is driving up the need for designing energy-efficient Hadoop clusters in order to reduce operational costs and the carbon emissions associated with their energy consumption. However, despite extensive study of the problem, existing approaches to energy efficiency have not fully accounted for the heterogeneity of both workloads and machines. We find that heterogeneity-unaware task assignment strategies are detrimental to both the performance and the energy efficiency of Hadoop clusters. Our observations show that even heterogeneity-aware techniques that aim to reduce job completion time do not guarantee a reduction in the energy consumption of heterogeneous machines. We propose E-Ant, which aims to improve the overall energy consumption of a heterogeneous Hadoop cluster without sacrificing job performance. It adaptively schedules heterogeneous workloads on energy-efficient machines. E-Ant uses an ant colony optimization approach that generates task assignment solutions based on feedback about each job's energy consumption reported by the TaskTrackers. We also integrate the DVFS technique with E-Ant to further improve energy efficiency.
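    The ant-colony task-assignment loop can be sketched as follows. This is a generic illustration under assumed per-task energy figures; E-Ant's actual feedback from TaskTrackers and its DVFS integration are not reproduced here.

```python
# Generic ant-colony-optimization sketch for assigning tasks to heterogeneous
# machines by estimated energy cost. Energy table and parameters are assumptions.
import random

tasks = ["t0", "t1", "t2", "t3"]
machines = ["m_efficient", "m_average", "m_inefficient"]
energy = {  # assumed per-task energy (J); E-Ant would learn this from run-time feedback
    "m_efficient":   {"t0": 10, "t1": 12, "t2": 9,  "t3": 11},
    "m_average":     {"t0": 14, "t1": 15, "t2": 13, "t3": 14},
    "m_inefficient": {"t0": 20, "t1": 22, "t2": 19, "t3": 21},
}
pheromone = {(t, m): 1.0 for t in tasks for m in machines}

def build_assignment():
    """One ant builds a full assignment, biased by pheromone and energy heuristic."""
    assignment = {}
    for t in tasks:
        weights = [pheromone[(t, m)] * (1.0 / energy[m][t]) for m in machines]
        assignment[t] = random.choices(machines, weights=weights)[0]
    return assignment

def total_energy(assignment):
    return sum(energy[m][t] for t, m in assignment.items())

best = None
for iteration in range(50):
    ants = sorted((build_assignment() for _ in range(10)), key=total_energy)
    if best is None or total_energy(ants[0]) < total_energy(best):
        best = ants[0]
    for key in pheromone:                 # evaporation
        pheromone[key] *= 0.9
    for t, m in ants[0].items():          # reinforce the best ant's choices
        pheromone[(t, m)] += 1.0 / total_energy(ants[0])

print(best, total_energy(best))
```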

    Distributed storage optimization using multi-agent systems in Hadoop

    Understanding data and extracting information from it are the main objectives of data science, especially when it comes to big data. To achieve these goals, it is necessary to collect and process massive data sets arriving at the system in different formats and at great velocity. The Big Data era has brought new challenges in data storage and management, and existing state-of-the-art data storage and processing tools are poised to meet these challenges while posing challenges to the next generation of data. Big Data storage optimization is essential for improving the overall efficiency of Big Data systems by maximizing the use of storage resources. It also reduces the energy consumption of Big Data systems, resulting in financial savings, environmental protection, and improved system performance. Hadoop provides a solution for storing and analysing large quantities of data. However, Hadoop can encounter storage management problems due to its distributed nature and the management of large volumes of data. In order to meet future challenges, the system needs to manage its storage intelligently. The use of a multi-agent system presents a promising approach for efficiently managing hot and cold data in HDFS. These systems offer a flexible, distributed solution for solving complex problems. This work proposes an approach based on a multi-agent system capable of gathering information on data access activity in the HDFS cluster. Using this information, it classifies data according to its temperature (hot or cold) and makes decisions about data replication based on this classification. In addition, it compresses unused data to manage resources efficiently and reduce storage space usage.
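    A minimal sketch of the temperature-based classification and the replication/compression decisions might look like the following; the agent roles, thresholds, and method names are illustrative assumptions rather than the system's actual design.

```python
# Simplified hot/cold classification and replication/compression planning.
import time

HOT_ACCESSES_PER_DAY = 10          # assumed threshold separating hot from cold data

class MonitoringAgent:
    """Gathers access activity for files in the HDFS cluster."""
    def __init__(self):
        self.accesses = {}         # path -> list of access timestamps

    def record_access(self, path):
        self.accesses.setdefault(path, []).append(time.time())

    def accesses_last_day(self, path):
        cutoff = time.time() - 86400
        return sum(1 for ts in self.accesses.get(path, []) if ts >= cutoff)

class StorageAgent:
    """Classifies files by temperature and decides replication/compression."""
    def __init__(self, monitor):
        self.monitor = monitor

    def classify(self, path):
        hot = self.monitor.accesses_last_day(path) >= HOT_ACCESSES_PER_DAY
        return "hot" if hot else "cold"

    def plan(self, path):
        if self.classify(path) == "hot":
            return {"replication": 3, "compress": False}   # keep hot data replicated
        return {"replication": 1, "compress": True}        # shrink cold, unused data

monitor = MonitoringAgent()
for _ in range(12):
    monitor.record_access("/data/logs/day1.parquet")
print(StorageAgent(monitor).plan("/data/logs/day1.parquet"))
```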

    Energy policies for data-center monolithic schedulers

    Cloud computing and the data centers that support this paradigm are rapidly evolving in order to satisfy new demands. These ever-growing needs represent an energy-related challenge for achieving sustainability and cost reduction. In this paper, we define an expert and intelligent system that applies various energy policies. These policies are employed to maximize the energy efficiency of data-center resources by simulating a realistic environment and heterogeneous workload in a trustworthy tool. Around 20% of energy consumption, with its associated environmental and economic impact, can be saved in high-utilization scenarios without exerting any noticeable impact on data-center performance if an adequate policy is applied.
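    One of the simplest policies in this family, shutting down machines that stay idle beyond a timeout, can be sketched as below; the timeout and power figures are assumptions for illustration, not values from the paper.

```python
# Toy evaluation of an idle-timeout shutdown policy; all figures are assumed.
IDLE_TIMEOUT_S = 300       # shut a machine down after 5 minutes of idleness
IDLE_POWER_W = 100         # assumed idle draw of a powered-on machine
OFF_POWER_W = 5            # assumed residual draw when powered down

def energy_with_policy(idle_periods_s, timeout=IDLE_TIMEOUT_S):
    """Energy (J) consumed over the given idle periods when the policy is active."""
    total = 0.0
    for idle in idle_periods_s:
        if idle <= timeout:
            total += idle * IDLE_POWER_W                   # never shut down
        else:
            total += timeout * IDLE_POWER_W + (idle - timeout) * OFF_POWER_W
    return total

def energy_without_policy(idle_periods_s):
    return sum(idle_periods_s) * IDLE_POWER_W

idle_periods = [120, 900, 3600, 60, 1800]                  # seconds of idleness per machine
baseline = energy_without_policy(idle_periods)
with_policy = energy_with_policy(idle_periods)
print(f"saved {(baseline - with_policy) / baseline:.1%} of idle energy")
```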

    Energy Efficient Data-Intensive Computing With Mapreduce

    Power and energy consumption are critical constraints in data center design and operation. In data centers, MapReduce data-intensive applications demand significant resources and energy. Recognizing the importance and urgency of optimizing the energy usage of MapReduce applications, this work aims to provide instrumental tools to measure and evaluate MapReduce energy efficiency, and techniques to conserve energy without impacting performance. Energy conservation for data-intensive computing requires enabling technology to provide detailed and systemic energy information and to identify inefficiencies in the underlying system hardware and software. To address this need, we present eTune, a fine-grained, scalable energy profiling framework for data-intensive computing on large-scale distributed systems. eTune leverages performance monitoring counters (PMCs) on modern computer components and statistically builds power-performance correlation models. Using the learned models, eTune augments direct measurement with a software-based power estimator that runs on compute nodes and reports power at multiple levels, including node, core, memory, and disk, with high accuracy. Data-intensive computing differs from traditional high performance computing in that most execution time is spent moving data between storage devices, nodes, and components. Since data movements are potential performance and energy bottlenecks, we propose an analysis framework with methods and metrics for evaluating and characterizing the costly built-in MapReduce data movements. The revealed data-movement energy characteristics can be exploited in system design and resource allocation to improve the energy efficiency of data-intensive computing. Finally, we present an optimization technique that targets inefficient built-in MapReduce data movements to conserve energy without impacting performance. The optimization technique allocates the optimal number of compute nodes to applications and dynamically schedules processor frequency during execution based on data movement characteristics. Experimental results show significant energy savings, though the improvements depend on both workload characteristics and the policies of resource allocation and dynamic voltage and frequency scaling. As data volume doubles every two years and more data centers are put into production, energy consumption is expected to grow further. We expect these studies to provide direction and insight for building more energy-efficient data-intensive systems and applications, and the tools and techniques to be adopted by other researchers for their energy-efficiency studies.
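    The core of such a PMC-driven estimator is a regression from counter rates to measured power. The sketch below shows that idea with made-up counters and samples; eTune's actual model structure and counter set are not reproduced.

```python
# Fit a linear power-performance model from PMC rates against measured node power,
# then use it as a software power meter. Counter names and data are illustrative.
import numpy as np

# Hypothetical training samples. Columns: instructions/s, LLC misses/s,
# memory reads/s, disk I/Os per second.
pmc = np.array([
    [2.0e9, 1.0e6, 4.0e6, 200],
    [3.5e9, 2.5e6, 9.0e6, 150],
    [1.2e9, 0.5e6, 2.0e6, 800],
    [4.1e9, 3.0e6, 1.1e7, 100],
    [0.3e9, 0.1e6, 0.5e6, 900],
])
measured_power = np.array([160.0, 195.0, 140.0, 210.0, 120.0])  # watts, assumed

# Least-squares fit of P = w0 + w1*instr + w2*llc + w3*mem + w4*disk
X = np.column_stack([np.ones(len(pmc)), pmc])
weights, *_ = np.linalg.lstsq(X, measured_power, rcond=None)

def estimate_power(sample):
    """Software-based power estimate for a new PMC sample (no power meter needed)."""
    return float(np.dot(np.concatenate(([1.0], sample)), weights))

print(f"estimated power: {estimate_power([2.8e9, 1.8e6, 7.0e6, 300]):.1f} W")
```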

    Power Management in Heterogeneous MapReduce Cluster

    The growing expense of power in data centers, as compared to operating costs, has been a concern for the past several decades. It has been predicted that, without intervention, the energy cost will soon outgrow the infrastructure and operating costs. Therefore, it is of great importance to make data center clusters more energy efficient, which is also critical for avoiding system overheating and failures. In addition, energy inefficiency causes not only a loss of capital but also environmental pollution. Various power management (PM) strategies have been developed over the years to make systems more energy efficient and to counteract the sharply rising cost of electricity. However, it remains a challenge to make a system both power efficient and computation efficient due to many underlying system constraints. In this thesis, we investigate power management techniques for heterogeneous MapReduce clusters while also maintaining the required system QoS (Quality of Service). For a cluster that supports MapReduce jobs, it is necessary to develop a PM technique that also considers data availability. We develop our PM strategy by exploiting the fact that the servers in the system are underutilized most of the time. Hence, we first develop a model of our testbed and study how server utilization levels affect power consumption and system throughput. With the established models, we formulate and solve the power optimization problem for heterogeneous MapReduce clusters, controlling server utilization levels intelligently to minimize total power consumption. We have conducted simulations and shown the power savings achieved using our PM technique. We then validate some of our simulation results by running experiments on a real testbed. Our simulation and experimental data show that our PM strategy works well for heterogeneous MapReduce clusters that consist of a mix of power-efficient and power-inefficient servers. Adviser: Ying L
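    The utilization-control idea can be sketched as a greedy heuristic: meet the required throughput by loading the most power-efficient servers first and leaving the rest as shutdown candidates. The power and throughput figures below are assumptions, and the greedy rule stands in for the thesis's actual optimization.

```python
# Back-of-the-envelope sketch: meet a required cluster throughput while minimizing
# total power by loading the most power-efficient servers first.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    idle_power_w: float       # power at 0% utilization (server kept on)
    dynamic_power_w: float    # extra power at 100% utilization
    peak_tput: float          # tasks/s at 100% utilization

    def efficiency(self):
        """Tasks per joule at full load, used to rank servers."""
        return self.peak_tput / (self.idle_power_w + self.dynamic_power_w)

def assign_utilizations(servers, required_tput):
    """Greedy: saturate efficient servers; the rest get utilization None (shutdown)."""
    plan, remaining = {}, required_tput
    for s in sorted(servers, key=Server.efficiency, reverse=True):
        if remaining <= 0:
            plan[s.name] = None                 # candidate for shutdown
            continue
        u = min(1.0, remaining / s.peak_tput)
        plan[s.name] = u
        remaining -= u * s.peak_tput
    return plan

cluster = [Server("old", 180, 120, 50), Server("new", 90, 110, 80)]
print(assign_utilizations(cluster, required_tput=100))
```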

    Big Data – A State-of-the-Art

    The term Big Data has an increasing, and often tautological, occurrence in scientific publications. It is of interest how, and whether, data provisioning is able to support enterprises in handling and creating value from this emerging issue. Considering the growing number of publications and the fuzzy nature of the term, an overview is needed to avoid duplication, gain relevant findings, and identify potential research gaps. To address this issue, a general literature review is conducted, which extrapolates and clusters the discussed research fields and potential gaps. It becomes apparent that a large part of the research is technically driven. Moreover, no identified paper addresses the research area of functional data provisioning. This initiates further investigation into whether Big Data itself negates such an intention, or whether research has missed it and improvements regarding Big Data are possible.

    Energy and performance-aware scheduling and shut-down models for efficient cloud-computing data centers.

    This Doctoral Dissertation, presented as a set of research contributions, focuses on resource efficiency in data centers. This topic has been addressed mainly through the development of several energy-efficiency, resource-management, and scheduling policies, as well as the simulation tools required to test them in realistic cloud computing environments. Several models have been implemented in order to minimize energy consumption in cloud computing environments, among them: a) fifteen probabilistic and deterministic energy policies which shut down idle machines; b) five energy-aware scheduling algorithms, including several genetic algorithm models; c) a Stackelberg game-based strategy which models the competition between opposing requirements of cloud computing systems in order to dynamically apply the most suitable scheduling algorithms and energy-efficiency policies depending on the environment; and d) a productive analysis of the resource efficiency of several realistic cloud computing environments. A novel simulation tool called SCORE, able to simulate several data-center sizes, machine heterogeneity, security levels, workload compositions and patterns, scheduling strategies, and energy-efficiency strategies, was developed in order to test these strategies in large-scale cloud computing clusters. SCORE is open source and, among many other parameters, can also simulate three centralized resource managers: Monolithic, Two-level, and Shared-state; it reports more than fifty Key Performance Indicators (KPIs) covering overall performance, task distribution, and energy. These results show that more than 20% of energy consumption can be reduced in realistic high-utilization environments when proper policies are employed.
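    In the spirit of the energy-aware scheduling policies listed above, a minimal greedy sketch places each task on the machine whose power draw increases the least, powering machines on only when needed; the power model and rule are illustrative assumptions, not one of the dissertation's fifteen policies or five scheduling algorithms.

```python
# Greedy energy-aware task placement: choose the machine with the smallest
# incremental power draw. All power figures and slot counts are assumed.
MACHINES = {
    # name: (idle power W, power per running task W, task slots)
    "a": (120.0, 30.0, 4),
    "b": (80.0, 45.0, 2),
}
state = {name: {"on": False, "tasks": 0} for name in MACHINES}

def incremental_power(name):
    """Extra watts drawn if this machine accepts one more task."""
    idle, per_task, slots = MACHINES[name]
    if state[name]["tasks"] >= slots:
        return float("inf")                      # no free slot
    return per_task if state[name]["on"] else idle + per_task

def schedule(task_id):
    target = min(MACHINES, key=incremental_power)
    if incremental_power(target) == float("inf"):
        raise RuntimeError("cluster full")
    state[target]["on"] = True                   # power on lazily, only when needed
    state[target]["tasks"] += 1
    return target

for t in range(5):
    print(f"task {t} -> machine {schedule(t)}")
```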