
    Energy efficient task scheduling in data center

    First of all, I am thankful to God for His blessings and for showing me the right direction. With His mercy, it has been made possible for me to reach so far. Foremost, I would like to express my sincere gratitude to my advisor, Prof. Durga Prasad Mohapatra, for the continuous support of my M.Tech study and research, and for his patience, motivation, enthusiasm, and immense knowledge. I am thankful for his continual support, encouragement, and invaluable suggestions. His guidance helped me throughout the research and the writing of this thesis. I could not have imagined having a better advisor and mentor for my M.Tech study. Besides my advisor, I extend my thanks to our HOD, Prof. S. K. Rath, and to Prof. B. D. Sahoo for their valuable advice and encouragement. I express my gratitude to all the staff members of the Computer Science and Engineering Department for providing me all the facilities required for the completion of my thesis work. I would like to thank all my friends, especially Dilip Kumar and Alok Pandey, for their support. Last but not least, I am highly grateful to all my family members for their inspiration and ever-encouraging moral support, which enables me to pursue my studies.

    Energy-Efficient Flow Scheduling and Routing with Hard Deadlines in Data Center Networks

    The power consumption of the enormous number of network devices in data centers has emerged as a big concern to data center operators. Despite many traffic-engineering-based solutions, very little attention has been paid to performance-guaranteed energy-saving schemes. In this paper, we propose a novel energy-saving model for data center networks that schedules and routes "deadline-constrained flows", where the transmission of every flow has to be completed before a rigorous deadline, the most critical requirement in production data center networks. Based on the speed-scaling and power-down energy-saving strategies for network devices, we aim to explore the most energy-efficient way of scheduling and routing flows on the network, as well as determining the transmission speed for every flow. We consider two general versions of the problem. For the version with only flow scheduling, where routes of flows are pre-given, we show that it can be solved in polynomial time and we develop an optimal combinatorial algorithm for it. For the version with joint flow scheduling and routing, we prove that it is strongly NP-hard and cannot have a Fully Polynomial-Time Approximation Scheme (FPTAS) unless P=NP. Based on a relaxation and randomized rounding technique, we provide an efficient approximation algorithm that guarantees a provable performance ratio with respect to a polynomial of the total number of flows. Comment: 11 pages, accepted by ICDCS'1
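    The speed-scaling idea in the abstract above can be sketched in a few lines. With a convex power curve P(s) = s**alpha (alpha > 1), the least-energy way to finish a flow of a given size before its deadline is the slowest constant speed that just meets that deadline. The function name, the cubic exponent, and the example numbers below are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch: minimum-energy constant-speed transmission of one
# deadline-constrained flow under a convex power model P(s) = s**alpha.
def min_energy_speed(size, deadline, alpha=3.0):
    """Return the slowest constant speed finishing `size` bits by `deadline`,
    and the energy spent at that speed (power * duration)."""
    if deadline <= 0:
        raise ValueError("deadline must be positive")
    speed = size / deadline        # slowest feasible constant speed
    energy = (speed ** alpha) * deadline
    return speed, energy

# Example: an 8 Gbit flow with a 4-second deadline.
speed, energy = min_energy_speed(size=8e9, deadline=4.0)
```

Because P(s) is convex, transmitting faster than necessary and then idling always costs more energy than this slowest feasible speed, which is why speed scaling helps.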

    ENERGY EFFICIENT LOAD BALANCING FOR CLOUD DATA CENTER

    Cloud computing is the latest trend in large-scale distributed computing. It provides diverse on-demand services over distributed resources such as servers, software, and databases. One of the challenging problems in cloud data centers is managing the load of different reconfigurable virtual machines over one another. Thus, in the near future of the cloud computing field, providing a mechanism for efficient resource management will be very significant. Many load balancing algorithms have already been implemented and executed to manage resources efficiently and adequately. The objective of this paper is to analyze the shortcomings of existing algorithms and to implement a new algorithm that gives an optimized load balancing result.

    Energy Efficient Servers: Blueprints for Data Center Optimization

    Energy Efficient Servers: Blueprints for Data Center Optimization introduces engineers and IT professionals to the power management technologies and techniques used in energy efficient servers. The book includes a deep examination of different features used in processors, memory, interconnects, I/O devices, and other platform components. It outlines the power and performance impact of these features and the role firmware and software play in initialization and control. Using examples from cloud, HPC, and enterprise environments, the book demonstrates how various power management technologies are utilized across a range of server utilization levels. It teaches readers how to monitor, analyze, and optimize their environment to best suit their needs. It shares optimization techniques used by data center administrators and system optimization experts at the world's most advanced data centers.

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that not only save energy for the environment but also reduce operational costs. This paper presents a vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between the various data center infrastructures (i.e., the hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and device power usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost saving under dynamic workload scenarios. Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 201
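    An energy-efficient allocation policy of the kind this abstract describes can be illustrated with a small sketch: place each VM on the host whose power draw rises the least. The linear idle-to-peak power model and the host parameters below are common assumptions for this style of policy, not the paper's exact model.

```python
# Hedged sketch of power-aware VM placement: pick the host with the
# smallest power increase. hosts maps name -> (cpu_utilization, idle_watts,
# peak_watts); all values here are illustrative.
def least_power_increase_host(hosts, vm_util):
    """Return the host name whose power draw grows least when the VM is added."""
    def increase(name):
        util, idle_w, peak_w = hosts[name]
        if util + vm_util > 1.0:
            return float("inf")  # VM does not fit on this host
        # Linear model: the VM adds its utilization share of the
        # idle-to-peak power range.
        return (peak_w - idle_w) * vm_util
    return min(hosts, key=increase)

best = least_power_increase_host(
    {"h1": (0.3, 100, 250), "h2": (0.5, 90, 300), "h3": (0.9, 80, 200)},
    vm_util=0.2,
)
```

Here h3 cannot fit the VM, and h1's narrower idle-to-peak range makes its power increase smaller than h2's, so h1 is chosen.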

    Energy Efficient Cloud Data Center

    Cloud computing has quickly become a widely accepted computing model; still, research on cloud computing is at an early stage. Cloud computing faces distinct issues in the fields of security, power consumption, software frameworks, QoS, and standardization. Efficient energy management is one of the most challenging research issues. The key and central services of a cloud computing system are SaaS, PaaS, and IaaS. In this thesis, a model of an energy-efficient cloud data center is proposed. The cloud data center is the main part of the IaaS layer of a cloud computing system, and it absorbs a big part of the aggregate energy of the system. Our goal is to provide a better understanding of the design issues of energy management in the IaaS layer. Servers and processors are the main components of the data center. Virtualization technologies, the key feature of the cloud computing environment, provide the ability to migrate VMs between physical servers of the cloud data center to improve energy efficiency. This is called dynamic server consolidation, and it has a direct impact on service response time. An energy-efficient cloud data center reduces the overall energy consumed by the data center. This results in a reduction of the cost incurred by the data center, a longer life for hardware components, a greener IT environment, and a more user-friendly system. Many VM placement and server consolidation techniques have been proposed. They do not give an optimal solution in every circumstance; they show optimum results only for certain data sets. They do not consider VM placement and migration simultaneously, and they do not attempt to minimize the number of VM migrations during server consolidation. Moreover, aggressive consolidation can result in performance degradation and may lead to SLA violations. So there is a trade-off between performance and energy.
    A number of heuristics, protocols, and architectures have been explored and investigated for server consolidation using VM migration to reduce energy consumption. The primary objective is to minimize the overall energy consumed by servers without violating the SLA. Our proposed model and scheme show better results on most data sets. The approach is based on virtualization techniques, VMs, their placement, and their migration. Our study focuses on problems such as the huge amount of energy consumed by servers and processors. Here, energy consumption is reduced without violating the SLA and while meeting a certain level of QoS. Server consolidation is performed with a minimum number of VM migrations. We try to achieve maximum utilization of resources, although utilization is not compared with the existing scheme. Our scheme may show different results for different configurations of the data center on the same data set. The problem is formulated as a knapsack problem, and the proposed scheme inherits features from greedy bin-packing heuristics such as BF, FF, BFD, and FFD. For simulation, the input data set is taken as random values; these random values resemble the general data sets used in real scenarios and by existing schemes. The simulation shows that the proposed model achieves the desired objectives for a number of data sets, while for other data sets some percentage of the objectives is lost.
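    The First-Fit Decreasing (FFD) heuristic named in the abstract above, one of the greedy bin-packing strategies the scheme builds on, can be sketched as follows. Treating each server as a bin and each VM's resource demand as an item size, FFD sorts VMs by decreasing demand and places each on the first already-active server that fits, powering on a new server only when none fits. The capacities and demands below are illustrative CPU units, not the thesis data set.

```python
# Hedged sketch of First-Fit Decreasing (FFD) VM placement for server
# consolidation: fewer active servers means less energy.
def ffd_place(vm_demands, server_capacity):
    """vm_demands: {vm_name: demand}. Returns (placement, active_servers)."""
    servers = []     # remaining capacity of each powered-on server
    placement = {}   # vm_name -> server index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:              # first active server that fits
                servers[i] -= demand
                placement[vm] = i
                break
        else:                               # no fit: power on a new server
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

# Four VMs with demands 6, 5, 4, 3 on servers of capacity 10
# consolidate onto two servers: {6, 4} and {5, 3}.
placement, active = ffd_place({"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 3}, 10)
```

FFD is a natural fit for the knapsack-style formulation the thesis uses, since sorting by decreasing demand tends to pack large VMs first and leave less fragmented capacity behind.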

    Analysis of Barriers to Green Data Centers Implementation in Malaysia, using Interpretive Structural Modelling (ISM)

    The data center market is expected to grow at 7% during 2022-2027, and the market size is expected to reach RM2.0 billion in 2027. The high impact of ICT contributes to high energy consumption, which affects the environment and greenhouse gas emissions. One way to reduce energy consumption is to introduce green data centers. Implementing a green data center involves the introduction of energy-efficient measures for the data center, including the use of energy-efficient ICT equipment and energy-efficient facilities equipment, especially the HVAC system. The successful introduction of green data centers involves new technologies, and there will be barriers that need to be addressed. Based on the study, green awareness and awareness of the benefits of switching to green data centers are essential for green data center initiatives in Malaysia.

    Calculating the minimum bounds of energy consumption for cloud networks

    This paper aims to facilitate the energy-efficient operation of an integrated optical network and IT infrastructure. In this context, we propose an energy-efficient routing algorithm for provisioning IT services that originate from specific source sites and need to be executed by suitable IT resources (e.g. data centers). The routing approach followed is anycast, since the requirement for the IT services is the delivery of results, while the exact location of the execution of the job can be freely chosen. In this scenario, energy efficiency is achieved by identifying the least energy-consuming IT and network resources required to support the services, enabling the switching off of any unused network and IT resources. Our results show significant energy savings that can reach up to 55% compared to energy-unaware schemes, depending on the granularity with which a data center is able to switch servers on and off.
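    The anycast selection described above can be sketched simply: since a job may run at any data center, pick the (data center, path) pair whose network links plus IT resources consume the least total energy. The topology-free dictionaries and energy values below are made-up illustrations, not the paper's model.

```python
# Hedged sketch of anycast target selection: minimize the sum of the
# energy to reach a data center and the energy to execute the job there.
def pick_anycast_target(path_energy, dc_energy):
    """path_energy[dc]: network energy of the least-consuming route to dc;
    dc_energy[dc]: IT energy to run the job at dc. Returns (dc, total)."""
    best = min(path_energy, key=lambda dc: path_energy[dc] + dc_energy[dc])
    return best, path_energy[best] + dc_energy[best]

# Illustrative costs: dc_a wins despite the longer route, because its
# servers are cheaper to run for this job.
dc, energy = pick_anycast_target(
    path_energy={"dc_a": 40, "dc_b": 25, "dc_c": 60},
    dc_energy={"dc_a": 100, "dc_b": 130, "dc_c": 90},
)
```

Anything not selected by this minimization (unused links, unused data centers) is a candidate for switching off, which is where the paper's energy savings come from.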