7 research outputs found

    Green IT Segment Analysis: An Academic Literature Review

    Research on Green Information Technology (IT) has become a prevalent theme within Green Information Systems (IS) research. This article reviews 98 papers published on Green IT between 2007 and 2013 to facilitate future research and to provide a retrospective analysis of existing knowledge and its gaps. While some researchers have discussed phenomena such as Green IT itself, motivations for Green IT and the Green IT adoption lifecycle, others have examined the importance of Green IT implementation at the organisational and individual levels. Throughout the literature, scholars seek to portray a constructive relationship between IT and the environment. Our analysis provides an assessment of the status of information systems literature on Green IT, along with a taxonomy of segments of Green IT publications. Future research opportunities are identified based on the review.

    A Dynamic Power Management Schema for Multi-Tier Data Centers

    Power consumption and the efficient use of computers, especially in large data centers, are issues of great concern as they relate to global warming. Data centers play an important role in IT infrastructures because of their huge power consumption. This thesis explores the sleep state of data center servers under specific conditions such as setup time, identifies the optimal number of servers, and examines their potential to greatly increase energy efficiency in data centers. We use a dynamic power management policy based on a mathematical model. Our methodology determines the optimal number of servers required in each tier while increasing servers' setup time after sleep mode to reduce power consumption. A reactive approach is used to validate the results and the energy efficiency by calculating the average power consumption of each server under a specific sleep mode and setup time. We introduce a methodology that uses average power consumption to calculate the Normalized-Performance-Per-Watt in order to evaluate power efficiency. Our results indicate that the proposed schema is beneficial for data centers with high setup times.
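    The abstract's two metrics can be made concrete with a small sketch. This is a hedged illustration, not the thesis's model: the function names, the busy/sleep/setup cycle structure, and all the wattages are assumptions chosen for the example; the paper's exact definition of Normalized-Performance-Per-Watt may differ.

    ```python
    def average_power(p_busy, p_sleep, p_setup, t_busy, t_sleep, t_setup):
        """Time-weighted average power of one server over a busy/sleep/setup cycle."""
        total_time = t_busy + t_sleep + t_setup
        energy = p_busy * t_busy + p_sleep * t_sleep + p_setup * t_setup
        return energy / total_time

    def normalized_perf_per_watt(jobs_completed, duration, p_avg, peak_perf_per_watt):
        """Throughput per watt, normalized against a peak performance-per-watt figure."""
        throughput = jobs_completed / duration      # jobs per second
        return (throughput / p_avg) / peak_perf_per_watt

    # Illustrative cycle: 60 s busy at 200 W, 30 s asleep at 10 W, 10 s setup at 250 W.
    p_avg = average_power(200.0, 10.0, 250.0, 60.0, 30.0, 10.0)   # 148 W on average
    nppw = normalized_perf_per_watt(120, 100.0, p_avg, 0.01)
    ```

    The setup term is what makes high setup times interesting: a long, power-hungry wake-up can erase the savings from sleeping, which is the trade-off the thesis evaluates.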

    Markov Decision Process Based Energy-Efficient On-Line Scheduling for Slice-Parallel Video Decoders on Multicore Systems

    We consider the problem of energy-efficient on-line scheduling for slice-parallel video decoders on multicore systems. We assume that each processor is Dynamic Voltage Frequency Scaling (DVFS) enabled, such that it can independently trade off performance for power while taking the video decoding workload into account. In the past, scheduling and DVFS policies in multicore systems have been formulated heuristically due to the inherent complexity of the on-line multicore scheduling problem. The key contribution of this report is that we rigorously formulate the problem as a Markov decision process (MDP), which simultaneously takes into account the on-line scheduling and per-core DVFS capabilities; the power consumption of the processor cores and caches; and the loss-tolerant and dynamic nature of the video decoder's traffic. In particular, we model the video traffic using a Directed Acyclic Graph (DAG) to capture the precedence constraints among frames in a Group of Pictures (GOP) structure, while also accounting for the fact that frames have different display/decoding deadlines and non-deterministic decoding complexities. The objective of the MDP is to minimize long-term power consumption subject to a minimum Quality of Service (QoS) constraint related to the decoder's throughput. Although MDPs notoriously suffer from the curse of dimensionality, we show that, with appropriate simplifications and approximations, the complexity of the MDP can be mitigated. We implement a slice-parallel version of H.264 on a multiprocessor ARM (MPARM) virtual platform simulator, which provides cycle-accurate and bus signal-accurate simulation for different processors. We use this platform to generate realistic video decoding traces with which we evaluate the proposed on-line scheduling algorithm in Matlab.
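    The shape of such an MDP can be sketched with a deliberately tiny toy: the state is the number of frames queued for decoding, the action is a core frequency level, and the cost trades per-step power against a penalty for missing deadlines (queue overflow). Everything here is illustrative and assumed; the report's actual MDP has a far richer state (DAG precedence, caches, per-core DVFS) than this value-iteration sketch.

    ```python
    import numpy as np

    # Toy MDP: state = frames waiting in the decoder queue (0..3),
    # action = frequency level (0 = sleep/low, decodes nothing; 1 = high, decodes 2).
    # All numbers are illustrative, not taken from the paper.
    POWER = [1.0, 3.0]      # per-step power cost of each frequency level
    SERVE = [0, 2]          # frames decoded per step at each level
    ARRIVALS = 1            # frames arriving per step (deterministic toy)
    QOS_PENALTY = 10.0      # cost of a queue overflow (stands in for a deadline miss)
    N_STATES, GAMMA = 4, 0.95

    def step(s, a):
        """Next queue length and immediate cost for state s under action a."""
        nxt = s - min(s, SERVE[a]) + ARRIVALS
        cost = POWER[a] + (QOS_PENALTY if nxt >= N_STATES else 0.0)
        return min(nxt, N_STATES - 1), cost

    def value_iteration(iters=500):
        """Minimize expected discounted cost; returns values and the greedy policy."""
        V = np.zeros(N_STATES)
        for _ in range(iters):
            V = np.array([min(step(s, a)[1] + GAMMA * V[step(s, a)[0]]
                              for a in (0, 1)) for s in range(N_STATES)])
        policy = [min((0, 1), key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
                  for s in range(N_STATES)]
        return V, policy
    ```

    The resulting policy has the qualitative character the paper describes: run at low power while the backlog is small, and scale the frequency up as the queue approaches the deadline-miss region.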

    Managing server energy and reducing operational cost for online service providers

    The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. This rapid rise in energy consumption poses a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but necessary. This dissertation tackles the challenges of both reducing the energy consumption of server systems and reducing costs for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve energy, manage their electricity costs, and lower their carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered.
    Two energy-efficient strategies are applied to the server system and the cooling system respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The matrix takes into account resource limitations, VM operation overheads, server reliability and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
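    The VM/PM mapping probability matrix idea can be illustrated with a minimal sketch. This is an assumed simplification: placement probability here is proportional only to a per-PM efficiency score (zero where capacity is lacking), whereas the dissertation's matrix also weighs operation overheads and server reliability; all names and numbers are hypothetical.

    ```python
    import random

    # Hypothetical cluster: per-PM power efficiency (work per watt) and free CPU units.
    pms = [
        {"name": "pm0", "efficiency": 2.0, "free_cpu": 8},
        {"name": "pm1", "efficiency": 1.0, "free_cpu": 8},
        {"name": "pm2", "efficiency": 3.0, "free_cpu": 2},
    ]

    def mapping_probabilities(vm_cpu, pms):
        """One row of the VM/PM matrix: probability of placing this VM on each PM,
        proportional to efficiency, zero where the PM lacks capacity."""
        scores = [pm["efficiency"] if pm["free_cpu"] >= vm_cpu else 0.0 for pm in pms]
        total = sum(scores)
        return [s / total for s in scores] if total else None

    def place_vm(vm_cpu, pms, rng=random):
        """Sample a host PM from the probability row and commit the capacity."""
        probs = mapping_probabilities(vm_cpu, pms)
        if probs is None:
            return None                      # no PM can host this VM
        pm = rng.choices(pms, weights=probs)[0]
        pm["free_cpu"] -= vm_cpu
        return pm["name"]
    ```

    Sampling (rather than always picking the single best PM) spreads load across efficient hosts, which is one plausible reading of why a probability matrix is used instead of a deterministic mapping.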

    Multi-disciplinary Green IT Archival Analysis: A Pathway for Future Studies

    With the growth of information technology (IT), there is growing global concern about the environmental impact of such technologies. As such, academics in several research disciplines consider research on green IT a vibrant theme. While the disparate knowledge in each discipline is gaining substantial momentum, we need a consolidated multi-disciplinary view of the salient findings of each research discipline for green IT research to reach its full potential. We reviewed 390 papers published on green IT from 2007 to 2015 in three disciplines: computer science, information systems and management. The prevailing literature demonstrates the value of this consolidated approach for advancing our understanding of the complex global issue of environmental sustainability. We provide an overarching theoretical perspective to consolidate multi-disciplinary findings and to encourage information systems researchers to develop an effective cumulative tradition of research.

    Factors affecting the adoption of green data centres in Nigeria.

    Masters Degree. University of KwaZulu-Natal, Durban. Green technology adoption is a reasonable commitment for organisations that operate data centres, given the world's environmental crisis concerning electronic waste and the emission of harmful gases, amongst other environmental concerns. Countries worldwide, especially developed countries such as the United States of America, have improved their data centres for environmental sustainability. However, most organisations in developing countries have yet to improve their level of environmental sustainability in the area of Information Technology. The adoption of green data centres in Nigeria is essential because of its influence on the environment. Anecdotal evidence suggests that most organisations in developing countries lack efforts to go green; this may be attributed to a lack of knowledge on reducing land space and technological components, ultimately affecting productivity. Various factors influence the adoption of green technology, and this study aims to determine these factors in the context of green data centres. This study identified factors that affect the adoption of green data centres in Nigeria using a descriptive qualitative research approach. Interview questions were aligned to the technology-organisation-environment (TOE) framework. Thematic data analysis using NVivo software was used to find themes reflecting the factors affecting the adoption of green data centres in Nigeria. Results indicate that lack of awareness, technical difficulty, lack of management support and inadequate policies for green data centres are the predominant factors affecting green data centre adoption.

    Allocation et réallocation de services pour les économies d'énergie dans les clusters et les clouds

    Cloud computing has become an important paradigm in the computing landscape over recent years. Its principle is to provide decentralized, on-demand services and to let clients consume resources on a pay-as-you-go model. The increasing demand for this type of service leads cloud providers to grow their infrastructures to the point where energy consumption and the associated operating costs become significant. Each cloud service provider has to serve different types of requests, and infrastructure managers must host all of these service types together. That is why, in this thesis, we tackled energy-efficient resource management in clouds. We first modeled and studied the initial service-allocation problem, computing approximate solutions with heuristics and comparing them to the optimal solution obtained with a linear-program solver. We then extended the resource model to allow a more global approach, integrating the inherent heterogeneity of machines and the cooling infrastructure, and validated the model through simulation. Services must cope with different workload phases as well as utilization spikes, so we further extended the allocation model to include the dynamicity of requests and resource usage, the ability to power servers on or off, and the cost of migrating a service from one host to another. We implemented a simulated cloud infrastructure that controls both the execution of the services and their placement. Our approach thus reduces the global energy consumption of the infrastructure while limiting performance degradation as much as possible.
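    The initial-allocation problem described above can be sketched as a greedy heuristic over a heterogeneous cluster: place each service on the machine that adds the least power, counting the idle power of switching a machine on. This is a hedged sketch of the general technique, not the thesis's exact model or its linear program; the machine parameters and service demands are invented for illustration.

    ```python
    # Hypothetical machine model: idle power (W), dynamic power per CPU unit (W),
    # CPU capacity, and current load.
    machines = [
        {"idle": 100.0, "per_unit": 5.0, "capacity": 16, "load": 0},
        {"idle": 60.0,  "per_unit": 8.0, "capacity": 8,  "load": 0},
    ]

    def added_power(m, demand):
        """Extra power drawn if `demand` CPU units land on machine m
        (turning on an idle machine costs its idle power)."""
        turn_on = m["idle"] if m["load"] == 0 else 0.0
        return turn_on + m["per_unit"] * demand

    def allocate(services, machines):
        """Greedy allocation: place each service (largest first) on the feasible
        machine that adds the least power. Returns {service: machine index or None}."""
        placement = {}
        for name, demand in sorted(services.items(), key=lambda kv: -kv[1]):
            feasible = [m for m in machines if m["capacity"] - m["load"] >= demand]
            if not feasible:
                placement[name] = None       # no machine can host this service
                continue
            best = min(feasible, key=lambda m: added_power(m, demand))
            best["load"] += demand
            placement[name] = machines.index(best)
        return placement
    ```

    Counting idle power as a switch-on cost is what makes the heuristic consolidate services onto already-running machines, which is the basic lever behind the Vary-On/Vary-Off-style savings such allocation schemes target; comparing its result against an exact solver, as the thesis does, quantifies the gap the heuristic leaves.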