
    Strategies for Increased Energy Awareness in Cloud Federations

    This chapter first identifies three scenarios that current energy-aware cloud solutions cannot handle as isolated IaaS systems, but whose federative efforts offer opportunities to be explored. These scenarios are centered around: (i) multi-datacenter cloud operators, (ii) commercial cloud federations, and (iii) academic cloud federations. Based on these scenarios, we identify energy-aware scheduling policies to be applied in the management solutions of cloud federations. Among other things, these policies should consider the behavior of independent administrative domains, the frequently contradicting goals of the participating clouds, and federation-wide energy consumption.
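    The chapter stays at the policy level; as a purely illustrative sketch (the member-cloud names, the linear power model, and the greedy placement rule are assumptions, not taken from the chapter), a federation-wide scheduler that respects each administrative domain's own admission limit might look like this:

```python
from dataclasses import dataclass

@dataclass
class MemberCloud:
    name: str
    idle_power_w: float       # power drawn by an active host at idle
    peak_power_w: float       # power drawn at full utilisation
    capacity_cores: int       # cores the domain is willing to lend to the federation
    used_cores: int = 0

    def marginal_energy(self, cores: int, hours: float) -> float:
        """Energy (Wh) added by hosting `cores` for `hours`; only the
        utilisation-proportional part of a linear power model is counted."""
        if self.used_cores + cores > self.capacity_cores:
            return float("inf")   # respect the domain's own admission limit
        delta_util = cores / self.capacity_cores
        return (self.peak_power_w - self.idle_power_w) * delta_util * hours

def place(request_cores: int, hours: float, federation: list) -> MemberCloud | None:
    """Greedy federation-wide policy: pick the member with the lowest marginal energy."""
    best = min(federation, key=lambda c: c.marginal_energy(request_cores, hours))
    if best.marginal_energy(request_cores, hours) == float("inf"):
        return None               # no member can host the request
    best.used_cores += request_cores
    return best

if __name__ == "__main__":
    feds = [MemberCloud("academic-A", 90, 220, 64),
            MemberCloud("commercial-B", 120, 300, 256)]
    chosen = place(8, 2.0, feds)
    print("placed on", chosen.name if chosen else "nowhere")
```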

    A Cloud-Oriented Green Computing Architecture for E-Learning Applications

    Cloud computing is a highly scalable and cost-effective infrastructure for running Web applications. E-learning is one such Web application that has gained increasing popularity in recent years as a comprehensive medium for global education and training systems. Developing e-Learning applications within the cloud computing environment enables users to access diverse software applications, share data, collaborate more easily, and keep their data safe in the infrastructure. However, the growing demand for Cloud infrastructure has drastically increased the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high operational cost, which reduces the profit margin of Cloud providers, but also leads to high carbon emissions, which are not environmentally friendly. Hence, energy-efficient solutions are required to minimize the impact of Cloud-oriented e-Learning on the environment. E-learning methods have drastically changed the educational environment, reduced the use of paper, and ultimately reduced the carbon footprint; e-Learning methodology is thus an example of Green computing. This paper therefore proposes a Cloud-Oriented Green Computing Architecture for e-Learning Applications (COGALA). E-Learning applications using COGALA can lower expenses, reduce energy consumption, and help organizations with limited IT resources deploy and maintain needed software in a timely manner. The paper also discusses the implications of this solution and future research directions to enable Cloud-Oriented Green Computing.

    Economic impact of energy saving techniques in cloud server

    In recent years, a great deal of research has been carried out in the fields of cloud computing and distributed systems to investigate and understand their performance. The economic impact of energy consumption is a major concern for large companies. Cloud computing companies (Google, Yahoo, Gaikai, ONLIVE, Amazon and eBay) use large data centers comprising virtual machines placed globally, which incur high power costs to maintain. Demand for energy is increasing day by day in IT firms, so cloud computing companies face challenges regarding the economic impact of power costs. Energy consumption depends on several factors, e.g., service level agreements, virtual machine selection techniques, optimization policies, and workload types. We address the energy-saving problem by enabling the dynamic voltage and frequency scaling (DVFS) technique for gaming data centers. DVFS is compared against non-power-aware and static threshold detection techniques. This helps service providers meet quality-of-service and quality-of-experience constraints while satisfying service level agreements. The CloudSim platform is used to implement a scenario in which game traces serve as the workload for testing the technique. Selecting better techniques can help gaming servers save energy costs and maintain a better quality of service for globally distributed users. The novelty of the work lies in investigating which technique behaves better, i.e., dynamic, static, or non-power-aware. The results demonstrate that less energy is consumed with the dynamic voltage and frequency scaling approach than with static threshold consolidation or the non-power-aware technique. Therefore, more economical quality of service can be provided to end users.
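    The abstract does not publish its power models, so the following sketch is only a rough illustration of the three compared policies: a non-power-aware host always drawing peak power, a static-threshold scheme that consolidates lightly loaded hosts away, and DVFS whose dynamic power grows roughly with the cube of the utilisation-tracked frequency. All constants and the trace are invented:

```python
def energy_wh(util_trace, policy, idle_w=100.0, peak_w=250.0, step_h=1.0, threshold=0.3):
    """Very rough energy estimate (Wh) over an hourly CPU-utilisation trace.

    Policies (illustrative only, not the paper's exact models):
      - 'non_power_aware': host always draws peak power
      - 'static_threshold': hosts below `threshold` are consolidated away (0 W),
        the rest follow a linear idle..peak model
      - 'dvfs': frequency tracks utilisation, so dynamic power scales with u**3
    """
    total = 0.0
    for u in util_trace:
        if policy == "non_power_aware":
            p = peak_w
        elif policy == "static_threshold":
            p = 0.0 if u < threshold else idle_w + (peak_w - idle_w) * u
        elif policy == "dvfs":
            p = idle_w + (peak_w - idle_w) * u ** 3
        else:
            raise ValueError(policy)
        total += p * step_h
    return total

trace = [0.05, 0.10, 0.60, 0.90, 0.40, 0.20]   # synthetic game-server load
for pol in ("non_power_aware", "static_threshold", "dvfs"):
    print(pol, round(energy_wh(trace, pol), 1), "Wh")
```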

    epcAware: a game-based, energy, performance and cost efficient resource management technique for multi-access edge computing

    The Internet of Things (IoT) is producing an extraordinary volume of data daily, and this data may become useless on its way to the cloud for analysis due to long distances and delays. Fog/edge computing is a new model for analysing and acting on time-sensitive data (real-time applications) at the network edge, adjacent to where it is produced. The model sends only selected data to the cloud for analysis and long-term storage. Furthermore, cloud services provided by large companies such as Google can also be localised to minimise response time and increase service agility. This can be accomplished by deploying small-scale datacenters (referred to as cloudlets) where needed, closer to customers (IoT devices), and connecting them to a centralised cloud through networks, which forms a multi-access edge cloud (MEC). The MEC setup involves three different parties, i.e. service providers (IaaS), application providers (SaaS) and network providers (NaaS), which might have different goals, making resource management a difficult job. In the literature, various resource management techniques have been suggested concerning what kinds of services they should host and how the available resources should be allocated to customers' applications, particularly when mobility is involved. However, the existing literature considers the resource management problem with respect to a single party. In this paper, we consider resource management with respect to all three parties, i.e. IaaS, SaaS and NaaS, and suggest a game-theoretic resource management technique that minimises infrastructure energy consumption and costs while ensuring application performance. Our empirical evaluation, using real workload traces from Google's cluster, suggests that our approach could reduce energy consumption by up to 11.95% and user costs by approximately 17.86% with negligible loss in performance. Moreover, the IaaS can reduce its energy bills by up to 20.27% and the NaaS can increase its cost savings by up to 18.52% compared to other methods.
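    epcAware's actual game formulation is not given in the abstract; the toy best-response loop below (the price rule, demand curve and all constants are invented for illustration, and the NaaS player is omitted) only shows the general mechanics of letting two self-interested parties, an IaaS pricing its energy and a SaaS trading cost against performance, settle at an equilibrium:

```python
CAPACITY = 200.0           # cores the hypothetical IaaS operates

def iaas_best_price(demand_cores, energy_cost=0.08):
    """IaaS best response: raise the per-core price as utilisation (and energy) grows."""
    return energy_cost * (1.0 + demand_cores / CAPACITY)

def saas_best_demand(price, base=100.0, sensitivity=300.0, floor=20.0):
    """SaaS best response: demand shrinks with price, but never below a performance floor."""
    return max(floor, base - sensitivity * price)

price, demand = 0.10, 100.0
for _ in range(50):                      # iterate best responses until they settle
    price = iaas_best_price(demand)
    demand = saas_best_demand(price)
print(f"approx. equilibrium: price={price:.3f} $/core-h, demand={demand:.1f} cores")
```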

    Cloud-SEnergy: A bin-packing based multi-cloud service broker for energy efficient composition and execution of data-intensive applications

    © 2018 Elsevier Inc. The over-reliance of today's world on information and communication technologies (ICT) has led to an exponential increase in data production, network traffic, and energy consumption. To mitigate the ecological impact of this increase, a major challenge that this paper tackles is how best to select the most energy-efficient services from cross-continental competing cloud-based datacenters. This selection is addressed by Cloud-SEnergy, a system that uses a bin-packing technique to generate the most efficient service composition plans. Experiments were conducted to compare Cloud-SEnergy's efficiency with five established techniques in multi-cloud environments (All Clouds, Base Cloud, Smart Cloud, COM2, and DC-Cloud). The results demonstrate the superior performance of Cloud-SEnergy, with an average energy consumption reduction ranging from 4.3% compared to the Base Cloud technique to 43.3% compared to the All Clouds technique. Furthermore, the reduction in the number of examined services achieved by Cloud-SEnergy ranged from 50% compared to Smart Cloud to an average of 82.4% compared to Base Cloud. In terms of run-time, Cloud-SEnergy achieved average reductions ranging from 8.5% compared to DC-Cloud to 28.2% compared to All Clouds.
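    Cloud-SEnergy's exact bin-packing formulation is not spelled out in the abstract; the sketch below shows one plausible reading (the classes, figures and the decreasing-first-fit rule are all assumptions): larger service requests are placed first, each into the most energy-efficient datacenter that still has capacity:

```python
from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    capacity: float                      # abstract capacity units available
    energy_per_unit: float               # Wh consumed per capacity unit used
    load: float = 0.0

@dataclass
class ServiceRequest:
    name: str
    demand: float                        # capacity units the service needs

def compose(requests, datacenters):
    """Bin-packing-style composition: biggest requests first, each placed in the
    most energy-efficient datacenter that still has room (decreasing first fit)."""
    plan, energy = {}, 0.0
    dcs = sorted(datacenters, key=lambda d: d.energy_per_unit)
    for req in sorted(requests, key=lambda r: r.demand, reverse=True):
        for dc in dcs:
            if dc.load + req.demand <= dc.capacity:
                dc.load += req.demand
                plan[req.name] = dc.name
                energy += req.demand * dc.energy_per_unit
                break
        else:
            plan[req.name] = None        # no datacenter could host this service
    return plan, energy

plan, wh = compose(
    [ServiceRequest("etl", 40), ServiceRequest("index", 25), ServiceRequest("api", 10)],
    [Datacenter("eu-1", 50, 1.2), Datacenter("us-1", 60, 1.8)],
)
print(plan, round(wh, 1), "Wh")
```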

    Cloud Computing cost and energy optimization through Federated Cloud SoS

    The two most significant differentiators among contemporary cloud computing service providers are increased green energy use and datacenter resource utilization. This work addresses these two issues from a systems-architecture optimization viewpoint. The approach proposed herein allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, the proposed approach creates an alternative paradigm for a Federated Cloud SoS. The paradigm employs a novel control methodology that is tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities to handle sudden variations in service demand and to maximize the use of time-varying green energy supplies. We analyze the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions, and we suggest a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. In our approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means. The work also analyzes optimal computing generation methods and optimal energy utilization for computing, along with a procedure for building optimal datacenters using a unique hardware computing system design based on the openCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation requires to protect its constituents, its constituents' tenants, and itself from security risks.
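    Items (2) and (3) above amount to shifting deferrable compute into hours with renewable supply or cheap grid energy; a toy, two-pass version of that idea (hourly granularity, the 5 kWh/h grid cap and all figures are invented, not taken from the dissertation) could look like this:

```python
def schedule_batch(jobs_kwh, green_kwh_by_hour, grid_price_by_hour, grid_cap_kwh=5.0):
    """Toy scheduler: place deferrable work first in the hours with renewable
    supply, then fill any remainder from the cheapest grid hours."""
    remaining = jobs_kwh
    plan = {h: [0.0, 0.0] for h in range(len(green_kwh_by_hour))}  # hour -> [green, grid]
    # pass 1: consume renewable energy, greenest hours first
    for h in sorted(plan, key=lambda h: -green_kwh_by_hour[h]):
        take = min(remaining, green_kwh_by_hour[h])
        plan[h][0] = take
        remaining -= take
    # pass 2: buy the rest from the grid, cheapest hours first
    for h in sorted(plan, key=lambda h: grid_price_by_hour[h]):
        take = min(remaining, grid_cap_kwh)
        plan[h][1] = take
        remaining -= take
    return {h: v for h, v in plan.items() if sum(v) > 0}

print(schedule_batch(16.0, [0, 3, 6, 4, 0, 0], [0.30, 0.22, 0.18, 0.20, 0.25, 0.28]))
```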

    Energy Efficient Virtual Machine Migration in Cloud Data Centers

    Cloud computing services have been on the rise over the past few decades, leading to an increase in the number of data centers worldwide, which consume ever more energy for their operation and thus cause high carbon dioxide emissions and high operating costs. Cloud computing infrastructures are designed to support the accessibility and deployment of various service-oriented applications by users. Computing resources are the major source of power consumption in data centers, along with air conditioning and cooling equipment. Moreover, energy consumption in the cloud is proportional to resource utilization, and data centers are among the world's largest consumers of electricity. It is therefore necessary to devise efficient consolidation schemes for the cloud model to minimize energy use and increase the Return on Investment (ROI) for users by decreasing operating costs. The consolidation problem is NP-complete in nature, which requires heuristic techniques to obtain a sub-optimal solution, and its complexity grows with the size of the cloud infrastructure. We propose a new consolidation scheme for virtual machines (VMs) that improves the host overload detection phase. The resulting scheme is effective in reducing both energy consumption and the level of Service Level Agreement (SLA) violations to a considerable extent. To test the performance of our implementation, we need a simulation environment that models the system and behaviour of actual cloud computing components and can generate results that support analysis before deployment on actual clouds. CloudSim is one such simulation toolkit that allows us to test and analyse our allocation and selection algorithms. In this thesis we use CloudSim version 3.0.3 to test and analyse our policies and our modifications to the existing policies. The advantage of using CloudSim 3.0.3 is that it takes very little effort and time to implement a cloud-based application, and we can test the performance of application services in heterogeneous cloud environments. The observations are validated by simulating the experiment using the CloudSim framework and the data provided by PlanetLab.
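    The thesis's modified overload detector is not revealed in the abstract; for orientation only, the sketch below pairs a median-absolute-deviation style adaptive threshold (in the spirit of one of CloudSim's stock policies, with an assumed safety parameter of 2.5) with a simple smallest-VM-first selection rule for migration:

```python
import statistics

def mad_overloaded(cpu_history, current_util, safety=2.5):
    """Adaptive overload check: the utilisation threshold shrinks when recent
    utilisation is volatile.  Generic sketch, not the thesis's detector."""
    if len(cpu_history) < 10:            # not enough history: fall back to a static 80 %
        return current_util > 0.8
    med = statistics.median(cpu_history)
    mad = statistics.median(abs(u - med) for u in cpu_history)
    threshold = 1.0 - safety * mad
    return current_util > threshold

def vms_to_migrate(vms, current_util, target_util):
    """Pick the smallest VMs first until the host would drop below the target."""
    chosen, util = [], current_util
    for vm_id, vm_util in sorted(vms.items(), key=lambda kv: kv[1]):
        if util <= target_util:
            break
        chosen.append(vm_id)
        util -= vm_util
    return chosen

history = [0.55, 0.60, 0.58, 0.72, 0.80, 0.85, 0.90, 0.88, 0.92, 0.95]
if mad_overloaded(history, current_util=0.95):
    print("migrate:", vms_to_migrate({"vm1": 0.10, "vm2": 0.30, "vm3": 0.25}, 0.95, 0.7))
```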

    Energy-efficient Virtual Machine Allocation Technique Using Flower Pollination Algorithm in Cloud Datacenter: A Panacea to Green Computing

    Cloud computing has attracted significant interest due to increasing service demands from organizations offloading computationally intensive tasks to datacenters. Meanwhile, datacenter infrastructure comprises hardware resources that consume large amounts of energy and give out carbon emissions at hazardous levels. In a cloud datacenter, Virtual Machines (VMs) need to be allocated to various Physical Machines (PMs) in order to minimize resource wastage and increase energy efficiency. The resource allocation problem is NP-hard; hence, finding an exact solution is complicated, especially for large-scale datacenters. In this context, this paper proposes an Energy-oriented Flower Pollination Algorithm (E-FPA) for VM allocation in cloud datacenter environments. A system framework for the scheme was developed to enable energy-oriented allocation of VMs on PMs using a strategy called Dynamic Switching Probability (DSP). The framework finds a near-optimal solution quickly and balances exploration of the global search with exploitation of the local search, while considering the processor, storage, and memory constraints of each PM and prioritizing energy-oriented allocation for a set of VMs. Simulations performed on MultiRecCloudSim utilizing the PlanetLab workload show that E-FPA outperforms the Genetic Algorithm for Power-Aware (GAPA) by 21.8%, the Order of Exchange Migration (OEM) ant colony system by 21.5%, and First Fit Decreasing (FFD) by 24.9%. Therefore, E-FPA significantly improves datacenter performance and thus enhances environmental sustainability.
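    The paper's full E-FPA is not reproduced in the abstract; the simplified sketch below (population size, the decaying switching probability, the linear power model and the penalty constant are all assumptions) only illustrates the overall search loop: candidate VM-to-PM assignments are improved by either following the best assignment ("global pollination") or borrowing from a random neighbour ("local pollination"), with the switching probability changing over the run:

```python
import random

def efpa(vm_load, pm_idle, pm_peak, pm_cap, pop=20, iters=300, p0=0.8, seed=1):
    """Simplified, energy-oriented flower-pollination search over VM->PM assignments.
    The switching probability decays over the run as a stand-in for the paper's
    Dynamic Switching Probability (DSP); the real E-FPA is more elaborate."""
    rng = random.Random(seed)
    n_vm, n_pm = len(vm_load), len(pm_cap)

    def fitness(assign):
        # Linear power model per active PM, with a penalty for capacity violations.
        load = [0.0] * n_pm
        for vm, pm in enumerate(assign):
            load[pm] += vm_load[vm]
        e = 0.0
        for pm, l in enumerate(load):
            if l > 0:
                e += pm_idle[pm] + (pm_peak[pm] - pm_idle[pm]) * min(l / pm_cap[pm], 1.0)
            if l > pm_cap[pm]:
                e += 1000.0 * (l - pm_cap[pm])
        return e

    flowers = [[rng.randrange(n_pm) for _ in range(n_vm)] for _ in range(pop)]
    best = min(flowers, key=fitness)
    for t in range(iters):
        p = p0 * (1.0 - t / iters)                  # dynamic switching probability
        for i, f in enumerate(flowers):
            new, vm = f[:], rng.randrange(n_vm)
            if rng.random() < p:
                new[vm] = best[vm]                  # "global pollination": follow the best flower
            else:
                new[vm] = rng.choice(flowers)[vm]   # "local pollination": borrow from a neighbour
            if fitness(new) < fitness(f):
                flowers[i] = new
        best = min(flowers + [best], key=fitness)
    return best, fitness(best)

assignment, watts = efpa(vm_load=[0.3, 0.5, 0.2, 0.7],
                         pm_idle=[100, 90], pm_peak=[250, 210], pm_cap=[1.0, 1.0])
print(assignment, round(watts, 1), "W")
```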