2,852 research outputs found

    A Study of Migration Models for Reducing Power Consumption in Server Clusters (サーバクラスタでの低消費電力化のための移行モデルの研究)

    Doctor of Engineering, Hosei University (法政大学)

    Green Cloud - Load Balancing, Load Consolidation using VM Migration

    Cloud computing has recently emerged as a major trend in computer technology, with massive demand from clients. To meet this demand, a large number of cloud data centers have been built since 2008, when Amazon launched its cloud service. Although cloud computing has improved in both performance and energy efficiency, the rapid growth of data centers still leads to the consumption of a tremendous amount of energy. To increase their annual income, cloud providers have started to consider green cloud concepts, which aim to optimize CPU usage while guaranteeing quality of service. Many cloud providers are paying more attention to load balancing and load consolidation, two significant components of a cloud data center. Load balancing is a vital part of managing incoming demand and improving the cloud system's performance. Live virtual machine migration is a technique for performing dynamic load balancing. To optimize the cloud data center, three issues are considered. First, how should the cloud cluster distribute virtual machine (VM) requests from clients across physical machines (PMs) when each machine has a different capacity? Second, how can the CPU usage of all PMs be kept nearly equal? Third, how should two extreme scenarios be handled: a PM whose CPU usage rises rapidly under a sudden massive workload and requires immediate VM migration, and resource expansion in response to a surge of VM requests across the cloud cluster? In this chapter, we provide an approach to these issues, together with its implementation and results. The results indicate that the performance of the cloud cluster is improved significantly. Load consolidation is the reverse of load balancing: it aims to provide just enough cloud servers to handle client requests. Building on live VM migration, a cloud data center can consolidate itself without interrupting the cloud service, and superfluous PMs are switched to a power-saving mode to reduce energy consumption. This chapter also provides a load consolidation solution, including the implementation and simulation of the cloud servers.
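
    As a rough illustration of the threshold-triggered balancing decision described in this abstract, the sketch below moves one VM off any overloaded physical machine onto the least-loaded machine that can host it. It is not the chapter's implementation; the field names (`cpu_used`, `cpu_capacity`, `cpu_demand`, `vms`, `id`) and the 85% threshold are assumptions.

```python
# Illustrative threshold-based balancer: when a PM's CPU usage rises above
# HIGH_THRESHOLD, pick one of its VMs and migrate it to the least-loaded PM.
# All field names and the threshold value are assumptions for this sketch.

HIGH_THRESHOLD = 0.85  # fraction of CPU capacity that triggers migration

def pick_target_pm(pms, vm):
    """Return the PM that stays least loaded after hosting vm, or None."""
    candidates = [
        pm for pm in pms
        if pm["cpu_used"] + vm["cpu_demand"] <= pm["cpu_capacity"]
    ]
    if not candidates:
        return None  # no capacity left: cluster expansion would be needed
    return min(candidates,
               key=lambda pm: (pm["cpu_used"] + vm["cpu_demand"]) / pm["cpu_capacity"])

def rebalance(pms):
    """One balancing pass: move one VM off every overloaded PM."""
    migrations = []
    for pm in pms:
        if pm["cpu_used"] / pm["cpu_capacity"] > HIGH_THRESHOLD and pm["vms"]:
            vm = max(pm["vms"], key=lambda v: v["cpu_demand"])  # heaviest VM
            target = pick_target_pm([p for p in pms if p is not pm], vm)
            if target is not None:
                pm["vms"].remove(vm)
                pm["cpu_used"] -= vm["cpu_demand"]
                target["vms"].append(vm)
                target["cpu_used"] += vm["cpu_demand"]
                migrations.append((vm["id"], pm["id"], target["id"]))
    return migrations
```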

    On the feasibility of collaborative green data center ecosystems

    The increasing awareness of the impact of the IT sector on the environment, together with economic factors, has fueled many research efforts to reduce the energy expenditure of data centers. Recent work proposes to achieve additional energy savings by exploiting service workloads in concert with customers, and to reduce data centers' carbon footprints by adopting demand-response mechanisms between data centers and their energy providers. In this paper, we discuss the incentives that customers and data centers have to adopt such measures and propose a new service type and pricing scheme that is economically attractive and technically realizable. Simulation results based on real measurements confirm that our scheme can achieve additional energy savings while preserving service performance and the interests of data centers and customers.

    Single system image: A survey

    Single system image (SSI) is a computing paradigm in which a number of distributed computing resources are aggregated and presented via an interface that maintains the illusion of interaction with a single system. This approach encompasses decades of research using a broad variety of techniques at varying levels of abstraction, from custom hardware and distributed hypervisors to specialized operating system kernels and user-level tools. Existing classification schemes for SSI technologies are reviewed, and an updated classification scheme is proposed. A survey of implementation techniques is provided along with relevant examples. Notable deployments are examined, and insights gained from hands-on experience are summarized. Issues affecting the adoption of kernel-level SSI are identified and discussed in the context of the technology adoption literature.

    Dynamic Load Balancing Based on Live Virtual Machine Migration

    Cloud computing has recently emerged as a major trend in computer technology, with huge demand from clients, which leads to the consumption of a tremendous amount of energy. Load balancing is a vital part of managing incoming demand, improving the cloud system's performance, and reducing the energy cost. Live virtual machine migration is a technique for performing dynamic load balancing. To optimize the cloud cluster, there are three issues to consider. First, how should the cloud cluster distribute virtual machine (VM) requests from clients across physical machines (PMs) when each machine has a different capacity? Second, how can the CPU usage of all PMs be kept nearly equal? Third, how should two extreme scenarios be handled: a PM whose CPU usage rises rapidly under a sudden heavy workload and requires immediate VM migration, and resource expansion in response to heavy demand on the cloud cluster through VM requests? We also provide the implementation and results of this approach, which significantly improves the performance of the cloud cluster.
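
    The second question above, keeping the CPU usage of all PMs nearly equal, can be pictured as a simple equalisation pass that repeatedly shifts a small VM from the most-loaded to the least-loaded PM until the spread falls below a tolerance. This is only a sketch under assumed field names and an assumed tolerance value, not the paper's algorithm.

```python
# Illustrative pass that narrows the spread of CPU utilization across PMs by
# repeatedly moving a small VM from the most-loaded to the least-loaded PM.
# Field names and the tolerance value are assumptions for this sketch.

TOLERANCE = 0.10  # stop once max and min utilization differ by less than 10%

def utilization(pm):
    return pm["cpu_used"] / pm["cpu_capacity"]

def equalize(pms, max_moves=50):
    moves = []
    for _ in range(max_moves):
        busiest = max(pms, key=utilization)
        idlest = min(pms, key=utilization)
        if utilization(busiest) - utilization(idlest) < TOLERANCE or not busiest["vms"]:
            break
        vm = min(busiest["vms"], key=lambda v: v["cpu_demand"])  # cheapest to move
        if idlest["cpu_used"] + vm["cpu_demand"] > idlest["cpu_capacity"]:
            break  # no headroom left on the target
        busiest["vms"].remove(vm)
        busiest["cpu_used"] -= vm["cpu_demand"]
        idlest["vms"].append(vm)
        idlest["cpu_used"] += vm["cpu_demand"]
        moves.append((vm["id"], busiest["id"], idlest["id"]))
    return moves
```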

    Simple Estimation Model and Energy-efficient Virtual Machine Migration Algorithm in a Server Cluster

    In this thesis, we propose a virtual machine migration approach to reducing the electric energy consumption of servers. In our previous algorithms, one virtual machine migrates from a host server to a guest server. While the electric energy consumption of servers can be reduced by migrating some number b of processes, there might not be a virtual machine with exactly b processes on a host server. In this thesis, we newly propose the ISEAM2T algorithm, in which multiple virtual machines can migrate from a host server to a guest server. Here, multiple virtual machines on a host server are selected so that the total number of processes on those virtual machines can be more easily adjusted to the optimal number b of processes. In the evaluation, we show that the proposed algorithm reduces the total electric energy consumption and the active time of the servers.
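
    The core selection step, picking several virtual machines whose combined process count comes closest to the optimal number b, can be sketched as a small subset search. The brute-force version below is only illustrative (it assumes a host carries few VMs) and is not the ISEAM2T algorithm itself.

```python
# Illustrative selection of the virtual machines on a host server whose combined
# number of processes comes closest to a target b, in the spirit of migrating
# several VMs together. Exhaustive search is used only because the per-host VM
# count is assumed to be small.
from itertools import combinations

def closest_vm_set(vm_process_counts, b):
    """Return the subset of VM process counts whose sum is nearest to b."""
    best = ()
    best_gap = abs(b)  # gap of the empty selection (sum = 0)
    for r in range(1, len(vm_process_counts) + 1):
        for subset in combinations(vm_process_counts, r):
            gap = abs(b - sum(subset))
            if gap < best_gap:
                best, best_gap = subset, gap
    return best

# Example: VMs hosting 3, 5, 2, and 7 processes; optimal migration size b = 9.
print(closest_vm_set([3, 5, 2, 7], 9))  # -> (2, 7)
```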

    Energy-Efficient Virtual Machine Placement using Enhanced Firefly Algorithm

    The consolidation of virtual machines (VMs) helps to optimise resource usage and hence reduces the energy consumption of a cloud data centre. VM placement plays an important part in the consolidation of VMs. Researchers have developed various VM placement algorithms that aim to optimise energy consumption; however, these algorithms do not use the exploitation mechanism efficiently. This paper addresses VM placement by proposing two meta-heuristic algorithms, namely the enhanced modified firefly algorithm (MFF) and the hierarchical cluster based modified firefly algorithm (HCMFF), and presents a comparative analysis of their energy optimisation. Comparisons are made against the existing honeybee (HB) algorithm and the honeybee cluster based technique (HCT), and the energy consumption results of all participating algorithms confirm that the proposed HCMFF is more efficient than the others. The simulation study shows that HCMFF consumes 12% less energy than the honeybee algorithm, 6% less than HCT, and 2% less than the original firefly algorithm. Using the appropriate algorithm can thus help achieve efficient energy usage in cloud computing.
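
    For readers unfamiliar with the underlying metaheuristic, the sketch below shows the standard firefly movement step that MFF/HCMFF-style placement methods build on: each firefly encodes a candidate VM-to-host assignment as a real vector and moves toward brighter (lower-energy) fireflies. The constants, encoding, and decoding scheme here are assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of the standard firefly movement rule used as the basis of
# firefly-based VM placement. Brightness would be the (negative) estimated
# energy of the decoded assignment; that energy model is not shown here.
import math
import random

ALPHA, BETA0, GAMMA = 0.2, 1.0, 1.0  # randomness, base attractiveness, absorption

def move_towards(xi, xj):
    """Move firefly xi toward a brighter firefly xj (standard update rule)."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared distance
    beta = BETA0 * math.exp(-GAMMA * r2)             # attractiveness decays with distance
    return [a + beta * (b - a) + ALPHA * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

def decode(x, num_hosts):
    """Map a continuous position to a host index per VM (assumed encoding)."""
    return [int(abs(v)) % num_hosts for v in x]

# Example: one candidate assignment of 4 VMs moves toward a better one.
better, worse = [0.2, 1.7, 2.3, 0.9], [1.1, 0.4, 2.9, 1.8]
print(decode(move_towards(worse, better), num_hosts=3))
```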

    Hybrid heuristic algorithm for better energy optimization and resource utilization in cloud computing

    Energy-efficient execution of scientific workflows is a challenging task in cloud computing, demanding high-performance computing to process growing datasets. Due to the interdependency of tasks in scientific workflow applications, energy-efficient resource allocation is vital for large-scale applications running on heterogeneous physical machines. This paper therefore proposes a Hybrid Heuristic algorithm based Energy-efficient cloud Computing service (HH-ECO) that offers a significant solution for resource allocation, task scheduling, and optimization of scientific workflows. To ensure energy-efficient execution, HH-ECO focuses on executing non-dominant workflow tasks through adaptive mutation and an energy-aware migration strategy. HH-ECO adopts the Chaotic based Particle Swarm Optimization (C-PSO) principle to optimize resource allocation, task scheduling, and resource migration by generating global best plans without premature local convergence. C-PSO with adaptive mutation avoids deterioration of the global optimum while finding the best host on which to place a virtual machine, and ensures an appropriate resource allocation plan. By considering workflow task precedence relationships during C-PSO based task scheduling, the hybrid heuristic method efficiently solves the multi-objective combinatorial optimization problem without dominance among the workflow tasks. A CloudSim based simulation study delivers superior results compared to existing methods such as the Hybrid Heuristic Workflow Scheduling algorithm (HHWS) and Distributed Dynamic VM Management (DDVM). The proposed approach improves the optimal makespan by 38.27% and energy conservation by 38.06% compared to the existing methods.
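
    The chaotic PSO idea referenced above can be illustrated with a single particle update in which a logistic map perturbs the inertia weight to discourage premature convergence. The constants, the logistic-map choice, and the function names are assumptions for this sketch, not HH-ECO's actual C-PSO.

```python
# Sketch of a chaotic particle swarm step of the kind C-PSO variants build on:
# the inertia weight is modulated by a logistic chaotic map so the swarm keeps
# exploring instead of converging prematurely. All constants are placeholders.
import random

C1, C2 = 2.0, 2.0          # cognitive and social coefficients
W_MIN, W_MAX = 0.4, 0.9    # bounds for the chaotic inertia weight

def logistic(z, mu=4.0):
    """One iteration of the logistic map, a common chaos source in C-PSO."""
    return mu * z * (1.0 - z)

def step(position, velocity, personal_best, global_best, z):
    """Update one particle; z carries the chaotic state between calls."""
    z = logistic(z)                      # new chaotic value in (0, 1)
    w = W_MIN + (W_MAX - W_MIN) * z      # chaotic inertia weight
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
        nv = (w * v
              + C1 * random.random() * (pb - x)
              + C2 * random.random() * (gb - x))
        new_vel.append(nv)
        new_pos.append(x + nv)
    return new_pos, new_vel, z
```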