11 research outputs found

    Minimizing Energy Consumption by Task Consolidation in Cloud Centers with Optimized Resource Utilization

    Cloud computing is an emerging field of computation. Because data centers consume large amounts of power, they increase system overheads and drive a drastic rise in carbon dioxide emissions. The main aim is to maximize resource utilization while minimizing power consumption. However, high resource usage does not necessarily mean that energy is being used well: idle resources still consume a significant amount of energy, so the number of idle resources should be kept to a minimum. Current studies have shown that power consumption due to unused computing resources is roughly 1 to 20%, so tasks are assigned to otherwise unused resources to exploit their idle periods. This paper proposes an energy-saving task consolidation scheme that reduces energy consumption by minimizing the number of idle resources in a cloud computing environment. Extensive experiments were carried out to quantify the performance of the proposed algorithm, which was also compared with the FCFSMaxUtil and Energy aware Task Consolidation (ETC) algorithms. The results show that the proposed algorithm surpasses FCFSMaxUtil and ETC in terms of CPU utilization and energy consumption.
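    A minimal sketch of the general idea described above, assuming a linear idle/peak power model and a greedy best-fit consolidation heuristic; the function names, power figures and capacity parameter are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative greedy task consolidation: place each task on the active
# resource with the highest load after placement, activating a new resource
# only when no active one can host the task.

def power(utilization, idle_power=70.0, peak_power=250.0):
    """Assumed linear power model: idle cost plus a load-proportional share (watts)."""
    return idle_power + (peak_power - idle_power) * utilization

def consolidate(tasks, capacity=1.0):
    """tasks: list of CPU demands in [0, 1]; returns the load of each active resource."""
    resources = []  # current load of each active resource
    for demand in sorted(tasks, reverse=True):
        # candidates are active resources that can still fit this task
        candidates = [i for i, load in enumerate(resources)
                      if load + demand <= capacity]
        if candidates:
            # best-fit: pick the resource that ends up most utilized
            best = max(candidates, key=lambda i: resources[i] + demand)
            resources[best] += demand
        else:
            resources.append(demand)  # activate a new resource
    return resources

if __name__ == "__main__":
    loads = consolidate([0.3, 0.5, 0.2, 0.7, 0.1])
    print(loads, sum(power(u) for u in loads))
```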

    Load Balancing Algorithms in Cloud Computing Analysis and Performance Evaluation

    Distributing the system workload and balancing all incoming requests among all processing nodes is one of the important challenges in today's cloud computing world. Many load balancing algorithms and approaches have been proposed for distributed and cloud computing systems. In addition, the broker policy used to distribute the workload among different datacenters is an important factor in improving system performance. In this paper we present an analytical comparison of combinations of VM load balancing algorithms and different broker policies. We evaluate these approaches through simulation on the CloudAnalyst simulator and present the final results for different parameters. The results of this research identify the best possible combinations.
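    As a concrete illustration of the kind of combinations being compared, the sketch below pairs a round-robin VM load balancer with a lowest-latency ("closest datacenter") broker policy; the class and function names are hypothetical and do not correspond to CloudAnalyst's API.

```python
# Illustrative VM load balancer plus broker policy.
from itertools import cycle

class RoundRobinBalancer:
    """Dispatches each incoming request to the next VM in a fixed rotation."""
    def __init__(self, vm_ids):
        self._next = cycle(vm_ids)
    def assign(self, request):
        return next(self._next)

def closest_datacenter(request_region, datacenters, latency):
    """Broker policy: route to the datacenter with the lowest latency from the region."""
    return min(datacenters, key=lambda dc: latency[(request_region, dc)])

if __name__ == "__main__":
    balancer = RoundRobinBalancer(["vm-0", "vm-1", "vm-2"])
    print([balancer.assign(req) for req in range(5)])   # vm-0, vm-1, vm-2, vm-0, vm-1
    latency = {("EU", "dc-ireland"): 20, ("EU", "dc-virginia"): 95}
    print(closest_datacenter("EU", ["dc-ireland", "dc-virginia"], latency))
```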

    Energy Aware Genetic Algorithm for Independent Task Scheduling in Heterogeneous Multi-Cloud Environment

    Cloud datacentres contain a vast number of processors. The rapid expansion of cloud computing is resulting in massive energy usage and carbon emissions, which are reported to increase substantially day by day. Consequently, cloud service providers are looking for eco-friendly solutions. Energy consumption can be evaluated with an energy model which assumes that server energy consumption scales linearly with resource (cloud) utilization. This research provides an alternative solution to the task scheduling problem, designing an optimized task schedule that minimizes makespan and energy consumption in cloud datacenters. The proposed method is based on the principle of the Genetic Algorithm (GA). In this GA-based task scheduling, a chromosome represents a schedule of a set of independent tasks mapped to the available clouds or machines. A fitness function is used to optimize the overall execution time (makespan), and energy consumption is evaluated based on the minimum makespan value. The proposed technique was also tested on synthesized and benchmark datasets and outperforms conventional cloud task scheduling heuristics such as Min-Min, Max-Min, and Sufferage in heterogeneous multi-cloud systems.
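    A minimal sketch of the GA encoding described above, assuming an ETC (expected time to compute) matrix as input: a chromosome maps each task to a machine, fitness is the makespan, and energy is then evaluated with a simple busy/idle power model. The operators, constants and parameter values are assumptions rather than the paper's exact configuration.

```python
# Illustrative GA for independent task scheduling on heterogeneous machines.
import random

def makespan(chromosome, etc):
    """etc[t][m]: execution time of task t on machine m."""
    finish = [0.0] * len(etc[0])
    for task, machine in enumerate(chromosome):
        finish[machine] += etc[task][machine]
    return max(finish)

def energy(chromosome, etc, p_busy=200.0, p_idle=70.0):
    """Assumed model: busy power while executing, idle power until the makespan."""
    span = makespan(chromosome, etc)
    busy = [0.0] * len(etc[0])
    for task, machine in enumerate(chromosome):
        busy[machine] += etc[task][machine]
    return sum(p_busy * b + p_idle * (span - b) for b in busy)

def evolve(etc, pop_size=30, generations=200, mutation=0.1):
    tasks, machines = len(etc), len(etc[0])
    pop = [[random.randrange(machines) for _ in range(tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: makespan(c, etc))          # fitness = makespan
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, tasks)
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < mutation:
                child[random.randrange(tasks)] = random.randrange(machines)
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda c: makespan(c, etc))
    return best, makespan(best, etc), energy(best, etc)

# Example: best, span, e = evolve([[3, 5], [2, 4], [6, 1]])  # 3 tasks, 2 machines
```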

    Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    To meet the demands of monitoring large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analysing the detection process, the CNN parameter relationships are mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces processing time, and the evidence indicates that it is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed more effectively.
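    For orientation, the sketch below shows a standard particle swarm optimization loop that could be used to search a CNN parameter (template) vector against a user-supplied detection objective; the paper's bubble-sort-based PSO variant and its specific objective are not reproduced here.

```python
# Illustrative standard PSO minimising a generic objective over a parameter vector.
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4, bound=2.0):
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Example with a toy objective and a 3x3 template (9 parameters):
# best, val = pso(lambda x: sum(v * v for v in x), dim=9)
```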

    Modelling energy consumption of network transfers and virtual machine migration

    Reducing energy consumption has become a key issue for data centres, not only because of economic benefits but also for environmental and marketing reasons. Assessing their energy consumption therefore requires precise models. In recent years, many models targeting different hardware components, such as CPUs, storage and network interface cards (NICs), have been proposed. However, most of them neglect the energy consumption related to VM migration. Since VM migration is a network-intensive process, accurately modelling its energy consumption also requires energy models for network transfers, comprising their complete software stacks with different energy characteristics. In this work, we present a comparative analysis of the energy consumption of the software stacks of two of today's most used NICs in data centres, Ethernet and Infiniband. For this purpose we carefully design a set of benchmark experiments to assess the impact of different traffic patterns and interface settings on energy consumption. Using our benchmark results, we derive an energy consumption model for network transfers. Based on this model, we propose an energy consumption model for VM migration that provides accurate predictions for paravirtualised VMs running on homogeneous hosts. We present a comprehensive analysis of our model on different machine sets and compare it with other models for the energy consumption of VM migration, showing an improvement of up to 24% in accuracy according to the NRMSE error metric. © 2015 Elsevier B.V.
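    The sketch below illustrates the general shape of such models, assuming a linear per-byte transfer energy model and an iterative pre-copy estimate of the data moved during migration; the coefficients and the pre-copy formulation are assumptions, not the parameters fitted in the paper.

```python
# Illustrative energy model for a network transfer and a pre-copy VM migration.

def transfer_energy(bytes_sent, joules_per_byte=4e-8, static_joules=0.5):
    """Assumed linear model: fixed per-transfer cost plus a per-byte cost."""
    return static_joules + joules_per_byte * bytes_sent

def migration_energy(vm_memory_bytes, dirty_bytes_per_s, bandwidth_bytes_per_s, rounds=3):
    """Estimate the data moved by iterative pre-copy, then apply the transfer model."""
    total, to_send = 0.0, float(vm_memory_bytes)
    for _ in range(rounds):
        total += to_send
        round_time = to_send / bandwidth_bytes_per_s
        to_send = dirty_bytes_per_s * round_time   # pages dirtied during the round
    total += to_send                               # final stop-and-copy round
    return transfer_energy(total)

if __name__ == "__main__":
    print(migration_energy(4 * 2**30, dirty_bytes_per_s=50e6, bandwidth_bytes_per_s=1.25e9))
```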

    Evaluating Energy Efficiency of Gigabit Ethernet and Infiniband Software Stacks in Data Centres

    Reducing energy consumption has become a key issue for data centres, not only because of economic benefits but also for environmental and marketing reasons. Many approaches tackle this problem from the point of view of different hardware components, such as CPUs, storage and network interface cards (NICs). To date, few works have focused on the energy consumption of network transfers at the software level, comprising their complete stacks with different energy characteristics, or on the way NIC selection impacts the energy consumption of applications. Since data centres often install multiple NICs on each node, investigating and comparing them at the software level has high potential to enhance the energy efficiency of applications on Cloud infrastructures. We present a comparative analysis of the energy consumption of the software stacks of two of today's most used NICs in data centres, Ethernet and Infiniband. For this purpose we carefully design a set of benchmark experiments to assess the impact of different traffic patterns and interface settings on energy consumption. Using our benchmark results, we derive an energy consumption model for network transfers and evaluate its accuracy for a virtual machine migration scenario. Finally, we propose guidelines for NIC selection from an energy efficiency perspective for different application classes.
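    As an example of how such benchmark measurements are typically post-processed, the sketch below integrates sampled power over a transfer and derives an energy-per-byte figure for comparing NIC software stacks; the sample values are placeholders, not measurements from the paper.

```python
# Illustrative post-processing of power samples taken during a benchmark transfer.

def energy_per_byte(power_samples_w, sample_period_s, bytes_transferred):
    """Rectangle-rule integration of power over time, normalised per byte."""
    energy_j = sum(power_samples_w) * sample_period_s
    return energy_j / bytes_transferred

if __name__ == "__main__":
    # Placeholder traces and transfer sizes, purely for illustration.
    ethernet = energy_per_byte([92.1, 93.4, 92.8], 1.0, 3.2e9)
    infiniband = energy_per_byte([101.5, 102.0, 101.2], 1.0, 9.6e9)
    print(f"Ethernet:   {ethernet:.2e} J/B")
    print(f"InfiniBand: {infiniband:.2e} J/B")
```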

    Structural issues and energy efficiency in data centers

    With the rise of cloud computing, data centers have been called to play a main role in today's Internet scenario. Despite this relevance, they are probably still far from their zenith, due to the ever-increasing demand for content to be stored in and distributed by the cloud, the need for computing power, and the larger and larger amounts of data being analyzed by top companies such as Google, Microsoft or Amazon. However, everything is not always a bed of roses. Having a data center entails two major issues: data centers are terribly expensive to build, and they consume huge amounts of power and are, therefore, terribly expensive to maintain. For this reason, cutting down the cost of building data centers and increasing their energy efficiency (and hence reducing their carbon footprint) has been one of the hottest research topics in recent years. In this thesis we propose different techniques that can have an impact on both the building and the maintenance costs of data centers of any size, from small-scale to large flagship data centers.

    The first part of the thesis is devoted to structural issues. We start by analyzing the bisection (band)width of a topology, of product graphs in particular, a useful parameter for comparing and choosing among different data center topologies. In the same part we describe the problem of deploying the servers in a data center as a Multidimensional Arrangement Problem (MAP) and propose a heuristic to reduce the deployment and wiring costs.

    We target energy efficiency in data centers in the second part of the thesis. We first propose a method to reduce the energy consumption of the data center network: rate adaptation. Rate adaptation is based on the idea of energy proportionality and aims to make network devices consume power proportionally to the load on their links. Our analysis proves that just using rate adaptation we may achieve average energy savings in the order of 30-40%, and up to 60% depending on the network topology. We continue by characterizing the power requirements of a data center server given that, in order to properly increase the energy efficiency of a data center, we first need to understand how energy is being consumed. We present an exhaustive empirical characterization of the power requirements of multiple components of data center servers, namely the CPU, the disks, and the network card. To do so, we devise different experiments to stress these components, taking into account the multiple available frequencies as well as the fact that we are working with multicore servers. In these experiments, we measure their energy consumption and identify their optimal operational points. Our study proves that the curve defining the minimal power consumption of the CPU, as a function of the load in Active Cycles Per Second (ACPS), is neither concave nor purely convex. Moreover, it definitively has a superlinear dependence on the load. We also validate the accuracy of the model derived from our characterization by running different Hadoop applications in diverse scenarios, obtaining an error below 4.1% on average.

    The last topic we study is the Virtual Machine Assignment problem (VMA), i.e., optimizing how virtual machines (VMs) are assigned to physical machines (PMs) in data centers. Our optimization target is to minimize the power consumed by all the PMs, considering that power consumption depends superlinearly on the load. We study four variants of the VMA problem, depending on whether the number of PMs and their capacity are bounded or not. We study their complexity and perform an offline and online analysis of these problems. The online analysis is complemented with simulations showing that the online algorithms we propose consume substantially less power than other state-of-the-art assignment algorithms.

    Official Doctoral Programme in Telematics Engineering, with International Mention in the doctoral degree. Committee: President: Joerg Widmer; Secretary: José Manuel Moya Fernández; Member: Shmuel Zak.
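    A minimal sketch of a greedy heuristic for the VMA setting described above, assuming a superlinear power curve P(load) = P_idle + a * load^alpha; the curve parameters and the heuristic itself are illustrative and do not correspond to the thesis's offline or online algorithms.

```python
# Illustrative greedy VM-to-PM assignment under a superlinear power model.

P_IDLE, A, ALPHA = 70.0, 180.0, 1.5   # assumed power-curve parameters

def pm_power(load):
    """Power drawn by one PM at the given load (0 when switched off)."""
    return P_IDLE + A * load ** ALPHA if load > 0 else 0.0

def greedy_vma(vm_loads, pm_capacity=1.0):
    """Assign each VM to the PM whose total power increases the least."""
    pms = []                                   # current load per active PM
    for load in sorted(vm_loads, reverse=True):
        best, best_delta = None, None
        for i, used in enumerate(pms):
            if used + load <= pm_capacity:
                delta = pm_power(used + load) - pm_power(used)
                if best_delta is None or delta < best_delta:
                    best, best_delta = i, delta
        new_delta = pm_power(load)             # cost of switching on a new PM
        if best is None or new_delta < best_delta:
            pms.append(load)
        else:
            pms[best] += load
    return pms, sum(pm_power(u) for u in pms)

if __name__ == "__main__":
    print(greedy_vma([0.4, 0.3, 0.6, 0.2, 0.5]))
```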

    FSCL: Homogeneous programming, scheduling and execution on heterogeneous platforms

    The last few years have seen activity on programming models, languages and frameworks that address the increasingly wide range and broad availability of heterogeneous computing resources through raised programming abstraction and portability across different platforms. The effort spent in simplifying parallel programming across heterogeneous platforms is often outweighed by the need for low-level control over computation setup and execution, and by performance opportunities that are missed due to the overhead introduced by the additional abstraction. Moreover, despite the ability to port parallel code across devices, each device is generally characterised by a restricted set of computations on which it outperforms the other devices in the system. The problem is therefore to schedule computations on increasingly popular multi-device heterogeneous platforms, choosing the best device among those available each time a computation has to execute. Our Ph.D. research investigates ways to raise the programming and execution abstraction on heterogeneous platforms while dynamically and transparently exploiting the computing power of such platforms in a device-aware fashion.
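    As an illustration of device-aware scheduling, the sketch below picks the device with the lowest estimated completion time from a simple per-device throughput and transfer-cost profile; the profile format and numbers are assumptions and are far simpler than FSCL's actual scheduling model.

```python
# Illustrative device-aware scheduling policy based on estimated completion time.

def pick_device(work_items, input_bytes, devices):
    """devices: {name: (items_per_s, transfer_bytes_per_s)}; returns the fastest device."""
    def completion_time(profile):
        items_per_s, transfer_bps = profile
        return input_bytes / transfer_bps + work_items / items_per_s
    return min(devices, key=lambda name: completion_time(devices[name]))

if __name__ == "__main__":
    devices = {
        "cpu": (2.0e8, float("inf")),      # no transfer needed for host memory
        "gpu": (4.5e9, 1.2e10),            # fast compute, PCIe transfer cost
    }
    print(pick_device(work_items=5_000_000, input_bytes=64_000_000, devices=devices))
```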

    Mango: A model-driven approach to engineering green Mobile Cloud Applications

    With the resource-constrained nature of mobile devices and the resource-abundant offerings of the cloud, several promising optimisation techniques have been proposed by the green computing research community. Prominent techniques and unique methods have been developed to offload resource/computation-intensive tasks from mobile devices to the cloud. Most existing offloading techniques can only be applied to legacy mobile applications, as they are motivated by existing systems; consequently, they are realised with custom runtimes which incur overhead on the application. Moreover, existing approaches which can be applied at the software development phase are difficult to implement (being based on manual processes) and also fall short of overall (mobile-to-cloud) efficiency in software quality attributes or awareness of full-tier (mobile-to-cloud) implications. To address these issues, the thesis proposes a model-driven architecture for integrating software quality with green optimisation in Mobile Cloud Applications (MCAs), abbreviated as the Mango architecture. The core aim of the architecture is to present an approach which easily integrates software quality attributes (SQAs) with the green optimisation objective of Mobile Cloud Computing (MCC). Also, as an MCA is an application domain which spans the mobile and cloud tiers, the Mango architecture takes into account the specification of SQAs across both tiers for overall efficiency. Furthermore, as a model-driven architecture, models can be built for computation-intensive tasks and their SQAs, which in turn drive development, for development efficiency. Thus, a modelling framework (called Mosaic) and a full-tier test framework (called Beftigre) were proposed to automate the architecture derivation and demonstrate the efficiency of the Mango approach. Using real-world scenarios/applications, Mango has been demonstrated to enhance the MCA development process while achieving overall efficiency in terms of SQAs (including mobile performance and energy usage) compared to existing counterparts.
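    A minimal sketch of the kind of offloading decision such frameworks automate, comparing estimated local and remote execution time and device energy; the parameters and thresholds are assumptions and do not represent Mango's actual decision model.

```python
# Illustrative mobile-to-cloud offloading decision based on estimated time and energy.

def should_offload(local_time_s, local_power_w,
                   cloud_time_s, data_bytes, net_bytes_per_s, net_power_w,
                   idle_power_w=0.3):
    """Offload when the remote path saves device energy without extending the runtime."""
    transfer_time = data_bytes / net_bytes_per_s
    local_energy = local_time_s * local_power_w
    # The device spends energy transmitting, then mostly idles while the cloud works.
    remote_energy = transfer_time * net_power_w + cloud_time_s * idle_power_w
    remote_time = transfer_time + cloud_time_s
    return remote_energy < local_energy and remote_time <= local_time_s

if __name__ == "__main__":
    print(should_offload(local_time_s=8.0, local_power_w=2.5,
                         cloud_time_s=1.0, data_bytes=2_000_000,
                         net_bytes_per_s=1_000_000, net_power_w=1.8))
```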