4 research outputs found

    Energy-efficient virtual machine live migration in cloud data centers

    Abstract: Cloud computing services play an important role in meeting clients' varied everyday requirements. In cloud computing, virtualization is a key technique for minimizing the cost of managing data centers across the world, and energy consumption has become a major contributor to the cost of operating them. Savings can be achieved by continuously consolidating virtual machines through live migration, guided by resource utilization, virtual network topologies, and the thermal state of the computing nodes. This paper presents a review of research on energy-aware virtual machine live migration between hosts in cloud data centers, highlighting its key concepts and research challenges. Keywords: Virtual Machines (VMs), Live Migration, Energy Overhead, Data Center. I. Introduction: Cloud computing is gaining importance day by day, with a large number of enterprises and individuals opting for cloud computing services. Big organisations such as Amazon, Microsoft, IBM and Google have deployed thousands of servers worldwide to meet customers' computing needs. Round-the-clock reliable computation, fault tolerance and information security are the main issues to be addressed when serving geographically dispersed customer sites.
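    The consolidation idea summarized above (migrate VMs off lightly loaded hosts so those hosts can be switched off) can be illustrated with a simple threshold rule. The following Python sketch is purely illustrative and not taken from any of the surveyed papers; the Host/VM structures and the threshold values are assumptions.

```python
from dataclasses import dataclass, field

UNDERLOAD = 0.3   # assumed lower utilization threshold
OVERLOAD = 0.8    # assumed upper utilization threshold

@dataclass
class Host:
    name: str
    capacity: float                           # normalized CPU capacity
    vms: list = field(default_factory=list)   # list of (vm_name, demand)

    @property
    def utilization(self) -> float:
        return sum(d for _, d in self.vms) / self.capacity

def consolidation_candidates(hosts):
    """Return migrations (vm, source, target) that empty underloaded hosts."""
    migrations = []
    sources = [h for h in hosts if 0 < h.utilization < UNDERLOAD]
    targets = [h for h in hosts if UNDERLOAD <= h.utilization < OVERLOAD]
    for src in sources:
        for vm, demand in list(src.vms):
            # place the VM on the fullest host that still stays below OVERLOAD
            fit = [t for t in targets
                   if (sum(d for _, d in t.vms) + demand) / t.capacity < OVERLOAD]
            if not fit:
                continue
            dst = max(fit, key=lambda t: t.utilization)
            dst.vms.append((vm, demand))
            src.vms.remove((vm, demand))
            migrations.append((vm, src.name, dst.name))
    return migrations

hosts = [Host("h1", 1.0, [("vm1", 0.1)]),
         Host("h2", 1.0, [("vm2", 0.4), ("vm3", 0.2)])]
print(consolidation_candidates(hosts))   # vm1 moves to h2; h1 can be powered off
```

    Real consolidation policies additionally weigh migration cost, network topology and thermal state, as the abstract notes; the sketch only captures the utilization-threshold core of the idea.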

    Topics in Power Usage in Network Services

    The rapid advance of computing technology has created a world powered by millions of computers. Often these computers idly consume energy unnecessarily, in spite of all the efforts of hardware manufacturers. This thesis examines proposals for determining when to power down computers without negatively impacting the service they deliver, compares and contrasts the energy efficiency of virtualisation with containerisation, and investigates the energy efficiency of the popular cryptocurrency Bitcoin. We begin by examining the current corpus of literature and defining the key terms we need to proceed. Then we propose a technique for improving the energy consumption of servers by moving them into a sleep state and employing a low-powered device to act as a proxy in their place. After this we investigate the energy efficiency of virtualisation and compare two of the most common means used to achieve it. Moving on from this we look at the cryptocurrency Bitcoin: we consider the energy consumption of bitcoin mining and whether, set against the value of bitcoin, mining is profitable. Finally we conclude by summarising the results and findings of this thesis. This work increases our understanding of some of the challenges of energy-efficient computation as well as proposing novel mechanisms to save energy.
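    The sleep-proxy technique described in this abstract (park a server in a low-power state and let a small device answer for it until real work arrives) generally hinges on being able to wake the server on demand. The sketch below shows one common wake mechanism, a Wake-on-LAN magic packet, as an illustrative Python example; the MAC and broadcast addresses are placeholders, and the thesis itself may rely on a different protocol.

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must contain exactly 6 bytes")
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

# A proxy might call this when it sees traffic destined for the sleeping server,
# e.g. a TCP SYN to a service port, and hand the connection over once it is awake.
send_magic_packet("00:11:22:33:44:55")   # placeholder MAC address
```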

    Avoiding Interference when Consolidating Services on Time-Shared Resources

    The growing demand for Internet traffic, storage and processing requires ever more hardware resources, and data center operators additionally over-provision their infrastructure so that sufficient capacity is available during demand peaks. The result is low resource utilization and, consequently, increased energy consumption. Consolidating the active services onto a subset of the physical servers during periods of low load lets the idle machines be switched off while the remaining machines are better utilized and hence more energy-efficient. After consolidation, however, services have to share the physical resources with other services, and interactions on these shared resources, so-called interferences, degrade service performance. This thesis addresses interferences that arise from the temporally fluctuating resource consumption of services. These are treated in the framework of the Cutting Stock Problem with non-deterministic lengths (ND-CSP). For the example of the CPU time of individual processor cores, minimizing the number of required active resources reduces the energy consumption by up to 64.1%, and taking the temporal variation of resource consumption into account improves the performance of services by up to 59.6% compared with other consolidation strategies. Additionally, the concept of the 'overlap coefficient' is introduced, which describes the probabilistic relation between two services running in parallel: the more often the services are active at the same time, the higher the coefficient. Services that are not active at the same time can be consolidated without expected interference effects, whereas the consolidation of simultaneously active services should be avoided. An analysis of one of Google's data centers shows that most services can be mapped onto one of these two patterns, while a few with an undetermined relation remain. The ND-CSP is extended by the overlap coefficient and solved approximately; compared with the original ND-CSP, this yields neither an improvement nor a deterioration of service performance at the same energy consumption. With an exact solution and further multi-objective optimization, however, services could in future be allocated such that their interferences are reduced or, ideally, largely eliminated.
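    The overlap coefficient summarized above can be illustrated with a small calculation over activity traces. The Python sketch below uses one plausible definition, the fraction of time both services are active normalized by the smaller of the two individual active fractions; this definition is an assumption for illustration, and the thesis may define the coefficient differently.

```python
def overlap_coefficient(active_a, active_b):
    """Estimate how strongly two services' activity periods coincide.

    active_a, active_b: equally long boolean sequences sampled over time,
    True meaning the service is active in that time slot.
    Returns a value in [0, 1]: 0 = never active together, 1 = the less
    active service is only ever active while the other one is, too.
    """
    assert len(active_a) == len(active_b)
    frac_a = sum(active_a) / len(active_a)
    frac_b = sum(active_b) / len(active_b)
    if min(frac_a, frac_b) == 0:
        return 0.0
    frac_both = sum(a and b for a, b in zip(active_a, active_b)) / len(active_a)
    return frac_both / min(frac_a, frac_b)

# Two services that alternate are good consolidation candidates (coefficient 0.0)
day = [True] * 12 + [False] * 12
night = [False] * 12 + [True] * 12
print(overlap_coefficient(day, night))   # 0.0
print(overlap_coefficient(day, day))     # 1.0: always active together, avoid consolidating
```

    Under this reading, pairs of services with a coefficient near 0 are consolidated first, while pairs near 1 are kept on separate machines.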

    Investigation into the Energy Cost of Live Migration of Virtual Machines

    Abstract—One of the mechanisms to achieve energy efficiency in virtualized environments is to consolidate the workload (virtual machines) of underutilized servers and to switch off these servers altogether. Similarly, the workloads of overloaded servers can be distributed onto other servers for load-balancing reasons. Central to this approach is the migration of virtual machines at runtime, which may introduce its own overhead in terms of energy consumption and service execution latency. This paper experimentally investigates the magnitude of this overhead. We use the Kernel-based Virtual Machine (KVM) hypervisor and a custom-made benchmark for our experiments. We demonstrate that the workload of a virtual machine has no bearing on the power consumption of the destination server during migration, but it does affect the source server. Moreover, the available network bandwidth and the size of the virtual machine introduce a non-negligible energy overhead and migration latency on both the source and the destination server. Index Terms—virtual machine, live virtual machine migration, migration time, migration cost, power consumption, energy overhead, workload types, energy-efficient computing.
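    The overhead this paper measures can be approximated with a first-order model in which migration time is dominated by transferring the VM's memory over the available bandwidth, and the energy overhead is the extra power drawn on source and destination integrated over that time. The model and the numbers below are illustrative assumptions for orientation only, not the paper's measured results, and they ignore the workload-dependent effects the paper reports for the source server.

```python
def migration_cost(vm_memory_gb, bandwidth_gbps,
                   extra_power_src_w=30.0, extra_power_dst_w=20.0):
    """First-order estimate of live-migration latency and energy overhead.

    Assumes a single pre-copy pass (data transferred ~= VM memory size) and a
    constant extra power draw on source and destination while migrating.
    """
    migration_time_s = (vm_memory_gb * 8) / bandwidth_gbps   # GB -> Gb over Gb/s
    energy_overhead_j = (extra_power_src_w + extra_power_dst_w) * migration_time_s
    return migration_time_s, energy_overhead_j

# Example: a 4 GB VM over a 1 Gbit/s link vs. a 10 Gbit/s link
print(migration_cost(4, 1))    # (32.0, 1600.0): 32 s, 1600 J overhead
print(migration_cost(4, 10))   # (3.2, 160.0):   3.2 s, 160 J overhead
```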