    Metaheuristic approaches to virtual machine placement in cloud computing: a review

    Glowworm swarm optimisation algorithm for virtual machine placement in cloud computing

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and to be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially diverse specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and in turn lead to performance degradation and violations of service level agreements (SLAs). Efficient scheduling therefore requires load balancing strategies, and the underlying placement problem has been proved to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is proposed, and the surveyed algorithms are organized according to it. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight for potential future enhancements. (Comment: 22 pages, 4 figures, 4 tables, in press.)
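    The survey itself does not prescribe one algorithm; since exact VM-to-PM placement is NP-hard, the load-balancing schemes it classifies are typically greedy heuristics. The following minimal sketch, not taken from the paper, illustrates the idea: each VM is assigned to the feasible PM whose post-placement CPU and memory utilizations are most balanced. All names (Vm, Pm, imbalance_after) and the two-dimensional resource model are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Vm:
    cpu: float   # requested CPU share (same units as PM capacity)
    mem: float   # requested memory

@dataclass
class Pm:
    cpu_cap: float
    mem_cap: float
    cpu_used: float = 0.0
    mem_used: float = 0.0

    def fits(self, vm: Vm) -> bool:
        return (self.cpu_used + vm.cpu <= self.cpu_cap and
                self.mem_used + vm.mem <= self.mem_cap)

    def imbalance_after(self, vm: Vm) -> float:
        # |CPU utilization - memory utilization| after hosting vm;
        # smaller means the two dimensions are used more evenly.
        cpu_u = (self.cpu_used + vm.cpu) / self.cpu_cap
        mem_u = (self.mem_used + vm.mem) / self.mem_cap
        return abs(cpu_u - mem_u)

def place(vms: List[Vm], pms: List[Pm]) -> List[Optional[int]]:
    """Greedy, balance-aware placement: each VM goes to the feasible PM
    with the smallest post-placement CPU/memory imbalance."""
    assignment: List[Optional[int]] = []
    for vm in vms:
        candidates = [(pm.imbalance_after(vm), i)
                      for i, pm in enumerate(pms) if pm.fits(vm)]
        if not candidates:
            assignment.append(None)   # no feasible PM: reject or scale out
            continue
        _, best = min(candidates)
        pms[best].cpu_used += vm.cpu
        pms[best].mem_used += vm.mem
        assignment.append(best)
    return assignment
```

    The metaheuristic approaches listed above (e.g. glowworm swarm optimisation) explore many such assignments rather than committing to a single greedy pass.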

    Migration energy aware reconfigurations of virtual network function instances in NFV architectures

    Get PDF
    Network function virtualization (NFV) is a new network architecture framework in which network functions are implemented in software running on a pool of shared commodity servers. NFV can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape. A service is represented by a service function chain (SFC), that is, a set of virtual network functions (VNFs) to be executed in a given order. Running the VNFs requires instantiating VNF instances (VNFIs), software modules executed on virtual machines. This paper deals with the problem of migrating the VNFIs during low-traffic periods so that servers can be turned off and energy consumption saved. Although consolidation allows for energy saving, it also has negative effects, such as quality of service degradation and the energy consumed to move the memory associated with the VNFIs being migrated. We focus on cold migration, in which virtual machines are made redundant and suspended before the migration is performed. We propose a migration policy that determines when and where to migrate VNFIs in response to changes in SFC request intensity. The objective is to minimize the total energy consumption, given by the sum of the consolidation and migration energies. We formulate the energy-aware VNFI migration problem and, after proving that it is NP-hard, propose a heuristic based on the Viterbi algorithm that determines the migration policy with low computational complexity. The results obtained with the proposed heuristic show how the introduced policy reduces the migration energy and consequently the total energy consumption with respect to traditional policies. The energy saving can be on the order of 40% with respect to a policy in which migration is not performed.
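    The abstract only outlines the heuristic, so the following is a hedged sketch of how a Viterbi-style dynamic program can trade consolidation energy against migration energy over a sequence of traffic intervals; the cost model, function name, and input layout are assumptions for illustration, not the authors' implementation.

```python
from typing import List, Tuple

def min_energy_policy(run_energy: List[List[float]],
                      mig_energy: List[List[float]],
                      start: int) -> Tuple[float, List[int]]:
    """Viterbi-style search for a placement sequence that minimizes
    total energy = per-interval running (consolidation) energy
    plus migration energy whenever the placement changes.

    run_energy[t][s]: energy of keeping candidate placement s during interval t
    mig_energy[a][b]: energy of migrating VNFIs from placement a to placement b
                      (mig_energy[a][a] is assumed to be 0)
    start: placement in use before the first interval
    """
    T, S = len(run_energy), len(run_energy[0])
    INF = float("inf")
    cost = [[INF] * S for _ in range(T)]   # best cost ending in placement s at interval t
    prev = [[-1] * S for _ in range(T)]    # backpointers for path recovery

    for s in range(S):                     # first interval: migrate from `start` (or stay)
        cost[0][s] = mig_energy[start][s] + run_energy[0][s]

    for t in range(1, T):                  # forward pass over intervals
        for s in range(S):
            for p in range(S):
                c = cost[t - 1][p] + mig_energy[p][s] + run_energy[t][s]
                if c < cost[t][s]:
                    cost[t][s], prev[t][s] = c, p

    best = min(range(S), key=lambda s: cost[T - 1][s])
    path = [best]
    for t in range(T - 1, 0, -1):          # backtrack the cheapest placement sequence
        path.append(prev[t][path[-1]])
    path.reverse()
    return cost[T - 1][best], path
```

    The forward pass costs O(T * S^2); in practice the number of candidate placements S would be pruned, which is where the low computational complexity claimed by the heuristic comes from.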

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to use resources efficiently across all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are, to varying degrees, resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current work in the field of edge computing. We then review a wide range of recent articles and categorize relevant aspects along four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. (Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.)
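    Purely as a data-model illustration (not an artifact of the survey), the four taxonomy perspectives named in the abstract could be captured as enumerations used to tag each reviewed work; only the category values explicitly mentioned in the abstract are listed, and resource use is left as free text since it is not enumerated there.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Set

class ResourceType(Enum):
    COMPUTATION = auto()
    COMMUNICATION = auto()
    DATA = auto()
    STORAGE = auto()
    ENERGY = auto()

class Objective(Enum):
    ESTIMATION = auto()
    DISCOVERY = auto()
    SHARING = auto()
    # further objectives from the full taxonomy would be added here

class Location(Enum):
    END_DEVICE = auto()
    EDGE_DEVICE = auto()
    CLOUD = auto()

@dataclass
class ReviewedWork:
    """One surveyed article tagged along the four taxonomy perspectives."""
    title: str
    resource_types: Set[ResourceType]
    objectives: Set[Objective]
    locations: Set[Location]
    resource_use: str  # free-text description of how the resource is used
```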