
    Optimal VM placement in data centres with architectural and resource constraints

    Recent advances in virtualisation technology enable service provisioning in a flexible way by consolidating several virtual machines (VMs) into a single physical machine (PM). Inter-VM communication is inevitable when a group of VMs in a data centre provides services in a collaborative manner. With the increasing volume of such intra-data-centre traffic, it becomes essential to study VM-to-PM placement so that the aggregated communication cost within a data centre is minimised. This optimisation problem is proven NP-hard and is formulated in this paper as an integer program with quadratic constraints. Unlike existing work, our formulation takes into consideration the data-centre architecture, the inter-VM traffic pattern, and the resource capacity of PMs. Furthermore, a heuristic algorithm is proposed and its high efficiency is extensively validated.
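    To make the flavour of such a heuristic concrete, the sketch below shows a greedy placement in Python that prefers low communication cost under PM capacity limits. The data structures and the simple distance-based cost model are assumptions for illustration only, not the paper's integer-programming formulation or its proposed algorithm.

```python
# Illustrative greedy VM-to-PM placement (an assumption, not the paper's algorithm).
# vm_demand[v]:      resource demand of VM v
# traffic[v][u]:     traffic volume between VMs v and u (dict per VM, may be empty)
# pm_capacity[p]:    remaining capacity of PM p (modified in place)
# pm_distance[p][q]: communication cost between PMs p and q

def greedy_placement(vm_demand, traffic, pm_capacity, pm_distance):
    placement = {}
    # Place the most communication-heavy VMs first.
    order = sorted(vm_demand, key=lambda v: -sum(traffic[v].values()))
    for v in order:
        best_pm, best_cost = None, float("inf")
        for p, cap in pm_capacity.items():
            if cap < vm_demand[v]:
                continue  # this PM cannot host the VM
            # Extra communication cost towards already-placed neighbours.
            cost = sum(traffic[v][u] * pm_distance[p][placement[u]]
                       for u in traffic[v] if u in placement)
            if cost < best_cost:
                best_pm, best_cost = p, cost
        if best_pm is None:
            raise ValueError(f"no PM has capacity for VM {v}")
        placement[v] = best_pm
        pm_capacity[best_pm] -= vm_demand[v]
    return placement
```

    Placing the heaviest communicators first is one common design choice; the paper's heuristic additionally exploits the data-centre architecture and traffic pattern.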

    Network-aware virtual machine placement in cloud data centers with multiple traffic-intensive components

    Following a shift from computing as a purchasable product to computing as a service delivered to consumers over the Internet, cloud computing has emerged as a novel paradigm with unprecedented success in turning utility computing into a reality. Like any emerging technology, its advent also brought new challenges to be addressed. This work studies network- and traffic-aware virtual machine (VM) placement in a particular cloud computing scenario from a provider's perspective, where certain infrastructure components have a predisposition to be the endpoints of a large number of intensive flows whose other endpoints are VMs located in physical machines (PMs). In the scenarios of interest, the performance of any VM is strictly dependent on the infrastructure's ability to meet its intensive traffic demands. We first introduce and attempt to maximize the total value of a metric named "satisfaction" that reflects the performance of a VM when placed on a particular PM. The problem of finding a perfect assignment for a set of given VMs is NP-hard, and there is no known polynomial-time algorithm that yields optimal solutions for large instances. Therefore, we introduce several off-line heuristic-based algorithms that yield nearly optimal solutions given the communication patterns and flow demand profiles of the subject VMs. With extensive simulation experiments we evaluate and compare the effectiveness of our proposed algorithms against each other and against naïve approaches. © 2015 Elsevier B.V. All rights reserved.
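    As a rough, hypothetical illustration of a satisfaction-driven greedy assignment (the satisfaction function below is an assumption; the paper defines its own metric from flow demand profiles and infrastructure capacity):

```python
# Hypothetical satisfaction-maximizing placement sketch, not the paper's algorithm.
# vms:             iterable of VM ids
# pms[p]:          free VM slots on PM p (modified in place)
# flow_demand[v]:  bandwidth VM v needs towards the traffic-intensive components
# available_bw[p]: bandwidth PM p can still offer towards those components

def place_by_satisfaction(vms, pms, flow_demand, available_bw):
    def satisfaction(v, p):
        # Fraction of the VM's demand that PM p can satisfy, capped at 1.
        return min(1.0, available_bw[p] / flow_demand[v]) if flow_demand[v] else 1.0

    placement = {}
    # Serve the most demanding VMs first so they get the best-connected PMs.
    for v in sorted(vms, key=lambda v: -flow_demand[v]):
        candidates = [p for p, slots in pms.items() if slots > 0]
        if not candidates:
            raise ValueError("no free PM slots left")
        best = max(candidates, key=lambda p: satisfaction(v, p))
        placement[v] = best
        pms[best] -= 1
        available_bw[best] -= min(flow_demand[v], available_bw[best])
    return placement  # greedy, so only locally maximizes total satisfaction
```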

    Network-aware virtual machine placement in cloud data centers with multiple traffic-intensive components

    M.S. thesis by Amir Rahimzadeh Ilkhechi, Department of Computer Engineering, Graduate School of Engineering and Science, Bilkent University, Ankara, 2014. Includes bibliographical references (leaves 66-72). Following a shift from computing as a purchasable product to computing as a service delivered to consumers over the Internet, cloud computing emerged as a novel paradigm with unprecedented success in turning utility computing into a reality. Like any emerging technology, its advent also brought new challenges to be addressed. This work studies network- and traffic-aware virtual machine (VM) placement in cloud computing infrastructures from a provider's perspective, where certain infrastructure components have a predisposition to be the sinks or sources of a large number of traffic-intensive flows initiated or targeted by VMs. In the scenarios of interest, the performance of VMs is strictly dependent on the infrastructure's ability to meet their intensive traffic demands. We first introduce and attempt to maximize the total value of a metric named "satisfaction" that reflects the performance of a VM when placed on a particular physical machine (PM). The problem is NP-hard and no polynomial-time algorithm is known to yield an optimal solution. Therefore, we introduce several off-line heuristic-based algorithms that yield nearly optimal solutions given the communication patterns and flow demand profiles of the VMs. We evaluate and compare the performance of our proposed algorithms via extensive simulation experiments.

    A study on performance measures for auto-scaling CPU-intensive containerized applications

    Autoscaling of containers can leverage performance measures from different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for triggering auto-scaling actions aimed at guaranteeing QoS constraints. First, the correlation between absolute and relative usage measures, and how a resource allocation decision can be influenced by them, is analyzed under different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share each container has of the resources it uses. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
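    A minimal sketch of the distinction, using the shape of Kubernetes' Horizontal Pod Autoscaler rule (desired = ceil(current * currentMetric / targetMetric)); the container numbers are made up and the absolute/relative definitions here are simplified assumptions, not the paper's exact variant:

```python
import math

def desired_replicas(current_replicas, current_usage, target_usage):
    # Same shape as Kubernetes' HPA scaling rule.
    return math.ceil(current_replicas * current_usage / target_usage)

host_cores = 8.0
container_limit = 1.0    # CPU cores granted to each container
container_usage = 0.9    # CPU cores actually consumed per container
replicas = 4

relative = container_usage / container_limit         # 0.90: container looks saturated
absolute = replicas * container_usage / host_cores    # 0.45: host is half idle

print(desired_replicas(replicas, relative, 0.7))   # -> 6, scale out
print(desired_replicas(replicas, absolute, 0.7))   # -> 3, scale in
```

    The same target utilization thus yields opposite scaling decisions depending on whether the measure is taken relative to the container's own allocation or to the host's actual capacity.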

    Research challenges on energy-efficient networking design

    The networking research community has started looking into key questions on the energy efficiency of communication networks. Under FP7 the European Commission activated the TREND Network of Excellence, with the goal of integrating the EU research community in green networking and, in the longer term, consolidating European leadership in the field. TREND integrates the activities of major European players in networking, including manufacturers, operators, and research centers, to quantitatively assess the energy demand of current and future telecom infrastructures and to design energy-efficient, scalable, and sustainable future networks. This paper describes the main results of the TREND research community and concludes with a roadmap describing the next steps for standardization, regulation agencies, and research in both academia and industry. The research leading to these results has received funding from the EU 7th Framework Programme (FP7/2007–2013) under Grant Agreement No. 257740 (NoE TREND).

    Climbing Up Cloud Nine: Performance Enhancement Techniques for Cloud Computing Environments

    With the transformation of cloud computing technologies from an attractive trend to a business reality, the need is more pressing than ever for efficient cloud service management tools and techniques. As cloud technologies continue to mature, the service model, resource allocation methodologies, energy efficiency models, and general service management schemes are not yet saturated. The burden of making all of this work falls on cloud providers. Economies of scale and the ability to leverage existing infrastructure and a large workforce are clear positives, but from that point on the operation is far from straightforward. Performance and service delivery still depend on the providers' algorithms and policies, which affect all operational areas. With that in mind, this thesis tackles a set of the more critical challenges faced by cloud providers, with the purpose of enhancing cloud service performance and reducing providers' costs. This is done by exploring innovative resource allocation techniques and developing novel tools and methodologies in the context of cloud resource management, power efficiency, high availability, and solution evaluation. Optimal and suboptimal solutions to the resource allocation problem in cloud data centers are proposed from both the computational and the network sides. Next, a deep dive into the energy efficiency challenge in cloud data centers is presented: consolidation-based and non-consolidation-based solutions, including a novel dynamic virtual machine idleness prediction technique, are proposed and evaluated. An investigation of the problem of simulating cloud environments follows; available simulation solutions are comprehensively evaluated, and a novel design framework for cloud simulators covering multiple variations of the problem is presented. Moreover, the challenge of evaluating the high-availability performance of cloud resource management solutions is addressed: an extensive framework is introduced for designing high availability-aware cloud simulators, and a prominent cloud simulator (GreenCloud) is extended to implement it. Finally, the evaluation of real cloud application scenarios is demonstrated using the new tool. The primary argument made in this thesis is that the proposed resource allocation and simulation techniques can serve as a basis for effective solutions that mitigate the performance and cost challenges faced by cloud providers pertaining to resource utilization, energy efficiency, and client satisfaction.