
    Cicada: Predictive Guarantees for Cloud Network Bandwidth

    In cloud-computing systems, network-bandwidth guarantees have been shown to improve the predictability of application performance and cost. Most previous work on cloud-bandwidth guarantees has assumed that cloud tenants know what bandwidth guarantees they want. However, application bandwidth demands can be complex and time-varying, and many tenants might lack sufficient information to request a bandwidth guarantee that is well matched to their needs. A tenant's lack of accurate knowledge about its future bandwidth demands can lead to over-provisioning (and thus reduced cost-efficiency) or under-provisioning (and thus poor user experience in latency-sensitive user-facing applications). We analyze traffic traces gathered over six months from an HP Cloud Services datacenter, finding that application bandwidth consumption is both time-varying and spatially inhomogeneous. This variability makes it hard to predict requirements. To solve this problem, we develop a prediction algorithm usable by a cloud provider to suggest an appropriate bandwidth guarantee to a tenant. The key idea in the prediction algorithm is to treat a set of previously observed traffic matrices as "experts" and learn online the best weighted linear combination of these experts to make its prediction. With tenant VM placement using these predictive guarantees, we find that the inter-rack network utilization in certain datacenter topologies can be more than doubled.
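    The abstract does not spell out the expert-weighting scheme, so the sketch below shows one plausible realization: each previously observed traffic matrix acts as an expert, and a multiplicative-weights update learns the linear combination online. The function name, the learning rate eta, and the squared-error loss are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def predict_bandwidth(experts, observations, eta=0.5):
    """Predict the next traffic matrix as a weighted combination of "experts".

    experts:      list of K previously observed traffic matrices (N x N arrays)
    observations: iterable yielding the traffic matrix actually seen each interval
    eta:          learning rate for the multiplicative weight update (assumed)

    Yields the prediction made before each observation arrives.
    """
    experts = [np.asarray(e, dtype=float) for e in experts]
    weights = np.ones(len(experts)) / len(experts)   # start with a uniform mix

    for observed in observations:
        # Prediction: current weighted linear combination of the expert matrices.
        yield sum(w * e for w, e in zip(weights, experts))

        # Per-expert loss: mean squared deviation from the observed matrix.
        losses = np.array([np.mean((e - observed) ** 2) for e in experts])

        # Multiplicative-weights (exponentiated-gradient) update, renormalized.
        weights = weights * np.exp(-eta * losses)
        weights /= weights.sum()
```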

    Offline and online power aware resource allocation algorithms with migration and delay constraints

    In order to handle advanced mobile broadband services and the Internet of Things (IoT), future Internet and 5G networks are expected to leverage network virtualization, be much faster, have greater capacity, provide lower latencies, and be significantly more power efficient than current mobile technologies. This paper therefore proposes three power-aware algorithms for offline, online, and migration scenarios, solving the resource allocation problem within network function virtualization (NFV) environments in fractions of a second. The proposed algorithms aim to minimize the total cost and power consumption of the physical network by allocating the fewest physical resources needed to host the demands of the virtual network services and putting all other, unused physical components into power-saving mode. In simulations, the offline algorithm achieved 32% lower total costs than the state of the art. The online algorithm was evaluated in four different experiments, whose results showed that the overall power consumption of the physical network depends strongly on the demands' lifetimes and on the strictness of the required end-to-end delay. For migrations during online operation, the results indicate that the proposed algorithms are most effective when applied under maintenance and emergency conditions.
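    As a rough illustration of the offline case only, the sketch below packs service demands onto as few hosts as possible so that untouched hosts can be switched to power-saving mode. The CPU-only capacity model and all names are simplifying assumptions; the paper's actual algorithms additionally handle end-to-end delay constraints, online arrivals, and migration, which are omitted here.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_capacity: float
    cpu_used: float = 0.0
    active: bool = False

    def fits(self, demand: float) -> bool:
        return self.cpu_used + demand <= self.cpu_capacity

def allocate_offline(hosts, demands):
    """Pack virtual-network demands onto as few hosts as possible so that
    the remaining hosts can be put into power-saving mode."""
    placement = {}
    # Place the largest demands first (first-fit-decreasing flavor).
    for service, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        # Prefer hosts that are already active, then the tightest remaining fit.
        for host in sorted(hosts, key=lambda h: (not h.active,
                                                 h.cpu_capacity - h.cpu_used)):
            if host.fits(demand):
                host.cpu_used += demand
                host.active = True
                placement[service] = host.name
                break
        else:
            raise ValueError(f"no host can accommodate {service!r} ({demand} CPU)")
    idle = [h.name for h in hosts if not h.active]   # candidates for saving mode
    return placement, idle
```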

    Performance Modeling in Predictable Cloud Computing

    This paper deals with the problem of performance stability of software running in shared virtualized infrastructures. The focus is on the ability to build an abstract performance model of containerized application components, where real-time scheduling at the CPU level, along with traffic shaping at the networking level, is used to limit the temporal interference among co-located workloads, so as to obtain a predictable distributed computing platform. A model for a simple client-server application running in containers is used as a case study, and an extensive experimental validation of the model is conducted over a testbed running a modified OpenStack on top of a custom real-time CPU scheduler in the Linux kernel.
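    The abstract does not reproduce the performance model itself; the back-of-envelope sketch below illustrates the kind of estimate such a model yields, assuming a periodic CPU reservation (budget per period) for the server container and a fixed shaping rate for its replies. The parameter names and the worst-case inflation formula are assumptions made for illustration, not the paper's model.

```python
import math

def response_time(cpu_demand, cpu_budget, cpu_period, reply_bytes, shaped_rate):
    """Rough end-to-end response-time estimate for one client request.

    cpu_demand:  CPU time the request needs on a dedicated core (seconds)
    cpu_budget:  CPU time granted to the server container per period (seconds)
    cpu_period:  reservation period (seconds)
    reply_bytes: size of the response sent back to the client (bytes)
    shaped_rate: rate enforced by traffic shaping (bytes per second)
    """
    # Worst case: the request spans several reservation periods and waits out
    # the unreserved part of each period before its budget is replenished.
    periods = math.ceil(cpu_demand / cpu_budget)
    compute_time = cpu_demand + periods * (cpu_period - cpu_budget)

    # Transmission time of the reply under the shaping rate (propagation ignored).
    network_time = reply_bytes / shaped_rate
    return compute_time + network_time
```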

    Allocation of Virtual Machines in Cloud Data Centers - A Survey of Problem Models and Optimization Algorithms

    Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines (VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical resources incurs significant monetary costs as well as environmental impact. Cloud providers must therefore optimize the usage of physical resources through careful allocation of VMs to hosts, continuously balancing the conflicting requirements of performance and operational cost. In recent years, several algorithms have been proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable because of subtle differences in the underlying problem models. This paper surveys the problem formulations and optimization algorithms in use, highlighting their strengths and limitations and pointing out areas that need further research.
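    As a concrete, heavily simplified illustration of the kind of objective on which the surveyed problem models differ, the sketch below scores a candidate allocation with a linear power model plus an overload penalty. The constants, the linear power assumption, and the penalty term are illustrative choices rather than the formulation of any particular surveyed paper.

```python
def placement_cost(hosts, idle_power=100.0, peak_power=250.0, overload_penalty=5.0):
    """Score a candidate VM-to-host allocation (lower is better).

    hosts: dict mapping a host name to (cpu_capacity, list_of_vm_demands);
           a host with no VMs is assumed to be powered off.
    """
    cost = 0.0
    for _name, (capacity, demands) in hosts.items():
        if not demands:
            continue                        # powered-off host draws nothing
        utilization = sum(demands) / capacity
        # Linear power model between idle and peak draw.
        cost += idle_power + (peak_power - idle_power) * min(utilization, 1.0)
        # Penalize overcommitment as a proxy for performance degradation.
        if utilization > 1.0:
            cost += overload_penalty * (utilization - 1.0) * capacity
    return cost
```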