
    Balancing the Migration of Virtual Network Functions with Replications in Data Centers

    The Network Function Virtualization (NFV) paradigm enables flexibility, programmability, and the implementation of traditional network functions on generic hardware, in the form of so-called Virtual Network Functions (VNFs). Today, cloud service providers use Virtual Machines (VMs) to instantiate VNFs in data center (DC) networks. To instantiate multiple VNFs in a typical Service Function Chain (SFC) scenario, several important objectives must be met simultaneously, such as server load balancing, energy efficiency, and service execution time. The well-known VNF placement problem requires solutions that often rely on migration of VMs to meet these objectives. Ongoing efforts, for instance, make a strong case for migrations to minimize energy consumption, while showing that attention needs to be paid to the Quality of Service (QoS) because of the service interruptions that migrations cause. To balance server allocation strategies and QoS, we propose using replications of VNFs to reduce migrations in DC networks. We propose a Linear Programming (LP) model to study the trade-off between replications, which benefit QoS but require additional server resources, and migrations, which benefit server load management but can adversely impact QoS. The results show that, for a given objective, replications can reduce the number of migrations and also enable better server and data center network load balancing.
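    A toy integer-programming sketch of the migration-versus-replication trade-off described above, assuming the PuLP library; the servers, VNFs, loads, and cost weights are illustrative placeholders, not the paper's actual formulation.

```python
# Toy sketch: place VNFs on servers, penalizing migrations (alpha) and extra
# replicas (beta). All data below is made up for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

servers = ["s1", "s2"]
vnfs = ["fw", "nat", "ids"]
initial = {"fw": "s1", "nat": "s1", "ids": "s2"}   # current placement
load = {"fw": 30, "nat": 20, "ids": 40}            # CPU units per instance
capacity = {"s1": 60, "s2": 60}
alpha, beta = 1.0, 0.5                             # migration vs replication weights

prob = LpProblem("vnf_placement", LpMinimize)

# x[v, s] = 1 if an instance of VNF v runs on server s
x = {(v, s): LpVariable(f"x_{v}_{s}", cat=LpBinary) for v in vnfs for s in servers}
migrate = {v: LpVariable(f"mig_{v}", cat=LpBinary) for v in vnfs}   # left initial server
replicas = {v: LpVariable(f"rep_{v}", lowBound=0) for v in vnfs}    # instances beyond one

for v in vnfs:
    prob += lpSum(x[v, s] for s in servers) >= 1              # VNF must run somewhere
    prob += replicas[v] >= lpSum(x[v, s] for s in servers) - 1
    prob += migrate[v] >= 1 - x[v, initial[v]]                 # no longer on initial server

for s in servers:
    prob += lpSum(load[v] * x[v, s] for v in vnfs) <= capacity[s]   # server capacity

prob += alpha * lpSum(migrate.values()) + beta * lpSum(replicas.values())
prob.solve(PULP_CBC_CMD(msg=False))

for v in vnfs:
    placed = [s for s in servers if x[v, s].value() == 1]
    print(v, "->", placed, "(migrated)" if migrate[v].value() else "(kept)")
```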

    Scalable topological forwarding and routing policies in RINA-enabled programmable data centers

    This is the peer-reviewed version of the following article: Leon Gaixas S, Perelló J, Careglio D, Grasa E, López DR, Aranda PA. Scalable topological forwarding and routing policies in RINA-enabled programmable data centers. Trans Emerging Tel Tech. 2017;28:e3256, which has been published in final form at DOI: 10.1002/ett.3256. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
    Given the current expansion of cloud computing, the expected advent of the Internet of Things, and the requirements of future fifth-generation network infrastructures, significantly larger pools of computational and storage resources will soon be required. This emphasizes the need for more scalable data centers capable of providing such resources in a cost-effective way. A quick look at today's commercial data centers shows that they tend to rely on variations of well-defined leaf-spine/Clos data center network (DCN) topologies, offering low latency, ultrahigh bisectional bandwidth, and enhanced reliability against concurrent failures. However, DCNs are typically restricted to the Transmission Control Protocol/Internet Protocol (TCP/IP) suite and thus suffer from limited routing scalability. In this work, we study the benefits that replacing TCP/IP with the Recursive InterNetwork Architecture (RINA) can bring to commercial DCNs, focusing on forwarding and routing scalability. We quantitatively evaluate the benefits that RINA solutions can yield against those based on TCP/IP and highlight how, by deploying RINA, topological routing solutions can further improve the efficiency of the network. To this end, we propose a rule-and-exception forwarding policy tailored to the characteristics of several DCN variants, enabling fast forwarding decisions using only neighbors' information. Upon failures, only a few exceptions are necessary, and their computation can also profit from the known topology. Extensive numerical results show that the requirements of the proposed policy depend mainly on the number of neighbors and concurrent failures in the DCN rather than on its size, dramatically reducing the amount of forwarding and routing information stored at DCN nodes.
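    A minimal sketch of what rule-and-exception forwarding on a leaf switch of a leaf-spine fabric could look like, assuming a simple "<leaf>.<host>" addressing scheme; the data structures and rules are illustrative, not the policy defined in the paper.

```python
# Illustrative rule-and-exception forwarding for a leaf switch. The default
# rule needs only the identities of directly attached neighbors; exceptions
# are installed per destination leaf when a failure is detected.
class LeafForwarder:
    def __init__(self, leaf_id, hosts, spines):
        self.leaf_id = leaf_id        # this leaf's identifier
        self.hosts = set(hosts)       # locally attached hosts
        self.spines = list(spines)    # uplink neighbors (spine switches)
        self.exceptions = {}          # dst leaf -> forced next hop (failure handling)

    def next_hop(self, dst):
        """Return the next hop for a destination address '<leaf>.<host>'."""
        dst_leaf, dst_host = dst.split(".")
        if dst_leaf in self.exceptions:        # exception overrides the default rule
            return self.exceptions[dst_leaf]
        if dst_leaf == self.leaf_id:           # rule 1: local delivery
            return f"host:{dst_host}"
        # Rule 2: remote leaf, spread traffic across spine uplinks by hashing
        # the destination, with no per-destination forwarding state.
        return f"spine:{self.spines[hash(dst) % len(self.spines)]}"

    def add_exception(self, dst_leaf, next_hop):
        """Install an exception, e.g. when the default uplink toward dst_leaf fails."""
        self.exceptions[dst_leaf] = next_hop


fwd = LeafForwarder("L1", hosts=["h1", "h2"], spines=["S1", "S2"])
print(fwd.next_hop("L1.h2"))          # local delivery
print(fwd.next_hop("L3.h7"))          # default rule: hashed spine uplink
fwd.add_exception("L3", "spine:S2")   # e.g. the uplink toward L3 via S1 failed
print(fwd.next_hop("L3.h7"))          # exception takes precedence
```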

    Bayesian Adaptive Path Allocation Techniques for Intra-Datacenter Workloads

    Data center networks (DCNs) are the backbone of many cloud and Internet services. They are vulnerable to link failures, which occur frequently, often on a daily basis. Service disruption due to link failure may incur financial losses, compliance breaches, and reputation damage, and performance metrics such as packet loss and routing flaps are negatively affected by these failure events. We propose a new Bayesian learning approach to adaptive path allocation that aims to improve DCN performance by reducing both the packet loss and routing flap ratios. The proposed approach incorporates historical information about link failure and usage probabilities into its allocation procedure and updates this information on the fly during DCN operation. We evaluate the proposed framework using an experimental platform built with the POX controller and the Mininet emulator. Compared with a benchmark shortest-path algorithm, the results show that the proposed methods perform better in terms of reducing packet loss and routing flaps.
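    A minimal sketch of the kind of Bayesian updating such a scheme could rest on, assuming each link keeps a Beta-Bernoulli posterior over failure that is refreshed from monitoring observations; the priors, links, and candidate paths are illustrative, not the paper's model.

```python
# Illustrative Beta-Bernoulli posterior over per-link failure, used to pick the
# candidate path with the lowest expected number of failed links.
class LinkBelief:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha   # pseudo-count of observed failures
        self.beta = beta     # pseudo-count of observed healthy intervals

    def observe(self, failed):
        """Update the posterior with one monitoring interval (on the fly)."""
        if failed:
            self.alpha += 1
        else:
            self.beta += 1

    def failure_prob(self):
        """Posterior mean probability that the link fails in the next interval."""
        return self.alpha / (self.alpha + self.beta)


def pick_path(paths, beliefs):
    """Choose the candidate path with the smallest expected number of link failures."""
    return min(paths, key=lambda p: sum(beliefs[link].failure_prob() for link in p))


beliefs = {link: LinkBelief() for link in ["a-b", "b-d", "a-c", "c-d"]}
for _ in range(20):
    beliefs["a-b"].observe(failed=True)    # link a-b has a poor failure history
    beliefs["a-c"].observe(failed=False)

candidates = [["a-b", "b-d"], ["a-c", "c-d"]]
print(pick_path(candidates, beliefs))      # -> ['a-c', 'c-d']
```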

    On energy consumption of switch-centric data center networks

    The data center network (DCN) is the core of cloud computing and accounts for 40% of the energy spend of the whole data center (DC) facility when compared with the cooling system, power distribution, and power conversion. Reducing the energy consumption of the DCN is therefore essential to achieving an energy-efficient (green) data center. An analysis of DC performance and efficiency is presented, emphasizing the effect of bandwidth provisioning and throughput on the energy proportionality of the two most common switch-centric DCN topologies, three-tier (3T) and fat tree (FT), based on the amount of actual energy that is turned into computing power. The energy consumption of switch-centric DCNs is analyzed through realistic simulations using the GreenCloud simulator. Power-related metrics were derived and adapted for the information technology equipment (ITE) processes within the DCN. These metrics are acknowledged as a subset of the major metrics known to DCs: power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE). This study suggests that although FT consumes more energy overall, it spends less energy to transmit a single bit of information, thus outperforming 3T.
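    A minimal sketch of the metrics this comparison boils down to, using the standard PUE/DCIE definitions and an energy-per-bit figure; all numbers are placeholders, not measured GreenCloud results.

```python
# Illustrative energy metrics: PUE, DCIE, and DCN energy per transmitted bit
# for two topologies. The figures below are made up for illustration only.
def pue(total_facility_kwh, ite_kwh):
    """Power usage effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / ite_kwh

def dcie(total_facility_kwh, ite_kwh):
    """Data center infrastructure efficiency: the inverse of PUE, as a fraction."""
    return ite_kwh / total_facility_kwh

def energy_per_bit(network_kwh, bits_transmitted):
    """Joules spent by the DCN per transmitted bit (1 kWh = 3.6e6 J)."""
    return network_kwh * 3.6e6 / bits_transmitted

facility_kwh, ite_kwh = 1000.0, 625.0
print(f"PUE  = {pue(facility_kwh, ite_kwh):.2f}")    # 1.60
print(f"DCIE = {dcie(facility_kwh, ite_kwh):.2%}")   # 62.50%

# Fat tree draws more energy overall but carries far more traffic, so its
# per-bit cost can still come out lower than three-tier's.
topologies = {"3T": (220.0, 4.0e15), "FT": (260.0, 9.0e15)}   # (kWh, bits)
for name, (net_kwh, bits) in topologies.items():
    print(f"{name}: {energy_per_bit(net_kwh, bits):.2e} J/bit")
```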