323 research outputs found

    Joint Energy Efficient and QoS-aware Path Allocation and VNF Placement for Service Function Chaining

    Service Function Chaining (SFC) allows the forwarding of a traffic flow along a chain of Virtual Network Functions (VNFs, e.g., IDS, firewall, and NAT). Software Defined Networking (SDN) solutions can be used to support SFC, reducing management complexity and operational costs. One of the most critical issues for service and network providers is the reduction of energy consumption, which should be achieved without impacting the quality of service. In this paper, we propose a novel resource (re)allocation architecture which enables energy-aware SFC for SDN-based networks. To this end, we model the problems of VNF placement, allocation of VNFs to flows, and flow routing as optimization problems. Heuristic algorithms are then proposed for the different optimization problems, in order to find near-optimal solutions in acceptable time. The performance of the proposed algorithms is numerically evaluated over a real-world topology and various network traffic patterns. The results confirm that the proposed heuristic algorithms provide near-optimal solutions while their execution times are suitable for real-life networks.
    Comment: Extended version of submitted paper - v7 - July 201
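    As a rough illustration of the kind of energy-aware placement heuristic the abstract refers to, the sketch below greedily assigns each VNF of a chain to a node, preferring nodes that are already powered on. It is only a minimal stand-in under assumed node attributes (capacity, idle power, active flag), not the paper's algorithm.

    # Illustrative greedy energy-aware VNF placement sketch (not the paper's algorithm).
    # Assumptions: each node has a capacity and a power cost when active; a chain is
    # placed one VNF at a time, preferring already-active nodes to save energy.
    def place_chain(chain_demands, nodes):
        """chain_demands: list of CPU demands, one per VNF in the chain.
        nodes: dict node_id -> {"capacity", "used", "active", "power"}.
        Returns a list of (vnf_index, node_id) or None if placement fails."""
        placement = []
        for i, demand in enumerate(chain_demands):
            # Prefer nodes that are already powered on (no extra idle power),
            # then fall back to waking up the node with the lowest power cost.
            candidates = sorted(
                (n for n, s in nodes.items() if s["capacity"] - s["used"] >= demand),
                key=lambda n: (not nodes[n]["active"], nodes[n]["power"]),
            )
            if not candidates:
                return None  # no feasible node for this VNF
            chosen = candidates[0]
            nodes[chosen]["used"] += demand
            nodes[chosen]["active"] = True
            placement.append((i, chosen))
        return placement

    if __name__ == "__main__":
        nodes = {
            "n1": {"capacity": 8.0, "used": 0.0, "active": True,  "power": 200.0},
            "n2": {"capacity": 8.0, "used": 0.0, "active": False, "power": 150.0},
        }
        print(place_chain([2.0, 3.0, 4.0], nodes))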

    Impact of Processing-Resource Sharing on the Placement of Chained Virtual Network Functions

    Network Function Virtualization (NFV) provides higher flexibility for network operators and reduces the complexity of network service deployment. Using NFV, Virtual Network Functions (VNFs) can be located in various network nodes and chained together in a Service Function Chain (SFC) to provide a specific service. Consolidating multiple VNFs in a smaller number of locations would allow decreasing capital expenditures. However, excessive consolidation of VNFs might cause additional latency penalties due to processing-resource sharing, which is undesirable, as SFCs are bound by service-specific latency requirements. In this paper, we identify two different types of penalties (referred to as "costs") related to processing-resource sharing among multiple VNFs: the context switching costs and the upscaling costs. Context switching costs arise when multiple CPU processes (e.g., supporting different VNFs) share the same CPU, so that repeated loading/saving of their context is required. Upscaling costs are incurred by VNFs requiring multi-core implementations, since they suffer a penalty due to load-balancing needs among CPU cores. These costs affect how the chained VNFs are placed in the network to meet the performance requirements of the SFCs. We evaluate their impact while considering SFCs with different bandwidth and latency requirements in a VNF consolidation scenario.
    Comment: Accepted for publication in IEEE Transactions on Cloud Computing
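    To make the two penalty types concrete, the following sketch folds a context-switching term and an upscaling term into a per-node latency estimate. The linear cost model and the coefficients are illustrative assumptions, not the model or values from the paper.

    # Rough per-node latency estimate adding context-switching and upscaling penalties
    # to a base processing time. Coefficients and the linear form are assumptions.
    def node_latency(vnfs, cores, ctx_switch_cost=0.05, upscale_cost=0.10):
        """vnfs: list of dicts {"base_ms": float, "cores_needed": int} sharing one node.
        cores: number of physical cores on the node."""
        processes = len(vnfs)
        total = 0.0
        for v in vnfs:
            latency = v["base_ms"]
            # Context switching: penalty grows with the number of co-located
            # processes competing for the same cores.
            if processes > cores:
                latency += ctx_switch_cost * (processes - cores) * v["base_ms"]
            # Upscaling: multi-core VNFs pay a load-balancing penalty per extra core.
            if v["cores_needed"] > 1:
                latency += upscale_cost * (v["cores_needed"] - 1) * v["base_ms"]
            total += latency
        return total

    if __name__ == "__main__":
        vnfs = [{"base_ms": 1.0, "cores_needed": 1}, {"base_ms": 2.0, "cores_needed": 4}]
        print(node_latency(vnfs, cores=4))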

    Distributed VNF Scaling in Large-scale Datacenters: An ADMM-based Approach

    Network Functions Virtualization (NFV) is a promising network architecture in which network functions are virtualized and decoupled from proprietary hardware. In modern datacenters, user network traffic requires a set of Virtual Network Functions (VNFs), forming a service chain, to process traffic demands. Traffic fluctuations in Large-scale DataCenters (LDCs) can result in overload and underload phenomena in service chains. In this paper, we propose a distributed approach based on the Alternating Direction Method of Multipliers (ADMM) to jointly load-balance the traffic and horizontally scale VNFs up and down in LDCs with minimum deployment and forwarding costs. First, we formulate the targeted optimization problem as a Mixed Integer Linear Programming (MILP) model, which is NP-complete. Second, we relax it into two Linear Programming (LP) models to cope with over- and underloaded service chains. For small or medium-size datacenters, the LP models can be run in a centralized fashion with low time complexity. In LDCs, however, the growing number of LP variables increases the running time of the centralized algorithm. To mitigate this, we propose a distributed approach based on ADMM. The effectiveness of the proposed mechanism is validated in different scenarios.
    Comment: IEEE International Conference on Communication Technology (ICCT), Chengdu, China, 201
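    The alternating structure of ADMM (local solve, global averaging, dual update) can be seen in the toy consensus loop below. The paper applies ADMM to LP relaxations of the scaling problem; this quadratic toy problem only shows the mechanics, and all names and values are illustrative assumptions.

    # Toy consensus-ADMM loop: minimize sum_i (a[i]/2)*(z - b[i])**2 by keeping a
    # local copy x[i] per "agent" and a shared consensus value z.
    def consensus_admm(a, b, rho=1.0, iters=100):
        n = len(a)
        x = [0.0] * n
        u = [0.0] * n  # scaled dual variables
        z = 0.0
        for _ in range(iters):
            # Local (x) update: closed-form minimizer of the augmented Lagrangian term.
            x = [(a[i] * b[i] + rho * (z - u[i])) / (a[i] + rho) for i in range(n)]
            # Global (z) update: average of local estimates plus duals.
            z = sum(x[i] + u[i] for i in range(n)) / n
            # Dual update: accumulate consensus residuals.
            u = [u[i] + x[i] - z for i in range(n)]
        return z

    if __name__ == "__main__":
        # The analytic optimum is the a-weighted average of b; ADMM converges to it.
        print(consensus_admm(a=[1.0, 2.0, 3.0], b=[10.0, 20.0, 30.0]))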

    Hardware-accelerator aware VNF-chain recovery

    Hardware accelerators in Network Function Virtualization (NFV) environments have helped telecommunications companies (telcos) reduce their expenditures by offloading compute-intensive VNFs to the accelerators. To fully realize the benefits of hardware accelerators, VNF-chain recovery models need to be adapted. In this paper, we present an ILP model for optimizing the prioritized recovery of VNF-chains in heterogeneous NFV environments following node failures. We also propose an accelerator-aware heuristic for solving large prioritized VNF-chain recovery problems in reasonable time. Evaluation results show that the performance of the heuristic matches that of the ILP with regard to the restoration of high- and medium-priority VNF-chains, with only a small penalty for low-priority VNF-chains.
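    A simplified stand-in for a prioritized, accelerator-aware recovery pass is sketched below: failed chains are re-placed in priority order, preferring accelerator-equipped nodes for offloadable VNFs. The data model and tie-breaking rule are assumptions, not the paper's heuristic.

    # Priority-ordered greedy recovery sketch (illustrative, not the paper's method).
    def recover_chains(failed_chains, nodes):
        """failed_chains: list of {"id", "priority", "vnfs": [{"cpu", "accel_ok"}]}
        (lower priority value = more important).
        nodes: dict node_id -> {"cpu_free": float, "has_accel": bool}."""
        restored = []
        for chain in sorted(failed_chains, key=lambda c: c["priority"]):
            reserved = {n: 0.0 for n in nodes}  # capacity tentatively claimed by this chain
            tentative = []
            ok = True
            for vnf in chain["vnfs"]:
                # Prefer accelerator nodes for offloadable VNFs, else any node with room.
                candidates = sorted(
                    (n for n, s in nodes.items() if s["cpu_free"] - reserved[n] >= vnf["cpu"]),
                    key=lambda n: not (vnf["accel_ok"] and nodes[n]["has_accel"]),
                )
                if not candidates:
                    ok = False
                    break
                chosen = candidates[0]
                reserved[chosen] += vnf["cpu"]
                tentative.append((vnf["cpu"], chosen))
            if ok:  # commit only if the whole chain fits
                for cpu, n in tentative:
                    nodes[n]["cpu_free"] -= cpu
                restored.append(chain["id"])
        return restored

    if __name__ == "__main__":
        chains = [
            {"id": "c1", "priority": 0, "vnfs": [{"cpu": 2.0, "accel_ok": True}]},
            {"id": "c2", "priority": 1, "vnfs": [{"cpu": 3.0, "accel_ok": False}]},
        ]
        nodes = {"n1": {"cpu_free": 4.0, "has_accel": True},
                 "n2": {"cpu_free": 2.0, "has_accel": False}}
        print(recover_chains(chains, nodes))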

    Probabilistic QoS-aware Placement of VNF chains at the Edge

    Deploying IoT-enabled Virtual Network Function (VNF) chains to Cloud-Edge infrastructures requires determining a placement for each VNF that satisfies all set deployment requirements as well as a software-defined routing of traffic flows between consecutive functions that meets all set communication requirements. In this article, we present a declarative solution, EdgeUsher, to the problem of how to best place VNF chains to Cloud-Edge infrastructures. EdgeUsher can determine all eligible placements for a set of VNF chains to a Cloud-Edge infrastructure so to satisfy all of their hardware, IoT, security, bandwidth, and latency requirements. It exploits probability distributions to model the dynamic variations in the available Cloud-Edge infrastructure, and to assess output eligible placements against those variations
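    As a loose illustration of the probabilistic QoS assessment described above, the sketch below estimates, by Monte Carlo sampling, the probability that a candidate placement's end-to-end latency stays within a chain's requirement given per-link latency distributions. EdgeUsher itself is declarative; this Python stand-in and its Gaussian link model are assumptions.

    # Monte Carlo estimate of P(end-to-end latency <= requirement) for one path.
    import random

    def qos_probability(link_latency_dists, max_latency_ms, samples=10000):
        """link_latency_dists: list of (mean_ms, stddev_ms) per hop on the path."""
        hits = 0
        for _ in range(samples):
            total = sum(max(0.0, random.gauss(mu, sigma)) for mu, sigma in link_latency_dists)
            if total <= max_latency_ms:
                hits += 1
        return hits / samples

    if __name__ == "__main__":
        path = [(5.0, 1.0), (12.0, 3.0), (8.0, 2.0)]  # hypothetical cloud-edge path
        print(qos_probability(path, max_latency_ms=30.0))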