668 research outputs found

    Dynamic, Latency-Optimal vNF Placement at the Network Edge

    Get PDF
    Future networks are expected to support low-latency, context-aware and user-specific services in a highly flexible and efficient manner. One approach to support emerging use cases such as virtual reality and in-network image processing is to introduce virtualised network functions (vNFs) at the edge of the network, placed in close proximity to the end users to reduce end-to-end latency, time-to-response, and unnecessary utilisation of the core network. While placement of vNFs has been studied before, prior work has mostly focused on reducing the utilisation of server resources (i.e., minimising the number of servers required in the network to run a specific set of vNFs), without taking network conditions into consideration, such as end-to-end latency, constantly changing network dynamics, or user mobility patterns. In this paper, we formulate the Edge vNF placement problem to allocate vNFs to a distributed edge infrastructure, minimising end-to-end latency from all users to their associated vNFs. We present a way to dynamically re-schedule the optimal placement of vNFs based on temporal network-wide latency fluctuations, using optimal stopping theory. We then evaluate our dynamic scheduler over a simulated nation-wide backbone network using real-world ISP latency characteristics. We show that our proposed dynamic placement scheduler minimises vNF migrations compared to other schedulers (e.g., periodic and always-on scheduling of a new placement), and offers Quality of Service guarantees by not exceeding the maximum number of latency violations that can be tolerated by certain applications.
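
    As a rough sketch of the latency-minimal assignment described above, the snippet below writes a tiny placement ILP with the PuLP solver; the users, edge nodes, latency values and per-node capacity are invented for illustration, and the paper's full formulation (network dynamics, migrations, mobility) is not reproduced here.

        # Illustrative latency-minimal vNF-to-edge-node assignment ILP.
        # All input data below is made up for the example.
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

        users = ["u1", "u2", "u3"]
        nodes = ["e1", "e2"]
        latency = {                           # user-to-edge-node latency (ms)
            ("u1", "e1"): 4, ("u1", "e2"): 9,
            ("u2", "e1"): 7, ("u2", "e2"): 3,
            ("u3", "e1"): 6, ("u3", "e2"): 5,
        }
        capacity = {"e1": 2, "e2": 2}         # vNFs each edge node can host

        prob = LpProblem("edge_vnf_placement", LpMinimize)
        x = {(u, n): LpVariable(f"x_{u}_{n}", cat=LpBinary)
             for u in users for n in nodes}

        # Objective: total end-to-end latency from users to their vNFs.
        prob += lpSum(latency[u, n] * x[u, n] for u in users for n in nodes)

        # Each user's vNF is hosted on exactly one edge node.
        for u in users:
            prob += lpSum(x[u, n] for n in nodes) == 1

        # Respect each node's hosting capacity.
        for n in nodes:
            prob += lpSum(x[u, n] for u in users) <= capacity[n]

        prob.solve(PULP_CBC_CMD(msg=False))
        print({u: n for (u, n), var in x.items() if var.value() >= 0.5})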

    Towards lightweight, low-latency network function virtualisation at the network edge

    Get PDF
    Communication networks are witnessing a dramatic growth in the number of connected mobile devices, sensors and Internet of Everything (IoE) equipment, which has been estimated to exceed 50 billion by 2020, generating zettabytes of traffic each year. In addition, networks are stressed to serve the increased capabilities of mobile devices (e.g., HD cameras) and to fulfil users' desire for always-on, multimedia-oriented, and low-latency connectivity. To cope with these challenges, service providers are exploiting softwarised, cost-effective, and flexible service provisioning, known as Network Function Virtualisation (NFV). At the same time, future networks aim to push services to the edge of the network, in close physical proximity to the users, which has the potential to reduce end-to-end latency while increasing the flexibility and agility of allocating resources. However, the heavy footprint of today's NFV platforms and their lack of dynamic, latency-optimal orchestration prevent them from being used at the edge of the network. In this thesis, the opportunities of bringing NFV to the network edge are identified. As a concrete solution, the thesis presents Glasgow Network Functions (GNF), a container-based NFV framework that allocates and dynamically orchestrates lightweight virtual network functions (vNFs) at the edge of the network, providing low-latency network services (e.g., security functions or content caches) to users. The thesis presents a powerful formalisation for the latency-optimal placement of edge vNFs and provides an exact solution using Integer Linear Programming, along with a placement scheduler that relies on Optimal Stopping Theory to efficiently re-calculate the placement following roaming users and temporal changes in latency characteristics. The results of this work demonstrate that GNF's real-world vNF examples can be created and hosted on a variety of hosting devices, including VMs in public clouds and low-cost edge devices typically found at the customer's premises. The results also show that GNF can carefully manage the placement of vNFs to provide low-latency guarantees, while minimising the number of vNF migrations required by operators to keep the placement latency-optimal.
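
    The stopping-based re-scheduling can be pictured with a toy trigger, sketched below; the baseline-plus-k-sigma rule, window size and parameter values are assumptions made for illustration and are not the policy derived in the thesis.

        # Toy re-placement trigger in the spirit of an optimal-stopping scheduler.
        # The threshold rule and its parameters are illustrative only.
        from statistics import mean, stdev

        def should_recompute(latency_samples, window=10, k=2.0):
            """Recompute the placement when the newest end-to-end latency
            sample deviates sharply from the recent baseline."""
            if len(latency_samples) <= window:
                return False                       # not enough history yet
            history = latency_samples[-window - 1:-1]
            baseline, spread = mean(history), stdev(history)
            return latency_samples[-1] > baseline + k * spread

        samples = [5.1, 5.3, 4.9, 5.0, 5.2, 5.1, 5.0, 4.8, 5.2, 5.1, 5.0, 9.7]
        print(should_recompute(samples))           # True: spike warrants re-placement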

    Optimal Orchestration of Virtual Network Functions

    Full text link
    The emergence of Network Functions Virtualization (NFV) is bringing a set of novel algorithmic challenges in the operation of communication networks. NFV introduces volatility in the management of network functions, which can be dynamically orchestrated, i.e., placed, resized, etc. Virtual Network Functions (VNFs) can belong to VNF chains, where nodes in a chain can serve multiple demands coming from the network edges. In this paper, we formally define the VNF placement and routing (VNF-PR) problem, proposing a versatile linear programming formulation that is able to accommodate specific features and constraints of NFV infrastructures, and that is substantially different from existing virtual network embedding formulations in the state of the art. We also design a math-heuristic able to scale with multiple objectives and large instances. Through extensive simulations, we draw conclusions on the trade-off achievable between classical traffic engineering (TE) and NFV infrastructure efficiency goals, evaluating both Internet access and Virtual Private Network (VPN) demands. We also quantitatively compare the performance of our VNF-PR heuristic with the classical Virtual Network Embedding (VNE) approach proposed for NFV orchestration, showing the computational differences and how our approach can provide a more stable and closer-to-optimum solution.
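
    As a rough illustration of that trade-off, written in my own notation rather than the paper's VNF-PR model, a weighted bi-objective of the form

        \min \; \alpha \cdot \max_{\ell \in L} \frac{f_\ell}{c_\ell} \; + \; \beta \sum_{n \in N} y_n

    captures the tension: the first term is the classical TE goal of keeping the worst link utilisation low (load f_l over capacity c_l), the second counts active VNF locations (y_n = 1 if node n hosts VNFs), and the weights alpha and beta set the balance between traffic engineering and NFV infrastructure efficiency.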

    Impact of Processing-Resource Sharing on the Placement of Chained Virtual Network Functions

    Full text link
    Network Function Virtualization (NFV) provides higher flexibility for network operators and reduces the complexity of network service deployment. Using NFV, Virtual Network Functions (VNFs) can be located in various network nodes and chained together in a Service Function Chain (SFC) to provide a specific service. Consolidating multiple VNFs in a smaller number of locations would allow decreasing capital expenditures. However, excessive consolidation of VNFs might cause additional latency penalties due to processing-resource sharing, and this is undesirable, as SFCs are bounded by service-specific latency requirements. In this paper, we identify two different types of penalties (referred to as "costs") related to processing-resource sharing among multiple VNFs: the context switching costs and the upscaling costs. Context switching costs arise when multiple CPU processes (e.g., supporting different VNFs) share the same CPU and thus repeated loading/saving of their context is required. Upscaling costs are incurred by VNFs requiring multi-core implementations, since they suffer a penalty due to the load-balancing needs among CPU cores. These costs affect how the chained VNFs are placed in the network to meet the performance requirements of the SFCs. We evaluate their impact while considering SFCs with different bandwidth and latency requirements in a scenario of VNF consolidation. (Accepted for publication in IEEE Transactions on Cloud Computing.)
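
    A small additive latency model makes the two penalties concrete; the linear form and the coefficients below are assumptions for illustration, not the cost model derived in the paper.

        # Illustrative per-VNF processing latency with context-switching and
        # upscaling penalties; all coefficients are invented example values.
        def vnf_processing_latency(base_latency_ms, colocated_vnfs, cores_used,
                                   ctx_switch_ms=0.25, upscaling_ms=0.5):
            # Context-switching cost grows with the number of VNFs sharing the CPU.
            ctx_cost = ctx_switch_ms * max(colocated_vnfs - 1, 0)
            # Upscaling cost applies only to multi-core VNFs (load-balancing overhead).
            up_cost = upscaling_ms if cores_used > 1 else 0.0
            return base_latency_ms + ctx_cost + up_cost

        print(vnf_processing_latency(1.0, colocated_vnfs=4, cores_used=1))  # 1.75
        print(vnf_processing_latency(1.0, colocated_vnfs=1, cores_used=1))  # 1.0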

    Virtual-Mobile-Core Placement for Metro Network

    Full text link
    Traditional highly-centralized mobile core networks (e.g., the Evolved Packet Core (EPC)) need to be constantly upgraded, both in their network functions and backhaul links, to meet increasing traffic demands. Network Function Virtualization (NFV) is being investigated as a potential cost-effective solution for this upgrade. A virtual mobile core (here, a virtual EPC, vEPC) provides deployment flexibility and scalability while reducing costs, network-resource consumption and application delay. Moreover, a distributed deployment of the vEPC is essential for emerging paradigms like Multi-Access Edge Computing (MEC). In this work, we show that a significant reduction in network-resource consumption can be achieved as a result of optimal placement of vEPC functions in a metro area. Further, we show that not all vEPC functions need to be distributed. In our study, for the first time, we account for vEPC interactions in both data and control planes (Non-Access Stratum (NAS) signaling procedure Service Chains (SCs) with application latency requirements) using a detailed mathematical model.

    Specifying and Placing Chains of Virtual Network Functions

    Full text link
    Network appliances perform different functions on network flows and constitute an important part of an operator's network. Normally, network flows are processed by a set of chained network functions. Following the trend of network virtualization, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together, considering the limited network resources and the requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.
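
    As a minimal illustration of turning a chaining request into a function graph that could then be mapped onto the network, the sketch below uses an invented request format and function names rather than the paper's context-free grammar.

        # Expand an ordered chaining request into a tiny linear VNF graph.
        # The request format and function names are invented for illustration.
        def chain_to_graph(request):
            nodes = [f"{fn}_{i}" for i, fn in enumerate(request)]
            edges = list(zip(nodes, nodes[1:]))   # traffic flows along the chain
            return nodes, edges

        nodes, edges = chain_to_graph(["firewall", "dpi", "cache"])
        print(nodes)   # ['firewall_0', 'dpi_1', 'cache_2']
        print(edges)   # [('firewall_0', 'dpi_1'), ('dpi_1', 'cache_2')]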