89,300 research outputs found

    Dynamic, Latency-Optimal vNF Placement at the Network Edge

    Future networks are expected to support low-latency, context-aware and user-specific services in a highly flexible and efficient manner. One approach to support emerging use cases such as virtual reality and in-network image processing is to introduce virtualized network functions (vNFs) at the edge of the network, placed in close proximity to the end users to reduce end-to-end latency, time-to-response, and unnecessary utilisation of the core network. While placement of vNFs has been studied before, it has so far mostly focused on reducing the utilisation of server resources (i.e., minimising the number of servers required in the network to run a specific set of vNFs), without taking network conditions into consideration, such as end-to-end latency, constantly changing network dynamics, or user mobility patterns. In this paper, we formulate the Edge vNF placement problem to allocate vNFs to a distributed edge infrastructure, minimising end-to-end latency from all users to their associated vNFs. We present a way to dynamically re-schedule the optimal placement of vNFs based on temporal network-wide latency fluctuations, using optimal stopping theory. We then evaluate our dynamic scheduler over a simulated nation-wide backbone network using real-world ISP latency characteristics. We show that our proposed dynamic placement scheduler minimises vNF migrations compared to other schedulers (e.g., periodic and always-on scheduling of a new placement) and offers Quality of Service guarantees by not exceeding the maximum number of latency violations that certain applications can tolerate.
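
    To make the latency-minimising objective concrete, the sketch below greedily maps each user's vNF to the lowest-latency edge node that still has spare capacity. It is only an illustration of the placement idea, not the paper's actual formulation or its optimal-stopping scheduler; the latency values, node names, and capacities are invented.

```python
# Hedged sketch (assumption, not the paper's formulation or its optimal-stopping
# scheduler): greedily map each user's vNF to the reachable edge node with the
# lowest latency that still has free capacity. All names and values are invented.

def place_vnfs(latency, capacity):
    """latency: user -> {edge node -> ms}; capacity: edge node -> free vNF slots."""
    placement = {}
    for user, lat in latency.items():
        # try candidate nodes in increasing order of user-to-node latency
        for node in sorted(lat, key=lat.get):
            if capacity[node] > 0:
                placement[user] = node
                capacity[node] -= 1
                break
    return placement

latency = {"u1": {"edgeA": 4.0, "edgeB": 9.0},
           "u2": {"edgeA": 7.0, "edgeB": 3.0}}
capacity = {"edgeA": 1, "edgeB": 1}
print(place_vnfs(latency, capacity))  # {'u1': 'edgeA', 'u2': 'edgeB'}
```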

    An online algorithm for dynamic NFV placement in cloud-based autonomous response networks

    Autonomous response networks are becoming a reality thanks to recent advances in cloud computing, Network Function Virtualization (NFV) and Software-Defined Networking (SDN) technologies. These enhanced networks fully enable autonomous real-time management of virtualized infrastructures. In this context, one of the major challenges is how virtualized network resources can be effectively placed. Although this issue has been addressed before in cloud-based environments, it is not yet completely resolved for the online placement of virtual machines. To this end, this paper proposes an online heuristic algorithm called Topology-Aware Placement of Virtual Network Functions (TAP-VNF) as a low-complexity solution for such dynamic infrastructures. As a complement, we provide a general formulation of the network function placement problem using the service function chaining concept. Furthermore, two metrics, consolidation and aggregation, are used to validate the efficiency of the proposal in experimental simulations. We have compared our approach with optimal solutions in terms of consolidation and aggregation ratios, showing performance better suited to dynamic cloud-based environments. The obtained results show that TAP-VNF also outperforms existing approaches based on traditional bin packing schemes.
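
    As an illustration of what an online, low-complexity placement loop and a consolidation-style metric can look like, the sketch below uses a simple first-fit rule and reports how many servers end up hosting vNFs. It is not the TAP-VNF algorithm; the server capacities, demands, and metric definition are assumptions made for the example.

```python
# Hedged sketch of an online, first-fit placement loop with a simple
# consolidation-style metric (how many servers end up hosting vNFs). This
# illustrates the general idea only, not TAP-VNF itself; all values are invented.

CAPACITY = 8                              # assumed CPU units per server
servers = {"s1": CAPACITY, "s2": CAPACITY, "s3": CAPACITY}

def place_online(vnf_demand, servers):
    # first-fit: use the first server that can still host the arriving vNF
    for name, free in servers.items():
        if free >= vnf_demand:
            servers[name] = free - vnf_demand
            return name
    return None                           # rejected: no server has spare capacity

arrivals = [3, 4, 2, 5, 1]                # CPU units demanded by arriving vNFs
placements = [place_online(d, servers) for d in arrivals]

used = sum(1 for free in servers.values() if free < CAPACITY)
print(placements, f"servers used: {used}/{len(servers)}")
# ['s1', 's1', 's2', 's2', 's1'] servers used: 2/3
```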

    A service-oriented approach for dynamic chaining of virtual network functions over multi-provider software-defined networks

    Emerging technologies such as Software-Defined Networks (SDN) and Network Function Virtualization (NFV) promise to address cost reduction and flexibility in network operation while enabling innovative network service delivery models. However, operational network service delivery solutions that actually exploit these technologies, especially at the multi-provider level, still need to be developed. Indeed, the implementation of network functions as software running over a virtualized infrastructure and provisioned on a service basis lets one envisage an ecosystem of network services that are dynamically and flexibly assembled by orchestrating Virtual Network Functions even across different provider domains, thereby coping with changing user and service requirements and context conditions. In this paper we propose an approach that adopts Service-Oriented Architecture (SOA) technology-agnostic architectural guidelines in the design of a solution for orchestrating and dynamically chaining Virtual Network Functions. We discuss how SOA, NFV, and SDN may complement each other in realizing dynamic network function chaining through service composition specification, service selection, service delivery, and placement tasks. Then, we describe the architecture of a SOA-inspired NFV orchestrator, which leverages SDN-based network control capabilities to enable the effective delivery of elastic chains of Virtual Network Functions. Preliminary results of prototype implementation and testing activities are also presented. We also describe the benefits that Network Service Providers derive from adaptive network service provisioning in a multi-provider environment, where the orchestration of computing and networking services provides end users with an enhanced service experience.
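
    The service-selection step of such an orchestrator can be pictured as resolving each function type in a chain to one provider's offering. The sketch below does this with a toy catalogue and a lowest-latency rule; the catalogue contents, provider names, and selection criterion are invented for illustration, and the paper's orchestrator is considerably richer.

```python
# Hedged sketch of the service-selection step for a VNF chain: each function
# type is resolved to one provider's offering, here by lowest advertised latency.
# The catalogue, providers, and selection rule are invented for illustration.

catalogue = {
    "firewall":      [("providerA", 5.0), ("providerB", 3.5)],   # (provider, ms)
    "load_balancer": [("providerA", 2.0), ("providerC", 4.0)],
    "dpi":           [("providerB", 6.0), ("providerC", 4.5)],
}

def compose_chain(chain):
    """Pick, for each function type in the chain, the lowest-latency offer."""
    return [(fn, *min(catalogue[fn], key=lambda offer: offer[1])) for fn in chain]

for fn, provider, ms in compose_chain(["firewall", "load_balancer", "dpi"]):
    print(f"{fn:>14} -> {provider} ({ms} ms)")
```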

    The P-ART framework for placement of virtual network services in a multi-cloud environment

    Carriers’ network services are distributed, dynamic, and investment intensive. Deploying them as virtual network services (VNS) brings the promise of low-cost agile deployments, which reduce the time to market for new services. If these virtual services are hosted dynamically over multiple clouds, greater flexibility in optimizing performance and cost can be achieved. On the flip side, when orchestrated over multiple clouds, the stringent performance norms for carrier services become difficult to meet, necessitating novel and innovative placement strategies. In selecting the appropriate combination of clouds for placement, it is important to look ahead and visualize the environment that will exist at the time a virtual network service is actually activated. This serves multiple purposes: clouds can be selected to optimize the cost, the chosen performance parameters can be kept within the defined limits, and the speed of placement can be increased. In this paper, we propose the P-ART (Predictive-Adaptive Real Time) framework, which relies on predictive-deductive features to achieve these objectives. With so much riding on predictions, we include in our framework a novel concept-drift compensation technique that brings the predictions closer to reality by accounting for long-term traffic variations, while near real-time updates of the prediction models take care of sudden short-term variations. These predictions are then used by a new randomized placement heuristic that carries out fast cloud selection using a least-cost, latency-constrained policy. An empirical analysis, carried out using datasets from a queuing-theoretic model and through an implementation on CloudLab, demonstrates the effectiveness of the P-ART framework. The placement system works fast, placing thousands of functions within a sub-minute time frame with a high acceptance ratio, making it suitable for dynamic placement. We expect the framework to be an important step in making the deployment of carrier-grade VNS on multi-cloud systems, using network function virtualization (NFV), a reality.
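
    The least-cost, latency-constrained selection step can be sketched as follows: keep only the clouds whose predicted latency satisfies the bound, then randomise over the cheapest few. This is a hedged illustration rather than P-ART's actual heuristic or prediction model; the predicted values, cloud names, and the parameter k are assumptions.

```python
# Hedged sketch of a least-cost, latency-constrained, randomized cloud selection
# step; not P-ART's actual heuristic or prediction model. The "predicted" values,
# cloud names, and the parameter k are assumptions made for this example.
import random

clouds = {   # predicted latency (ms) and unit cost at planned activation time
    "cloud1": {"latency": 12.0, "cost": 5.0},
    "cloud2": {"latency": 25.0, "cost": 2.0},
    "cloud3": {"latency": 18.0, "cost": 3.0},
}

def select_cloud(latency_bound, k=2):
    """Randomize over the k cheapest clouds that satisfy the latency bound."""
    feasible = [c for c, m in clouds.items() if m["latency"] <= latency_bound]
    if not feasible:
        return None                       # reject: no cloud meets the constraint
    cheapest = sorted(feasible, key=lambda c: clouds[c]["cost"])[:k]
    return random.choice(cheapest)

print(select_cloud(latency_bound=20.0))   # 'cloud3' or 'cloud1'
```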

    Leveraging NFV heterogeneity at the network edge

    With network function virtualisation (NFV) and network programmability, network functions (NFs) such as firewalls, traffic load balancers, content filters, and intrusion detection systems (IDS) are virtualized and instantiated either on user-space hosts using virtual machines (VMs) or lightweight containers, in the network data plane using programmable switching technology such as P4, or offloaded onto smart network interface cards (NICs), and are often chained together to create a service function chain (SFC) based on a defined service level agreement (SLA). The need to leverage heterogeneous programmable platforms to support the in-network acceleration of functions keeps growing as emerging use cases come with peculiar requirements. This thesis identifies various heterogeneous frameworks for deploying virtual network functions that network operators can leverage in service provider networks. A novel taxonomy is proposed that provides network operators and the wider research community with valuable insights. The thesis presents the performance gains obtained from using heterogeneous frameworks for deploying virtual network functions on real testbeds. In addition, this thesis investigates the optimal placement of vNFs over the distributed edge network while considering the heterogeneity of packet processing elements. In particular, the work questions the status quo of how vNFs are currently being deployed, i.e., the lack of frameworks to support the seamless deployment of vNFs that are implemented on diverse packet processing platforms, leveraging the capability of the programmable network data plane. In response, the thesis presents a novel integer linear programming (ILP) model for the hybrid placement of diverse network functions that leverages the heterogeneity of the network data plane and the abundant processing capability of user-space hosts, with the objective of minimizing the end-to-end latency of vNF placements. A novel hybrid placement heuristic algorithm, HYPHA, is also proposed to find a quick, efficient solution to the hybrid vNF placement problem. Using optimal stopping theory (OST) principles, an optimal placement scheduling model is presented to handle dynamic edge placement scenarios. The results in this work demonstrate that employing a hybrid deployment scheme that leverages the processing capability of the network data plane yields minimal user-to-vNF latency and overall end-to-end latency while fulfilling a diverse set of user requests from emerging use cases, speeding up service delivery by network operators. The results also show that network operators can leverage the high-speed, low-latency features of data plane packet processing elements to host delay-sensitive applications and improve service delivery for subscribed users. It is shown that the proposed hybrid heuristic algorithm can obtain near-optimal vNF mappings while incurring fewer violations of the latency thresholds set by network operators. Furthermore, in addition to emerging edge use cases, the placement solution presented in this thesis can be adapted to place network functions efficiently in core network infrastructure while leveraging the heterogeneity of servers. The dynamic placement scheduler also minimises the number of latency violations and vNF migrations between heterogeneous hosts, based on SLAs set by network operators.
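
    A minimal sketch of the hybrid placement idea is shown below: delay-sensitive functions that a programmable data-plane target can implement are offloaded there, and everything else falls back to a user-space host. It stands in for neither the thesis's ILP model nor the HYPHA heuristic; the set of offloadable functions and the latency figures are illustrative assumptions.

```python
# Hedged sketch of the hybrid placement idea only; it stands in for neither the
# thesis's ILP model nor the HYPHA heuristic. The set of offloadable functions
# and the per-target processing latencies are illustrative assumptions.

DATA_PLANE_CAPABLE = {"firewall", "load_balancer"}   # assumed offloadable NFs
SWITCH_LATENCY_MS, HOST_LATENCY_MS = 0.1, 2.0        # assumed processing latency

def hybrid_place(requests):
    """requests: list of (nf_type, delay_sensitive) tuples."""
    placement = []
    for nf, delay_sensitive in requests:
        if delay_sensitive and nf in DATA_PLANE_CAPABLE:
            placement.append((nf, "p4_switch", SWITCH_LATENCY_MS))
        else:
            placement.append((nf, "user_space_host", HOST_LATENCY_MS))
    return placement

for nf, target, ms in hybrid_place([("firewall", True), ("dpi", True),
                                    ("load_balancer", False)]):
    print(f"{nf:>14} -> {target:15} ({ms} ms)")
```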
