
    Distributed VNF Scaling in Large-scale Datacenters: An ADMM-based Approach

    Network Functions Virtualization (NFV) is a promising network architecture in which network functions are virtualized and decoupled from proprietary hardware. In modern datacenters, user network traffic requires a set of Virtual Network Functions (VNFs), composed as a service chain, to process traffic demands. Traffic fluctuations in Large-scale DataCenters (LDCs) can cause overload and underload in service chains. In this paper, we propose a distributed approach based on the Alternating Direction Method of Multipliers (ADMM) to jointly load-balance the traffic and horizontally scale VNFs up and down in LDCs with minimum deployment and forwarding costs. First, we formulate the targeted optimization problem as a Mixed Integer Linear Programming (MILP) model, which is NP-complete. Second, we relax it into two Linear Programming (LP) models to cope with overloaded and underloaded service chains. In small or medium-sized datacenters, the LP models can be run in a centralized fashion with low time complexity. In LDCs, however, the growing number of LP variables increases the running time of the centralized algorithm. To mitigate this, our study proposes a distributed approach based on ADMM. The effectiveness of the proposed mechanism is validated in different scenarios.
    Comment: IEEE International Conference on Communication Technology (ICCT), Chengdu, China, 201
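To make the ADMM idea concrete, here is a minimal consensus-ADMM sketch in Python/NumPy on a toy distributed least-squares problem, not the paper's VNF scaling LP: each agent solves a small local subproblem (in parallel), a shared consensus variable is obtained by averaging, and scaled dual variables penalize disagreement. The agents, data, and penalty parameter rho are illustrative assumptions.

```python
# Consensus ADMM on a toy distributed least-squares problem (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, rho = 4, 3, 1.0

# Each "agent" (e.g. one part of the datacenter) holds a private slice of the data.
A = [rng.normal(size=(20, dim)) for _ in range(n_agents)]
b = [rng.normal(size=20) for _ in range(n_agents)]

x = [np.zeros(dim) for _ in range(n_agents)]   # local primal variables
u = [np.zeros(dim) for _ in range(n_agents)]   # scaled dual variables
z = np.zeros(dim)                              # global consensus variable

for k in range(100):
    # Local updates (parallelizable): argmin 0.5*||A_i x - b_i||^2 + (rho/2)*||x - z + u_i||^2
    for i in range(n_agents):
        lhs = A[i].T @ A[i] + rho * np.eye(dim)
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    # Global step: the only coordination needed per iteration is an average.
    z = np.mean([x[i] + u[i] for i in range(n_agents)], axis=0)
    # Dual updates push the local variables toward consensus.
    for i in range(n_agents):
        u[i] += x[i] - z

print("consensus solution:", z)
```

The appeal in large datacenters is visible even in this toy: the per-iteration coordination reduces to a cheap averaging step, while all heavy solves stay local.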

    Learning Augmented Optimization for Network Softwarization in 5G

    The rapid uptake of mobile devices and applications is placing unprecedented traffic burdens on existing networking infrastructures. In order to maximize both user experience and return on investment, networking and communications systems are evolving to the next generation, 5G, which is expected to support more flexibility, agility, and intelligence in provisioned services and infrastructure management. Fulfilling these tasks is challenging, as today's networks are increasingly heterogeneous, dynamic, and large in scale. Network softwarization is one of the critical enabling technologies for implementing these requirements in 5G. Beyond the problems investigated in preliminary research on this technology, many newly emerging application requirements and advanced optimization and learning technologies introduce further challenges and opportunities for its full application in practical production environments. This motivates the thesis to develop a new learning augmented optimization technology, which merges advanced optimization and learning techniques to meet the distinct characteristics of the new application environment. The key contents of this thesis are summarized as follows:
• We first develop a stochastic solution to augment the optimization of Network Function Virtualization (NFV) services in dynamic networks. In contrast to the dominant NFV solutions designed for deterministic networking environments, the inherent network dynamics and uncertainties of the 5G infrastructure are impeding the rollout of NFV in many emerging networking applications. Chapter 3 therefore investigates the network utility degradation that arises when implementing NFV in dynamic networks and proposes a robust NFV solution that fully respects the underlying stochastic features. By exploiting the hierarchical decision structure of this problem, a distributed computing framework with a two-level decomposition is designed to facilitate a distributed implementation of the proposed model in large-scale networks.
• Next, Chapter 4 aims to intertwine traditional optimization and learning technologies. To reap the merits of both optimization and learning while avoiding their limitations, promising integrative approaches are investigated that combine traditional optimization theory with advanced learning methods. An online optimization process is then designed to learn the system dynamics of the network slicing problem, another critical challenge for network softwarization. Specifically, we first present a two-stage slicing optimization model with time-averaged constraints and objective to safeguard network slicing operations in time-varying networks. Directly solving this problem offline is intractable, since future system realizations are unknown before decisions are made. To address this, we combine historical learning and Lyapunov stability theory and develop a learning augmented online optimization approach. This allows the system to learn a safe slicing solution from both historical records and real-time observations. We prove that the proposed solution is always feasible and nearly optimal, up to a constant additive factor. Finally, simulation experiments are provided to demonstrate the considerable improvement achieved by the proposals.
• The success of traditional solutions for optimizing stochastic systems often requires solving a base optimization program repeatedly until convergence. In each iteration, the base program has the same model structure and differs only in its input data. These properties of stochastic optimization systems motivate the work of Chapter 5, in which we apply recent deep learning technologies to abstract the core structure of an optimization model and then use the learned deep learning model to directly generate solutions to the equivalent optimization model. In this respect, an encoder-decoder based learning model is developed in Chapter 5 to improve the optimization of network slices. To facilitate solving the constrained combinatorial optimization program in a deep learning manner, we design a problem-specific decoding process that integrates the program constraints and problem context information into the training process. Once trained, the deep learning model can directly generate the solution to any specific problem instance. This avoids the extensive computation of traditional approaches, which re-solve the whole combinatorial optimization problem for every instance from scratch. With the help of the REINFORCE gradient estimator, the obtained deep learning model achieves significantly reduced computation time and optimality loss in the experiments.
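As a concrete illustration of the REINFORCE gradient estimator mentioned in the last bullet, the PyTorch sketch below trains a tiny policy to pick the cheapest of a handful of candidate solutions. It demonstrates only the estimator itself (sample an action, weight its log-probability by an advantage); the thesis's encoder-decoder model, constrained decoding, and slicing problem are not reproduced, and the toy cost vector, network, and baseline are assumptions.

```python
# REINFORCE on a toy discrete selection problem (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
k = 5
costs = torch.tensor([4.0, 1.0, 4.0, 3.5, 5.0])   # cost of each candidate "solution"
policy = nn.Sequential(nn.Linear(k, 16), nn.ReLU(), nn.Linear(16, k))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
context = torch.ones(1, k)                        # stand-in for encoded problem features
baseline = -costs.mean()                          # constant baseline reward to reduce variance

for step in range(500):
    dist = torch.distributions.Categorical(logits=policy(context))
    action = dist.sample()                        # sample a discrete solution
    reward = -costs[action]                       # lower cost => higher reward
    advantage = reward - baseline
    loss = -dist.log_prob(action) * advantage     # REINFORCE: -(log prob) * advantage
    opt.zero_grad(); loss.backward(); opt.step()

# After training, the greedy choice typically converges to index 1 (the lowest cost).
print("greedy choice:", int(policy(context).argmax()))
```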

    Network Function Virtualization Service Delivery In Future Internet

    This dissertation investigates Network Function Virtualization (NFV) service delivery problems in the future Internet. With emerging Internet-of-everything, 5G communication, and multi-access edge computing techniques, a tremendous number of end-user devices are connected to the Internet. This massive population of end-user devices enables various services between the devices and cloud/edge servers. To improve service quality and agility, NFV is applied. In NFV, the customer's data from these services must pass through multiple Service Functions (SFs) for processing or analysis. Unlike traditional point-to-point data transmission, a particular set of SFs and customized service requirements must be applied to the customer's traffic flow, so traditional point-to-point data transmission methods cannot be used directly. Novel mechanisms are therefore needed to deliver NFV services with customized requirements effectively. As a result, this dissertation proposes a series of mechanisms for delivering NFV services with diverse requirements. First, we study how to deliver traditional NFV service with a provable bound in unique function networks. Second, considering both forward and backward traffic, we investigate how to effectively deliver NFV service when the SFs required by the forward and backward traffic are not the same. Third, we investigate how to efficiently deliver NFV service when the required SFs have specific execution-order constraints. We also provide detailed analysis and discussion of the proposed mechanisms and validate their performance via extensive simulations. The results demonstrate that the proposed mechanisms can efficiently and effectively deliver NFV services under different requirements and networking conditions. Finally, we propose two future research topics for further investigation: the first focuses on parallelism-aware service function chaining and embedding, and the second investigates the survivability of NFV services.
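As one illustration of the execution-order constraint described above (not the dissertation's mechanism), a common textbook device is a layered graph in which layer i means "the first i SFs have already been applied": copies of the network are stacked, zero-cost edges jump a flow to the next layer only at nodes hosting the next SF, and a single shortest-path query then yields an order-respecting route. The toy topology, SF placement, and costs below are assumptions.

```python
# Ordered service-function chaining via a layered-graph shortest path (illustrative only).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 2), ("a", "c", 4), ("c", "d", 1)])
hosts = {"f1": {"b"}, "f2": {"c"}}        # which nodes can execute each SF
chain = ["f1", "f2"]                      # SFs must be applied in this order
src, dst = "a", "d"

L = nx.DiGraph()
for layer in range(len(chain) + 1):       # layer i = "i SFs already applied"
    for u, v, w in G.edges(data="weight"):
        L.add_edge((u, layer), (v, layer), weight=w)
        L.add_edge((v, layer), (u, layer), weight=w)
for i, sf in enumerate(chain):            # zero-cost "apply the next SF here" transitions
    for node in hosts[sf]:
        L.add_edge((node, i), (node, i + 1), weight=0)

path = nx.shortest_path(L, (src, 0), (dst, len(chain)), weight="weight")
print([node for node, _ in path])         # e.g. ['a', 'b', 'b', 'c', 'c', 'd']
```

The construction multiplies the graph size by the chain length plus one, which is fine for short chains but helps explain why specialized mechanisms and provable bounds matter at scale.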

    Graph-based feature enrichment for online intrusion detection in virtual networks

    The increasing number of connected devices needed to provide the ubiquity required by the Internet of Things paves the way for distributed network attacks at an unprecedented scale. Graph theory, strengthened by machine learning techniques, improves the automatic discovery of group behavior patterns of network threats that are often missed by traditional security systems. Furthermore, Network Function Virtualization is an emergent technology that accelerates the provisioning of on-demand security function chains tailored to an application. Therefore, repeatable compliance tests and performance comparisons of such function chains are mandatory. The contributions of this dissertation are divided into two parts. First, we propose an intrusion detection system for online threat detection enriched by graph-learning analysis. We develop a feature enrichment algorithm that infers metrics from a graph analysis. Using different machine learning techniques, we evaluate our algorithm on three network traffic datasets. We show that the proposed graph-based enrichment improves threat detection accuracy by up to 15.7% and significantly reduces the false-positive rate. Second, we aim to evaluate intrusion detection systems deployed as virtual network functions. To this end, we propose and develop SFCPerf, a framework for automatic performance evaluation of service function chaining. To demonstrate SFCPerf functionality, we design and implement a prototype of a security service function chain, composed of our intrusion detection system and a firewall. We show the results of an SFCPerf experiment that evaluates the chain prototype on top of the Open Platform for Network Function Virtualization (OPNFV).
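To illustrate the general idea of graph-based feature enrichment (not the dissertation's specific algorithm or datasets), the sketch below builds a communication graph from a few made-up flow records with networkx and appends per-host graph metrics, degree and betweenness centrality, to each flow record before it would be handed to a classifier. The flows and the choice of metrics are assumptions.

```python
# Graph-based feature enrichment of flow records (illustrative only).
import networkx as nx

flows = [  # (src_ip, dst_ip, bytes) -- made-up records
    ("10.0.0.1", "10.0.0.2", 1200),
    ("10.0.0.1", "10.0.0.3", 300),
    ("10.0.0.4", "10.0.0.2", 800),
    ("10.0.0.3", "10.0.0.2", 150),
]

# Build a directed communication graph, accumulating transferred bytes per edge.
G = nx.DiGraph()
for src, dst, size in flows:
    prev = G.get_edge_data(src, dst, {"weight": 0})["weight"]
    G.add_edge(src, dst, weight=prev + size)

degree = dict(G.degree())                   # how many peers each host talks to
betweenness = nx.betweenness_centrality(G)  # how central a host is to the traffic

# Append the graph metrics of both endpoints to every flow record.
enriched = [
    {"src": s, "dst": d, "bytes": b,
     "src_degree": degree[s], "dst_degree": degree[d],
     "src_betweenness": betweenness[s], "dst_betweenness": betweenness[d]}
    for s, d, b in flows
]
for row in enriched:
    print(row)
```

A downstream classifier then sees not only per-flow statistics but also how structurally central each endpoint is, which is the kind of group-behavior signal the enrichment above aims to capture.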

    Online Service Provisioning in NFV-enabled Networks Using Deep Reinforcement Learning

    In this paper, we study a Deep Reinforcement Learning (DRL) based framework for online end-user service provisioning in a Network Function Virtualization (NFV)-enabled network. We formulate an optimization problem aiming to minimize the cost of network resource utilization. The main challenge is provisioning the online service requests while fulfilling their Quality of Service (QoS) requirements under limited resource availability. Fulfilling stochastic service requests in a large network is another challenge evaluated in this paper. To solve the formulated optimization problem efficiently and intelligently, we propose a Deep Q-Network for Adaptive Resource allocation (DQN-AR) in NFV-enabled networks for function placement and dynamic routing, which considers the available network resources as the DQN states. Moreover, the service characteristics, namely the service lifetime and the number of arriving requests, are modeled by uniform and exponential distributions, respectively. In addition, we evaluate the computational complexity of the proposed method. Numerical results carried out for different parameter ranges reveal the effectiveness of our framework. In particular, the obtained results show that the average number of admitted requests increases by 7% up to 14% and the network utilization cost decreases by 5% up to 20%.
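The following is a compact sketch of the DQN mechanics such a framework builds on, applied to a made-up toy environment in which the state is the normalized remaining capacity of each node and the action is the node chosen to host an arriving request. It uses an epsilon-greedy policy and a TD(0) target but omits the replay buffer and target network of a full DQN; DQN-AR itself, its state/action encoding, reward, and routing component are not reproduced here, and all numbers below are illustrative assumptions.

```python
# Minimal DQN-style placement loop on a toy environment (illustrative only).
import random
import torch
import torch.nn as nn

torch.manual_seed(0); random.seed(0)
n_nodes = 4
q_net = nn.Sequential(nn.Linear(n_nodes, 32), nn.ReLU(), nn.Linear(32, n_nodes))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1

def step(capacity, action):
    """Place one unit of demand on `action`; reward favors feasible, balanced placements."""
    if capacity[action] <= 0:
        return capacity, -1.0                     # rejected: no capacity left on that node
    new_cap = capacity.clone(); new_cap[action] -= 1
    return new_cap, 1.0 - 0.1 * (capacity.max() - capacity[action]).item()

capacity = torch.full((n_nodes,), 5.0)
for t in range(500):
    state = capacity / 5.0                        # normalized resource availability
    if random.random() < eps:
        action = random.randrange(n_nodes)        # explore
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax())   # exploit current Q estimates
    next_capacity, reward = step(capacity, action)
    with torch.no_grad():
        target = reward + gamma * q_net(next_capacity / 5.0).max()
    loss = (q_net(state)[action] - target) ** 2   # TD error on the chosen action
    opt.zero_grad(); loss.backward(); opt.step()
    capacity = next_capacity
    if capacity.sum() == 0:                       # all capacity consumed: start a new episode
        capacity = torch.full((n_nodes,), 5.0)
```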

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming (ILP) formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
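As a toy illustration of advance reservation as an integer program (much simpler than the formulation in the paper, which handles routing over a real topology and media-production cost terms), the PuLP sketch below assigns each transfer request to one of its admissible time slots on a single shared link, subject to per-slot capacity. The requests, capacity, slots, and the "prefer earlier slots" objective are assumptions.

```python
# Toy advance-reservation ILP on a single link (illustrative only).
import pulp

capacity = 10                                   # link capacity per time slot (Gb/s)
slots = [0, 1, 2]
# request -> (bandwidth needed, admissible slots given its known deadline)
requests = {"r1": (6, [0, 1]), "r2": (7, [1, 2]), "r3": (5, [0, 2])}

prob = pulp.LpProblem("advance_reservation", pulp.LpMinimize)
x = {(r, t): pulp.LpVariable(f"x_{r}_{t}", cat="Binary")
     for r, (_, ts) in requests.items() for t in ts}

# Each request is scheduled exactly once, in one of its admissible slots.
for r, (_, ts) in requests.items():
    prob += pulp.lpSum(x[r, t] for t in ts) == 1
# Per-slot link capacity.
for t in slots:
    prob += pulp.lpSum(bw * x[r, t] for r, (bw, ts) in requests.items() if t in ts) <= capacity
# Stand-in objective: prefer earlier slots (the paper optimizes richer cost terms).
prob += pulp.lpSum(t * x[r, t] for (r, t) in x)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({(r, t): int(x[r, t].value()) for (r, t) in x})
```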

    Exploring Path Computation Techniques in Software-Defined Networking: A Review and Performance Evaluation of Centralized, Distributed, and Hybrid Approaches

    Software-Defined Networking (SDN) is a networking paradigm that allows network administrators to dynamically manage network traffic flows and optimize network performance. One of the key benefits of SDN is the ability to compute and direct traffic along efficient paths through the network. In recent years, researchers have proposed various SDN-based path computation techniques to improve network performance and reduce congestion. This review paper provides a comprehensive overview of SDN-based path computation techniques, including both centralized and distributed approaches. We discuss the advantages and limitations of each approach and provide a critical analysis of the existing literature. In particular, we focus on recent advances in SDN-based path computation techniques, including Dynamic Shortest Path (DSP), Distributed Flow-Aware Path Computation (DFAPC), and Hybrid Path Computation (HPC). We evaluate three SDN-based path computation algorithms: centralized, distributed, and hybrid, focusing on optimal path determination for network nodes. Test scenarios with random graph simulations are used to compare their performance. The centralized algorithm employs global network knowledge, the distributed algorithm relies on local information, and the hybrid approach combines both. Experimental results demonstrate the hybrid algorithm's superiority in minimizing path costs, striking a balance between optimization and efficiency. The centralized algorithm ranks second, while the distributed algorithm incurs higher costs due to limited local knowledge. This research offers insights into efficient path computation and informs future SDN advancements. We also discuss the challenges associated with implementing SDN-based path computation techniques, including scalability, security, and interoperability. Furthermore, we highlight the potential applications of SDN-based path computation techniques in various domains, including data center networks, wireless networks, and the Internet of Things (IoT). Finally, we conclude that SDN-based path computation techniques have the potential to significantly improve network performance and reduce congestion. However, further research is needed to evaluate their effectiveness under different network conditions and traffic patterns. With the rapid growth of SDN technology, we expect continued development and refinement of SDN-based path computation techniques in the future.
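To ground the centralized-versus-distributed contrast in code (a generic illustration, not the DSP, DFAPC, or HPC schemes evaluated in the paper), the sketch below computes the same route twice on a made-up topology: once with global knowledge at the controller via Dijkstra, and once with only neighbor information via Bellman-Ford-style distance-vector relaxation.

```python
# Centralized (global-knowledge) vs. distributed (local-information) path computation.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "d", 4), ("s", "b", 2), ("b", "d", 1), ("a", "b", 1)])

# Centralized: the SDN controller sees the whole graph and runs Dijkstra.
cost, path = nx.single_source_dijkstra(G, "s", "d")
print("centralized:", path, "cost", cost)

# Distributed: each node keeps only its own distance estimate to "d" and
# repeatedly relaxes it using its neighbors' current estimates.
dist = {n: float("inf") for n in G}
dist["d"] = 0.0
for _ in range(len(G) - 1):                      # enough rounds to converge
    for u in G:
        for v in G[u]:
            dist[u] = min(dist[u], G[u][v]["weight"] + dist[v])
print("distributed estimate at s:", dist["s"])   # matches the centralized cost
```

The hybrid approach discussed above essentially aims for the path quality of the first computation with something closer to the information footprint of the second.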