    Knowledge-defined networking : a machine learning based approach for network and traffic modeling

    The research community has long considered applying Machine Learning (ML) techniques to the control and operation of networks; a notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not yet been extensively prototyped or deployed in the field. In this thesis, we explore the reasons for this lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of ML techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA and ML, and present relevant use-cases that illustrate its applicability and benefits. We refer to this new paradigm as Knowledge-Defined Networking (KDN). In this context, ML can be used as a network modeling technique to build models that estimate network performance. Network modeling is central to many networking functions, for instance in the field of optimization. One of the objectives of this thesis is to answer the following question: can neural networks accurately model the performance of a computer network as a function of the input traffic? We focus mainly on modeling the average delay, but also on estimating jitter and packet loss. For this, we treat the network as a black box whose input is a traffic matrix and whose output is the desired performance matrix. We then train different regressors, including deep neural networks, and evaluate their accuracy under different fundamental network characteristics: topology, size, traffic intensity and routing. Moreover, we also study the impact of having multiple traffic flows between each pair of nodes. We also explore the use of ML techniques in other network-related fields. One relevant application is traffic forecasting: accurate forecasts enable scaling resources up or down to efficiently accommodate the traffic load. Such models are typically based on traditional ARMA or ARIMA time-series models. We propose a new methodology that combines an ARIMA model with an artificial neural network (ANN). The neural network greatly improves the ARIMA estimation by modeling complex and nonlinear dependencies, particularly for outliers. To train the neural network and improve the estimation of outliers, we use external information: weather, events, holidays, etc. The main hypothesis is that network traffic depends on the behavior of the end-users, which in turn depends on external factors. We evaluate the accuracy of our methodology using real-world data from an egress Internet link of a campus network. The analysis shows that the model works remarkably well, outperforming standard ARIMA models. Another relevant application is Network Function Virtualization (NFV). The NFV paradigm makes networks more flexible by using Virtual Network Functions (VNFs) instead of dedicated hardware. The main advantage is the flexibility offered by these virtual elements; however, the use of virtual nodes makes such networks harder to model. This problem may also be addressed with ML techniques, both to model and to control such networks. As a first step, we focus on modeling the performance of single VNFs as a function of the input traffic. In this thesis, we demonstrate that the CPU consumption of a VNF can be estimated solely as a function of the input traffic characteristics.
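    As a hedged illustration of the black-box modeling idea above, the sketch below fits a neural-network regressor that maps a flattened traffic matrix to a flattened mean-delay matrix. The topology size, the M/M/1-like synthetic ground truth, and all hyperparameters are our own assumptions for the example, not the thesis's actual datasets or models.

    # Illustrative sketch: treat the network as a black box and learn
    # traffic matrix -> performance matrix with a regressor. Synthetic data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_nodes, n_samples = 5, 2000
    X = rng.uniform(0.1, 1.0, size=(n_samples, n_nodes * n_nodes))  # traffic matrices
    # Hypothetical ground truth: delay grows nonlinearly with load (M/M/1-like).
    y = 1.0 / (1.2 - X) + 0.05 * rng.standard_normal(X.shape)       # delay matrices

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    print("R^2 on held-out traffic matrices:", model.score(X_te, y_te))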

    Moving Target Defense based Secured Network Slicing System in the O-RAN Architecture

    The open radio access network (O-RAN) architecture's native virtualization and embedded intelligence facilitate RAN slicing and enable comprehensive end-to-end services in post-5G networks. However, vulnerabilities in its artificial intelligence (AI) and machine learning (ML) components can undermine the very benefits O-RAN provides. This paper proposes a novel approach to estimating the optimal number of predefined VNFs for each slice while addressing secure AI/ML methods for dynamic service admission control and power minimization in the O-RAN architecture. We solve this problem on two time scales: mathematical methods determine the predefined number of VNFs on a large time scale, while proximal policy optimization (PPO), a deep reinforcement learning algorithm, handles dynamic service admission control and power minimization for the different slices on a small time scale. To secure the ML system for O-RAN, we implement a moving target defense (MTD) strategy that prevents poisoning attacks by adding uncertainty to the system. Our experimental results show that the proposed PPO-based service admission control approach achieves an admission rate above 80% and that the MTD strategy effectively strengthens the robustness of the PPO method against adversarial attacks.
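    One way to picture the MTD idea is to rotate, at random, which member of an ensemble of independently trained policies answers each admission decision, so an attacker cannot reliably probe or poison a single fixed model. The sketch below is our own minimal illustration under that assumption; the class name, rotation rule, and stand-in policies are hypothetical, not the paper's exact mechanism.

    import random

    class MTDPolicyPool:
        """Moving-target defense by random rotation over a policy ensemble."""
        def __init__(self, policies, rotate_prob=0.5):
            self.policies = policies        # e.g., several trained PPO policies
            self.rotate_prob = rotate_prob
            self.active = random.choice(policies)

        def act(self, observation):
            if random.random() < self.rotate_prob:  # inject uncertainty
                self.active = random.choice(self.policies)
            return self.active(observation)

    # Stand-in policies; real ones would map slice state -> admit/reject.
    pool = MTDPolicyPool([lambda obs: "admit", lambda obs: "reject"])
    print(pool.act({"slice_load": 0.7}))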

    Management and orchestration of virtual network functions via deep reinforcement learning

    Management and orchestration (MANO) of resources by virtual network functions (VNFs) represents one of the key challenges towards a fully virtualized network architecture as envisaged by 5G standards. Current threshold-based policies inefficiently over-provision network resources and under-utilize available hardware, incurring high cost for network operators and, consequently, the users. In this work, we present a MANO algorithm for VNFs allowing a central unit (CU) to learn to autonomously re-configure resources (processing power and storage), deploy new VNF instances, or offload them to the cloud, depending on the network conditions, the available pool of resources, and the VNF requirements, with the goal of minimizing a cost function that takes into account the economic cost as well as the latency and quality-of-service (QoS) experienced by the users. First, we formulate the stochastic resource optimization problem as a parameterized action Markov decision process (PAMDP). Then, we propose a solution based on deep reinforcement learning (DRL). More precisely, we present a novel RL approach, called parameterized action twin (PAT) deterministic policy gradient, which leverages an actor-critic architecture to learn to provision resources to the VNFs in an online manner. Finally, we present numerical performance results and map them to 5G key performance indicators (KPIs). To the best of our knowledge, this is the first work that considers DRL for MANO of VNFs' physical resources
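    To make the PAMDP formulation concrete, the toy actor below first scores the discrete actions (reconfigure, deploy a new instance, offload to the cloud) and then produces a continuous parameter for the chosen one, e.g., how much processing power to assign. The linear scoring, the four-feature state, and the action names are illustrative assumptions, not the PAT algorithm itself.

    import numpy as np

    DISCRETE_ACTIONS = ["reconfigure", "deploy_instance", "offload_to_cloud"]

    def actor(state, weights_d, weights_c):
        """Toy parameterized-action actor: (discrete action, parameter in [0, 1])."""
        logits = state @ weights_d                     # score each discrete action
        k = int(np.argmax(logits))
        param = float(1.0 / (1.0 + np.exp(-(state @ weights_c[k]))))  # sigmoid
        return DISCRETE_ACTIONS[k], param

    rng = np.random.default_rng(0)
    state = rng.standard_normal(4)            # e.g., load, queue length, price, QoS
    w_d = rng.standard_normal((4, len(DISCRETE_ACTIONS)))
    w_c = rng.standard_normal((len(DISCRETE_ACTIONS), 4))
    print(actor(state, w_d, w_c))             # e.g., ('deploy_instance', 0.42)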

    Dynamic Prioritization and Adaptive Scheduling using Deep Deterministic Policy Gradient for Deploying Microservice-based VNFs

    The Network Function Virtualization (NFV) Resource Allocation (RA) problem is NP-hard. Traditional deployment methods suffer from a starvation problem that previous research has failed to recognize: low-priority services face long waiting times and are eventually rejected due to a time-out. The contribution of this work is threefold: a) we explain the existence of the starvation problem in existing methods and their drawbacks; b) we introduce 'Adaptive Scheduling' (AdSch), an intelligent scheduling scheme using a three-factor approach (priority, threshold waiting time, and reliability) that proves more reasonable than traditional methods based solely on priority; and c) we propose a 'Dynamic Prioritization' (DyPr) allocation method for unseen services, highlighting the importance of macro- and micro-level priority. We present a zero-touch solution using Deep Deterministic Policy Gradient (DDPG) for adaptive scheduling and an online Ridge Regression (RR) model for dynamic prioritization. The DDPG agent successfully identifies 'beneficial and starving' services, efficiently deploying twice as many low-priority services as other methods and thereby reducing starvation. Our online RR model learns the pattern in fewer than 100 transitions, and the prediction model has an accuracy rate of more than 80%
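    The sketch below shows one plausible reading of the three-factor idea: rank queued requests by a weighted mix of priority, urgency (how close the request is to its waiting-time threshold), and required reliability, so a low-priority request nearing time-out can overtake a fresh high-priority one. The weights and the scoring formula are our own assumptions, not the paper's learned DDPG policy.

    from dataclasses import dataclass

    @dataclass
    class Request:
        priority: float     # 0..1, higher = more important
        waited: float       # seconds already spent in the queue
        timeout: float      # seconds until the request is dropped
        reliability: float  # 0..1, required service reliability

    def score(r: Request, w=(0.4, 0.4, 0.2)):
        urgency = min(r.waited / r.timeout, 1.0)  # approaches 1 near time-out
        return w[0] * r.priority + w[1] * urgency + w[2] * r.reliability

    queue = [Request(0.9, 5, 100, 0.5), Request(0.2, 90, 100, 0.9)]
    queue.sort(key=score, reverse=True)  # the starving low-priority job now wins
    print([round(score(r), 2) for r in queue])  # [0.62, 0.48]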

    Advanced Scaling Methods for VNF deployment with Reinforcement Learning

    Network function virtualization (NFV) and software-defined networking (SDN) have become emerging network paradigms, allowing virtualized network function (VNF) deployment at low cost. Even though VNF deployment can be flexible, optimizing it remains challenging due to its high complexity. Several studies have formulated the task as a mathematical optimization problem, e.g., integer linear programming (ILP); however, optimizing VNF deployment for highly complex networks remains a challenge. Alternatively, reinforcement learning (RL) based approaches have been proposed, especially scaling action-based methods that can deploy VNFs in less computational time. However, the model architecture can be improved further to generalize to different network settings. In this paper, we propose an enhanced model that can be adapted to more general network settings. We adopt an improved graph neural network (GNN) architecture and several techniques to obtain a better node representation for the VNF deployment task. Furthermore, we apply a recently proposed RL method, phasic policy gradient (PPG), to leverage the shared representation of the service function chain (SFC) generation model from the value function. We evaluate the proposed method in various scenarios, achieving better QoS with minimum resource utilization compared to previous methods. Finally, as a qualitative evaluation, we analyze our proposed encoder's node representations, which prove to be more disentangled.
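    As a generic illustration of how GNN message passing yields node representations for such deployment tasks (not the paper's enhanced architecture), each node's embedding below is updated from the mean of its neighbors' embeddings; the toy topology and dimensions are assumptions.

    import numpy as np

    def gnn_layer(H, A, W):
        """One round of mean-aggregation message passing with a ReLU update."""
        deg = A.sum(axis=1, keepdims=True).clip(min=1)  # avoid division by zero
        messages = (A @ H) / deg                        # mean over neighbors
        return np.maximum(0.0, (H + messages) @ W)      # combine and transform

    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 1, 0],   # adjacency of a toy 4-node topology
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = rng.standard_normal((4, 8))                     # initial node features
    W = rng.standard_normal((8, 8)) / np.sqrt(8)
    for _ in range(2):                                  # two propagation rounds
        H = gnn_layer(H, A, W)
    print(H.shape)                                      # (4, 8) node embeddings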

    Virtual Network Function Embedding under Nodal Outage using Reinforcement Learning


    Performance analysis of selected machine learning techniques for estimating resource requirements of virtual network functions (VNFs) in software defined networks

    Rapid development in the field of computer networking now demands the application of Machine Learning (ML) techniques in traditional settings to improve efficiency and bring automation to these networks. Applying ML to existing networks brings many challenges and use-cases. In this context, we investigate different ML techniques to estimate the resource requirements of complex network entities such as Virtual Network Functions (VNFs) deployed in a Software Defined Networking (SDN) environment. In particular, we focus on the resource requirements of VNFs in terms of Central Processing Unit (CPU) consumption when they process input traffic represented by features. We propose supervised ML models, Multiple Linear Regression (MLR) and Support Vector Regression (SVR), which are compared and analyzed against the state-of-the-art Fitting Neural Networks (FNN), to answer the resource requirement problem for VNFs. Our experiments demonstrate that the behavior of different VNFs can be learned in order to model their resource requirements. These models are compared and analyzed in terms of regression accuracy and the Cumulative Distribution Function (CDF) of the percentage prediction error. In all the investigated cases, the ML models achieved good prediction accuracy: the total error was less than 10% for FNN, and less than 9% and 4% for MLR and SVR, respectively, which shows the effectiveness of ML in solving such problems. Furthermore, the results show that SVR outperforms MLR and FNN in almost all the considered scenarios, while MLR is marginally more accurate than FNN
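    A minimal sketch of the MLR-versus-SVR comparison, on synthetic data (the paper uses measured VNF traffic features and also evaluates FNN, which we omit): both regressors predict CPU consumption from input-traffic features, and we report the median percentage prediction error. The feature names and the hypothetical load function are assumptions for the example.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.uniform(size=(n, 3))          # e.g., packet rate, avg size, flow count
    cpu = 20 * X[:, 0] + 5 * X[:, 1] ** 2 + 3 * X[:, 0] * X[:, 2]  # hypothetical
    cpu += rng.normal(scale=0.5, size=n)  # measurement noise

    X_tr, X_te, y_tr, y_te = train_test_split(X, cpu, test_size=0.25, random_state=0)
    for name, model in [("MLR", LinearRegression()), ("SVR", SVR(C=10.0))]:
        model.fit(X_tr, y_tr)
        err = np.abs(model.predict(X_te) - y_te) / np.abs(y_te) * 100
        print(f"{name}: median % prediction error = {np.median(err):.2f}")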