
    Traffic-Profile and Machine Learning Based Regional Data Center Design and Operation for 5G Network

    Data centers in the fifth generation (5G) network will serve as facilitators in moving the wireless communication industry from a proprietary hardware-based approach to a more software-oriented environment. Techniques such as software-defined networking (SDN) and network function virtualization (NFV) make it possible to deploy network functionalities such as serving and packet gateways as software. These virtual functionalities, however, require computational power from data centers. Therefore, these data centers need to be properly placed and carefully dimensioned based on the volume of traffic they are meant to serve. In this work, we first divide the city of Milan, Italy into zones using the K-means clustering algorithm. We then analyse the traffic profiles of these zones using a network operator's Open Big Data set. We formulate the optimal placement of data centers as a facility location problem and propose the use of Weiszfeld's algorithm to solve it. Furthermore, based on our analysis of the traffic profiles in the different zones, we heuristically determine the ideal dimension of the data center in each zone. Additionally, to aid operation and facilitate dynamic utilization of data center resources, we use state-of-the-art recurrent neural network models to predict future traffic demands from the past demand profiles of each area.
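
    As a rough illustration of the placement step, the following is a minimal sketch of Weiszfeld's iteration for locating a single facility at the traffic-weighted geometric median of zone centroids; the zone coordinates, traffic weights, and convergence tolerance are assumptions for the example, not values from the paper.

```python
# Minimal sketch of Weiszfeld's algorithm: place one data center at the
# traffic-weighted geometric median of zone centroids. All inputs are illustrative.
import numpy as np

def weiszfeld(points, weights, tol=1e-6, max_iter=1000):
    """Iteratively re-weighted centroid update converging to the geometric median."""
    y = np.average(points, axis=0, weights=weights)  # start at the weighted mean
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)            # avoid division by zero at a data point
        w = weights / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Example: hypothetical zone centroids (km grid) weighted by aggregate traffic volume.
zones = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [6.0, 4.0]])
traffic = np.array([10.0, 3.0, 7.0, 5.0])
print(weiszfeld(zones, traffic))
```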

    Mechanisms for control and management of 5G networks: operator networks

    In 5G networks, time-series data will be omnipresent in the monitoring of network metrics. With the increase in the number of Internet of Things (IoT) devices in the coming years, the number of real-time time-series data streams is expected to grow at a fast pace. To monitor those streams, and to test and correlate different algorithms and metrics simultaneously and seamlessly, time-series forecasting is becoming essential for successful proactive management of the network. The objective of this dissertation is to design, implement and test a prediction system for a communication network that integrates several networks, such as a vehicular network and a 4G operator network, to improve network reliability and Quality of Service (QoS). To that end, the dissertation has three main goals: (1) the analysis of different network datasets and the implementation of different approaches to forecast network metrics, in order to test different techniques; (2) the design and implementation of a real-time distributed time-series forecasting architecture, enabling the network operator to make predictions about network metrics; and (3) the application of the forecasting models built previously to improve network performance through resource management policies. Tests with two different datasets, addressing the use cases of congestion management and resource splitting in a network with a limited number of resources, show that network performance can be improved by proactive management carried out by a real-time system able to predict network metrics and act on the network accordingly. A study is also carried out on which network metrics can cause reduced accessibility in 4G networks, so that the network operator can act more efficiently and proactively to avoid such events.
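
    As a hedged sketch of one building block such a system might use (not the dissertation's actual architecture), the snippet below trains a small LSTM on sliding windows of a single synthetic network metric and produces a one-step-ahead forecast; the window length, layer sizes, and the synthetic series are assumptions.

```python
# Sliding-window one-step-ahead forecasting of a network metric with an LSTM.
# The synthetic series stands in for a real metric such as per-cell throughput.
import numpy as np
from tensorflow import keras

def make_windows(series, window=12):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y  # shape (samples, timesteps, features=1)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 40, 600)) + 0.1 * rng.standard_normal(600)  # synthetic metric
X, y = make_windows(series)

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1], 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_value = model.predict(X[-1:], verbose=0)  # one-step-ahead forecast
print(float(next_value[0, 0]))
```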

    Study and application of machine learning techniques to the deployment of services on 5G optical networks

    The vision of the future 5G corresponds to a highly heterogeneous network at different levels; the increase in the number of service requests in 5G networks imposes several technical challenges. In the 5G context, several machine learning-based approaches have been demonstrated in recent years as useful tools for easing network management, considering that unexpected events may mean that services cannot be satisfied at the moment they are requested. Such approaches are usually referred to as cognitive network management. Many parameters inside the 5G network affect each layer of the network; the virtualization and abstraction of services is a crucial part of satisfactory service deployment, with the monitoring and control of the different planes being the two keys of cognitive network management. This project addresses the implementation of a simulated data collector as well as the study of several machine learning-based approaches. In this way, possible future performance can be predicted, giving the system the ability to change the initial parameters and adapt the network to future demands.
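
    Purely as an illustration of the kind of prediction this enables (not the project's actual data collector or models), the sketch below generates simulated monitoring samples and fits a regressor to predict a quality indicator before a service is deployed; the feature names and the synthetic generator are assumptions.

```python
# Simulated monitoring data + regressor predicting a quality indicator for a
# candidate service deployment. Features and target are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
load = rng.uniform(0, 1, n)            # normalised link load
requests = rng.poisson(20, n)          # service requests per interval
occupancy = rng.uniform(0, 1, n)       # spectrum occupancy on the optical path
# synthetic quality indicator that degrades with load and occupancy
target = 0.5 * load + 0.3 * occupancy + 0.01 * requests + 0.05 * rng.standard_normal(n)

X = np.column_stack([load, requests, occupancy])
X_tr, X_te, y_tr, y_te = train_test_split(X, target, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out monitoring samples:", round(model.score(X_te, y_te), 3))
```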

    AI-driven, Context-Aware Profiling for 5G and Beyond Networks

    In the era of the Industrial Internet of Things (IIoT) and Industry 4.0, an immense volume of heterogeneous network devices will coexist and contend for shared network resources in order to satisfy very challenging IIoT applications requiring ultra-reliable and ultra-low-latency communications. Although novel key enablers such as Network Slicing, Software Defined Networking (SDN) and Network Function Virtualization (NFV) have already offered significant advantages towards more efficient and flexible network and resource management approaches, the particular characteristics of IIoT applications pose additional burdens, mainly due to the complex wireless environments and the high number of heterogeneous network devices, sensors, user equipments (UEs), etc., which may stochastically demand and contend for the (often scarce) computing and communication resources of industrial environments. To this end, this paper introduces PRIMATE, a novel Artificial Intelligence (AI)-driven framework for profiling the networking behavior of such UEs, devices, users and things, which is able to operate in conjunction with already standardized or forthcoming AI-based network resource management processes towards further gains. The novelty and potential of the proposed work lie in the fact that, instead of attempting to either predict raw network metrics in a reactive manner or predict the behavior of specific network entities/devices in an isolated manner, a big data-driven classification approach is introduced which models the behavior of any network device/user from both a macroscopic and a service-specific perspective. The extended evaluation in the last part of this work shows the validity and viability of the proposed framework. This work has been partially supported by the EC H2020 5GPPP 5Growth project (Grant 856709).
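
    The following is a minimal, hypothetical illustration of behaviour profiling in this spirit, not PRIMATE itself: per-device aggregate traffic features are clustered into behavioural profiles that a resource-management loop could then consume. The feature set and the number of profiles are assumptions for the sketch.

```python
# Cluster synthetic per-device traffic aggregates into behavioural profiles.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_devices = 500
# per-device aggregates over an observation window (all synthetic)
mean_uplink_kbps = rng.gamma(2.0, 200.0, n_devices)
burstiness = rng.uniform(0, 1, n_devices)            # ratio of peak to mean rate
active_share = rng.beta(2, 5, n_devices)             # fraction of time with traffic
latency_sensitive = rng.integers(0, 2, n_devices)    # uses a URLLC-type service

X = StandardScaler().fit_transform(
    np.column_stack([mean_uplink_kbps, burstiness, active_share, latency_sensitive])
)
profiles = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)
print("devices per behavioural profile:", np.bincount(profiles))
```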

    Resource management with adaptive capacity in C-RAN

    Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization tradeoffs. This work proposes the use of a modified and improved version of the realistic Vienna Scenario, defined in COST Action IC1004, to test two C-RAN deployments of different scales. First, a large-scale analysis with 628 macro-cells (Mcells) and 221 small-cells (Scells) is used to test different algorithms oriented to optimizing the network deployment by minimizing delays, balancing the load among the Base Band Unit (BBU) pools, or clustering the Remote Radio Heads (RRHs) efficiently to maximize the multiplexing gain. After planning, real-time resource allocation strategies with Quality of Service (QoS) constraints should be optimized as well. To do so, a realistic small-scale scenario for the metropolitan area is defined by modeling the individual time-variant traffic patterns of 7000 users (UEs) connected to different services. The distribution of resources among UEs and BBUs is optimized by algorithms, based on a realistic calculation of the UEs' Signal to Interference and Noise Ratios (SINRs), that account for the required computational capacity per cell, the QoS constraints and the service priorities. However, the assumption of a fixed computational capacity at the BBU pools may result in underutilized or oversubscribed resources, thus affecting the overall QoS. As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC). For this reason, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average amount of unused resources by 96%, but there is still QoS degradation when the RCC is higher than the predicted computational capacity (PCC). For this reason, two new strategies are proposed and tested: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), reducing the average amount of unsatisfied resources by 99.9% and 98% compared to DRM-AC, respectively. This work was supported in part by the Spanish Ministry of Science through project RTI2018-099880-B-C32, with ERDF funds, and by the FPI-UPC grant provided by the UPC. It has been carried out under the COST CA15104 IRACON EU project.
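
    As a hedged sketch of the error-shifting idea only (not the paper's SVM/TDNN/LSTM predictors), the snippet below provisions capacity equal to a naive one-step prediction of the required computational capacity plus a margin derived from recent under-prediction errors; the synthetic RCC trace, the persistence predictor, and the 95th-percentile margin are assumptions.

```python
# Provision capacity as predicted capacity (PCC) plus an error-shifting margin,
# so the required computational capacity (RCC) is rarely unmet. All data synthetic.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1000)
# synthetic RCC trace with a daily-like pattern plus noise
rcc = 100 + 40 * np.sin(2 * np.pi * t / 144) + 5 * rng.standard_normal(1000)

pcc = np.roll(rcc, 1)          # naive one-step predictor: last observed value
pcc[0] = rcc[0]
errors = rcc - pcc             # positive error = under-provisioning

window = 144                   # margin from recent under-prediction errors
margin = np.array([
    np.percentile(np.clip(errors[max(0, i - window):i + 1], 0, None), 95)
    for i in range(len(errors))
])
provisioned = pcc + margin     # shifted allocation in the spirit of DRM-AC-ES

unsatisfied = np.mean(provisioned < rcc)
print(f"fraction of intervals with unsatisfied capacity: {unsatisfied:.3f}")
```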

    Machine Learning Threatens 5G Security

    Machine learning (ML) is expected to solve many challenges in the fifth generation (5G) of mobile networks. However, ML will also open the network to several serious cybersecurity vulnerabilities. Most of the learning in ML happens through data gathered from the environment. Unscrutinized data will have serious consequences for machines absorbing the data to produce actionable intelligence for the network. Scrutinizing the data, on the other hand, opens privacy challenges. Unfortunately, most ML systems are borrowed from other disciplines, where they provide excellent results in small, closed environments. The resulting deployment of such ML systems in 5G can inadvertently open the network to serious security challenges such as unfair use of resources, denial of service, and leakage of private and confidential information. Therefore, in this article we dig into the weaknesses of the most prominent ML systems that are currently vigorously researched for deployment in 5G. We further classify and survey solutions for avoiding such pitfalls of ML in 5G systems.
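
    To make the risk of unscrutinized training data concrete, the toy example below (an assumption-laden illustration, not taken from the article) flips a fraction of training labels and compares a classifier trained on clean data with one trained on poisoned data.

```python
# Label-flipping poisoning demo: compare clean vs. poisoned training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]              # attacker flips 30% of labels

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("accuracy with clean data:   ", round(clean.score(X_te, y_te), 3))
print("accuracy with poisoned data:", round(poisoned.score(X_te, y_te), 3))
```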

    Network Traffic Classification in an NFV Environment Using Supervised ML Algorithms (Journal of Telecommunications and Information Technology, 2021, no. 3)

    We have conducted research on the performance of six supervised machine learning (ML) algorithms used for network traffic classification in a virtual environment driven by network function virtualization (NFV). The performance-related analysis focused on the precision of the classification process, but also on the time intensity (speed) of the supervised ML algorithms. We devised a specific traffic taxonomy using commonly used categories, with particular emphasis placed on VoIP and encrypted VoIP protocols, which serve as a basis of the 5G architecture. NFV is considered to be one of the foundations of 5G development, as the traditional networking components are fully virtualized, in many cases relying on mixed cloud solutions, of both the on-premises and public cloud-based variety. Virtual machines are being replaced by containers and application functions, while most of the network traffic flows in the east-west direction within the cloud. The analysis performed has shown that in such an environment, the Decision Tree algorithm is best suited, among the six algorithms considered, for performing classification-related tasks, and offers the required speed that will introduce minimal delays in network flows, which is crucial in 5G networks, where packet delay requirements are of great significance. It has proven to be reliable and offered excellent overall performance across multiple network packet classes within a virtualized NFV network architecture. While performing the classification procedure, we worked only with statistical network flow features, leaving out packet payload, source, destination and port-related information, thus making the analysis valid not only from the technical, but also from the regulatory point of view.
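
    A minimal sketch in the spirit of the study, with synthetic flow records standing in for real traffic: a Decision Tree classifies flows into a few hypothetical traffic classes using only statistical flow features (no payload, address, or port information). The feature distributions and class set are assumptions.

```python
# Decision Tree classification of flows from statistical features only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
n = 3000
classes = np.array(["voip", "video", "web"])
y = rng.integers(0, 3, n)

# statistical per-flow features, loosely shaped per class (synthetic)
mean_pkt_len = np.array([160, 1200, 600])[y] + 30 * rng.standard_normal(n)
mean_iat_ms = np.array([20, 5, 50])[y] * rng.gamma(2.0, 0.5, n)
flow_duration = np.array([60, 300, 15])[y] * rng.uniform(0.5, 1.5, n)
X = np.column_stack([mean_pkt_len, mean_iat_ms, flow_duration])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=classes))
```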

    A Smart System for Future Generation based on the Internet of Things Employing Machine Learning, Deep Learning, and Artificial Intelligence: Comprehensive Survey

    The Internet of Things (IoT) is a networked system of interconnected things, devices, and networks that use the internet for communication and data exchange. These entities interact with both their internal and external surroundings. The IoT is capable of sensing the surrounding environment and responding in an appropriate and adaptive way. The utilization of advanced technology in this context enhances the environment and thus the overall well-being of humanity. The IoT facilitates inter-device communication, whether through physical or virtual means, and enhances environmental intelligence, enabling seamless connectivity across many devices at any given moment. Concepts centred on the IoT, such as augmented reality, high-resolution video streaming, autonomous vehicles, intelligent environments, and electronic healthcare, have become pervasive in contemporary society. These applications require faster data rates, larger bandwidths, enhanced capacities, decreased latencies, and increased throughputs. The IoT and machine learning (ML) are among the fields of research that have shown significant potential for advancement, and ML and the IoT are used together to build intelligent systems. Those systems will modify the ways in which worldwide entities exchange information. This article gives a comprehensive survey of the upcoming 5G-IoT scenario, as well as a study of IoT smart system applications and usages. In addition to covering the latest developments in ML and deep learning (DL) and their impact on 5G-IoT, this article presents a comprehensive study of these important enabling technologies and the developing use cases of 5G-IoT.