
    Efficient Service for Next Generation Network Slicing Architecture and Mobile Traffic Analysis Using Machine Learning Technique

    The tremendous growth of mobile devices, IoT devices, applications, and many other services has placed high demands on mobile and wireless network infrastructures. Much of the research and development on 5G mobile networks has sought ways to support the huge volume of traffic, the extraction of fine-grained analytics, and the agile management of mobile network elements in order to maximize the user experience. Accomplishing these tasks is challenging because mobile networks grow in complexity as data penetration, devices, and applications increase. Advanced machine learning techniques are one solution, helping to cope with the large volumes of data and the algorithm-driven applications. This work focuses on an extensive analysis of mobile traffic to improve performance, key performance indicators, and quality of service from an operations perspective. The work includes collecting datasets and log files with different kinds of tools at different network layers and applying machine learning techniques to analyze the datasets and predict mobile traffic activity. A wide range of algorithms was implemented and compared in order to identify the best-performing one. Moreover, this thesis also discusses the network slicing architecture, its use cases, and how to use network slicing efficiently to meet distinct demands.
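
    The abstract does not name the algorithms that were compared; as an illustration only, the following minimal sketch (assuming a tabular feature set of per-cell traffic counters and a hypothetical activity label, with placeholder file and column names) shows how several scikit-learn classifiers could be benchmarked side by side to pick the best performer.

```python
# Hypothetical sketch: comparing several classifiers on a mobile-traffic dataset.
# File name and label column are placeholders, not taken from the thesis.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("mobile_traffic_features.csv")   # placeholder dataset
X = df.drop(columns=["activity_label"])           # placeholder label column
y = df["activity_label"]

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

# 5-fold cross-validated accuracy for each candidate model
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```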

    A Spectrogram Image-Based Network Anomaly Detection System Using Deep Convolutional Neural Network

    The dynamics of computer networks have changed rapidly over the past few years due to a tremendous increase in the volume of connected devices and the corresponding applications. This growth in the network's size, and our dependence on it for all aspects of our lives, has resulted in many attacks on the network by malicious parties that are either novel or mutations of older attacks. These attacks pose many challenges for network security personnel trying to protect computer and network nodes and the corresponding data from possible intrusions. A network intrusion detection system (NIDS) can act as one of the efficient security solutions by constantly monitoring the network traffic to secure the entry points of a network. Despite enormous efforts by researchers, NIDS still suffers from a high false alarm rate (FAR) in detecting novel attacks. In this paper, we propose a novel NIDS framework based on a deep convolutional neural network that utilizes network spectrogram images generated using the short-time Fourier transform. To test the efficiency of our proposed solution, we evaluated it using the CIC-IDS2017 dataset. The experimental results show an improvement of about 2.5%-4% in accurately detecting intrusions compared to other deep learning (DL) algorithms, while at the same time reducing the FAR by 4.3%-6.7% in the binary classification scenario. We also observed its efficiency in a 7-class classification scenario, achieving almost 98.75% accuracy with a 0.56%-3.72% improvement compared to other DL methodologies.
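
    The paper's exact preprocessing pipeline is not reproduced here; the sketch below only illustrates the general idea of turning a traffic time series into a spectrogram image with the short-time Fourier transform and feeding it to a small convolutional classifier. The window length, image size, and network depth are assumptions, not the paper's settings.

```python
# Illustrative sketch: STFT spectrogram of a traffic signal fed to a small CNN.
# All hyperparameters are assumptions, not the paper's configuration.
import numpy as np
from scipy.signal import stft
import tensorflow as tf

# Example signal: packets/bytes per time step for one flow window (synthetic here)
signal = np.random.rand(1024).astype(np.float32)

# Short-time Fourier transform -> magnitude spectrogram (freq bins x time frames)
_, _, Z = stft(signal, nperseg=64, noverlap=32)
spectrogram = np.abs(Z)
x = spectrogram[np.newaxis, ..., np.newaxis]   # add batch and channel dimensions

# Small CNN for binary classification (benign vs. attack)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=x.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
print(model.predict(x))   # untrained forward pass, used only to check shapes
```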

    From statistical- to machine learning-based network traffic prediction

    Nowadays, due to the exponential and continuous expansion of new paradigms such as the Internet of Things (IoT), the Internet of Vehicles (IoV), and 6G, the world is witnessing a tremendous and sharp increase in network traffic. In such large-scale, heterogeneous, and complex networks, the volume of transferred data, as big data, is considered a challenge that causes various networking inefficiencies. To overcome these challenges, various techniques have been introduced to monitor the performance of networks, collectively called Network Traffic Monitoring and Analysis (NTMA). Network Traffic Prediction (NTP) is a significant subfield of NTMA which mainly focuses on predicting the future network load and its behavior. NTP techniques can generally be realized in two ways, that is, statistical- and Machine Learning (ML)-based. In this paper, we provide a study of existing NTP techniques by reviewing, investigating, and classifying the recent relevant works conducted in this field. Additionally, we discuss the challenges and future directions of NTP, showing how ML and statistical techniques can be used to solve the challenges of NTP.
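
    As a toy illustration of the two families the survey contrasts, the sketch below fits a classical ARIMA model and a simple ML regressor to the same synthetic traffic series; the model order, lag window, and data are arbitrary assumptions, not recommendations from the paper.

```python
# Toy contrast of statistical vs. ML-based traffic prediction on synthetic data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.arange(500)
traffic = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)  # daily cycle + noise
train, test = traffic[:480], traffic[480:]

# Statistical approach: ARIMA forecast of the last 20 points
arima_fc = ARIMA(train, order=(2, 0, 2)).fit().forecast(steps=len(test))

# ML approach: lag features -> gradient boosting, one-step-ahead prediction
lags = 24
X = np.array([traffic[i - lags:i] for i in range(lags, 480)])
y = traffic[lags:480]
gbr = GradientBoostingRegressor().fit(X, y)
ml_fc = gbr.predict(np.array([traffic[i - lags:i] for i in range(480, 500)]))

for name, fc in (("ARIMA", arima_fc), ("GBR", ml_fc)):
    print(name, "RMSE:", np.sqrt(np.mean((fc - test) ** 2)))
```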

    Potential of machine learning/Artificial Intelligence (ML/AI) for verifying configurations of 5G multi Radio Access Technology (RAT) base station

    Abstract. The enhancements in mobile networks from 1G to 5G have greatly increased data transmission reliability and speed. However, concerns with 5G must be addressed. As system performance and reliability improve, ML and AI integration in products and services becomes more common. The integration teams in cellular network equipment development test devices end to end to ensure that the hardware and software parts function correctly. Radio unit integration is typically the first integration phase, where the radio is tested independently, without additional network components such as the BBU and UE. The 5G architecture and the technologies it uses are explained further. The architecture defined by 3GPP for 5G differs from previous generations, using Network Functions (NFs) instead of network entities. This service-based architecture offers NF reusability to reduce costs and modularity, allowing the best vendor options for customer radio products. 5G introduced the O-RAN concept to decompose the RAN architecture, allowing for increased speed, flexibility, and innovation. NG-RAN provided this solution to speed up the development and implementation of 5G. The O-RAN concept aims to improve the efficiency of the RAN by breaking it down into components, allowing for more agility and customization. The four protocols, the eCPRI interface, and the fronthaul functionalities that NG-RAN follows are described further. Additionally, the significance of NR is described along with its benefits, which include high data rates, lower latency, improved spectral efficiency, increased network flexibility, and improved energy efficiency. The timeline for 5G development is provided along with the different 3GPP releases. Stand-alone and non-stand-alone architectures are integral to developing the 5G architecture; hence, they are also defined with illustrations. The two frequency bands that NR utilizes, FR1 and FR2, are described further. FR1 is the sub-6 GHz band and contains both low and high frequencies within that range, whereas FR2 contains frequencies above 6 GHz and is commonly known as the mmWave band. Data collection for implementing the ML approaches is then described, covering the test setup, data collection, data description, and data visualization parts of the thesis work. The test PC runs tests, executes test cases using test libraries, and collects data from various logs to analyze the system's performance. The logs contain information about the test results, which can be used to identify issues and evaluate the system's performance. The data were initially present in JSON files, were extracted from them using a Python script, and were then fed into an Excel sheet for further analysis. The data description explains the parameters used to train the models. A Jupyter notebook has been used for visualizing the data, and the visualization is carried out with the help of graphs. Moreover, the ML techniques used for analyzing the data are described. In total, three methods are used, all of which fall under supervised learning: random forest, XGBoost, and LSTM. These three models form the basis of the ML techniques applied in the thesis. The results and discussion section explains the outcomes of the ML models and discusses how the thesis will be used in the future.
The results cover the parameters to which the ML models are applied. SINR, noise power, rxPower, and RSSI are the metrics being monitored. These parameters exhibit variance, which is essential in evaluating the quality of the product test setup, the quality of the software being tested, and the state of the test environment. The discussion section of the thesis explains why these parameters are chosen, which ML model is most appropriate for the data being analyzed, and what the next steps are in implementation.
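
    The thesis's actual log schema and feature set are not public; the sketch below only illustrates, with made-up JSON field names, the kind of pipeline it describes: pulling SINR, noise power, rxPower, and RSSI from JSON test logs with Python, exporting them to an Excel sheet, inspecting their variance, and fitting one of the mentioned supervised models (a random forest).

```python
# Illustrative pipeline with hypothetical JSON field names (the real schema is not shown).
import json
import glob
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

records = []
for path in glob.glob("test_logs/*.json"):        # placeholder log directory
    with open(path) as f:
        log = json.load(f)
    for sample in log["measurements"]:            # hypothetical log structure
        records.append({
            "sinr": sample["sinr"],
            "noise_power": sample["noisePower"],
            "rx_power": sample["rxPower"],
            "rssi": sample["rssi"],
            "test_passed": sample["verdict"] == "PASS",
        })

df = pd.DataFrame(records)
df.to_excel("radio_metrics.xlsx", index=False)    # export for further analysis

# Variance of each monitored metric, as an indicator of setup/software stability
print(df[["sinr", "noise_power", "rx_power", "rssi"]].var())

# Random forest relating the monitored metrics to the test verdict
X_train, X_test, y_train, y_test = train_test_split(
    df[["sinr", "noise_power", "rx_power", "rssi"]], df["test_passed"], random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```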

    A Survey on Machine Learning-based Misbehavior Detection Systems for 5G and Beyond Vehicular Networks

    Advances in Vehicle-to-Everything (V2X) technology and onboard sensors have significantly accelerated the deployment of Connected and Automated Vehicles (CAVs). Integrating V2X with 5G has enabled Ultra-Reliable Low Latency Communications (URLLC) for CAVs. However, while communication performance has been enhanced, security and privacy issues have increased. Attacks have become more aggressive, and attackers have become more strategic. The Public Key Infrastructure (PKI) proposed by standardization bodies cannot solely defend against these attacks. Thus, to complement it, sophisticated systems should be designed to detect such attacks and attackers. Machine Learning (ML) has recently emerged as a key enabler to secure future roads. Various V2X Misbehavior Detection Systems (MDSs) have adopted this paradigm. However, analyzing these systems is a research gap, and developing effective ML-based MDSs is still an open issue. To this end, this paper comprehensively surveys and classifies ML-based MDSs and discusses and analyzes them from security and ML perspectives. It also provides lessons learned and recommendations for guiding the development, validation, and deployment of ML-based MDSs. Finally, the paper highlights open research and standardization issues along with some future directions.

    AI-Based Traffic Forecasting in 5G Network

    Forecasting telecommunication traffic is the foundation for enabling intelligent management features as cellular technologies evolve toward fifth-generation (5G) technology. Since a significant number of network slices are deployed over a 5G network, it is crucial to evaluate the resource requirements of each network slice and how they evolve over time. Mobile network carriers should investigate strategies for network optimization and resource allocation due to the steadily increasing mobile traffic. Network management and optimization strategies will be improved if mobile operators know the cellular traffic demand at a specific time and location beforehand. The most effective techniques nowadays allocate computing resources dynamically based on mobile traffic predicted by machine learning techniques. However, the accuracy of the predictive models is critically important. In this work, we concentrate on forecasting the cellular traffic for the following 24 hours by employing temporal and spatiotemporal techniques, with the goal of improving the efficiency and accuracy of mobile traffic prediction. A set of real-world mobile traffic data is used to assess the efficacy of multiple neural network models in predicting cellular traffic. The fully connected sequential network (FCSN), one-dimensional convolutional neural network (1D-CNN), single-shot learning LSTM (SS-LSTM), and autoregressive LSTM (AR-LSTM) are proposed for the temporal analysis. A 2-dimensional convolutional LSTM (2D-ConvLSTM) model is also proposed in the spatiotemporal framework to forecast cellular traffic over the next 24 hours. The 2D-ConvLSTM model, which can capture spatial relations via convolution operations and temporal dynamics through the LSTM network, is applied after creating geographic grids. The results reveal that FCSN and 1D-CNN have comparable performance in univariate temporal analysis; however, the 1D-CNN is a smaller network with fewer parameters. Another benefit of the proposed 1D-CNN is its lower complexity and execution time for predicting traffic. Also, the 2D-ConvLSTM outperforms the temporal models. The 2D-ConvLSTM model can predict the next 24 hours of internet, SMS, and call traffic with root mean square error (RMSE) values of 75.73, 26.60, and 15.02 and mean absolute error (MAE) values of 52.73, 14.42, and 8.98, respectively, showing better performance than state-of-the-art methods owing to its ability to capture dependencies among variables. It can be argued that this network is capable of being utilized in network management and resource allocation in practical applications.
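
    The paper's exact grid size, horizon handling, and hyperparameters are not reproduced here; the sketch below only shows how a ConvLSTM-based spatiotemporal forecaster of this kind can be assembled in Keras, with all dimensions being placeholder assumptions and synthetic data standing in for the real traffic grids.

```python
# Minimal ConvLSTM2D sketch for grid-based traffic forecasting (all sizes are assumptions).
import numpy as np
import tensorflow as tf

timesteps, rows, cols, channels = 24, 10, 10, 1   # 24 past hours on a 10x10 cell grid
horizon = 24                                      # predict the next 24 hourly values per cell

model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(32, kernel_size=(3, 3), padding="same",
                               return_sequences=False,
                               input_shape=(timesteps, rows, cols, channels)),
    tf.keras.layers.BatchNormalization(),
    # Map the spatial feature map to 24 future values for every grid cell
    tf.keras.layers.Conv2D(horizon, kernel_size=(1, 1), activation="linear"),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic batch: 8 samples of shape (timesteps, rows, cols, channels)
x = np.random.rand(8, timesteps, rows, cols, channels).astype("float32")
y = np.random.rand(8, rows, cols, horizon).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x).shape)   # (8, 10, 10, 24): next-24-hour forecast per cell
```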

    FortisEDoS: A Deep Transfer Learning-Empowered Economical Denial of Sustainability Detection Framework for Cloud-Native Network Slicing

    Network slicing is envisaged as the key to unlocking revenue growth in 5G and beyond (B5G) networks. However, the dynamic nature of network slicing and the growing sophistication of DDoS attacks raise the threat of a stealthy DDoS being reshaped into an Economical Denial of Sustainability (EDoS) attack. EDoS aims at inflicting economic damage on the service provider through increased elastic use of resources. Motivated by the limitations of existing defense solutions, we propose FortisEDoS, a novel framework that aims at enabling elastic B5G services that are impervious to EDoS attacks. FortisEDoS integrates a new deep learning-powered DDoS anomaly detection model, dubbed CG-GRU, that capitalizes on the capabilities of emerging graph and recurrent neural networks in capturing spatio-temporal correlations to accurately discriminate malicious behavior. Furthermore, FortisEDoS leverages transfer learning to effectively defeat EDoS attacks in newly deployed slices by exploiting the knowledge learned in a previously deployed slice. The experimental results demonstrate the superiority of CG-GRU in achieving higher detection performance, above 92%, with lower computational complexity. They also show that transfer learning can yield an attack detection sensitivity above 91%, while accelerating the training process by at least 61%. Further analysis shows that FortisEDoS exhibits intuitive explainability of its decisions, fostering trust in deep learning-assisted systems.
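
    CG-GRU combines graph convolution with GRUs; the graph component is omitted in the minimal sketch below, which only illustrates the transfer-learning step the abstract describes: reusing a GRU-based detector trained on one slice, freezing its recurrent encoder, and fine-tuning only the classifier head on a newly deployed slice. All layer sizes, names, and data are assumptions, not FortisEDoS's actual implementation.

```python
# Hedged sketch of cross-slice transfer learning with a GRU-based anomaly detector.
# The graph-convolution part of CG-GRU is omitted; sizes and names are assumptions.
import numpy as np
import tensorflow as tf

timesteps, n_features = 60, 8            # e.g. 60 time steps of per-slice resource metrics

def build_detector():
    return tf.keras.Sequential([
        tf.keras.layers.GRU(64, input_shape=(timesteps, n_features), name="gru_encoder"),
        tf.keras.layers.Dense(32, activation="relu", name="head_dense"),
        tf.keras.layers.Dense(1, activation="sigmoid", name="head_out"),
    ])

# 1) Train on the source slice (synthetic data stands in for real telemetry)
src_x = np.random.rand(256, timesteps, n_features).astype("float32")
src_y = np.random.randint(0, 2, size=(256, 1))
source_model = build_detector()
source_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
source_model.fit(src_x, src_y, epochs=1, verbose=0)

# 2) Transfer to a newly deployed slice: reuse the weights, freeze the GRU encoder,
#    and fine-tune only the classification head on the smaller target-slice dataset.
target_model = build_detector()
target_model.set_weights(source_model.get_weights())
target_model.get_layer("gru_encoder").trainable = False
target_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

tgt_x = np.random.rand(64, timesteps, n_features).astype("float32")
tgt_y = np.random.randint(0, 2, size=(64, 1))
target_model.fit(tgt_x, tgt_y, epochs=1, verbose=0)
```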