
    Network Traffic Classification using Machine Learning for Software Defined Networks

    Recent developments in industrial automation and connected devices have created a huge demand for network resources. Traditional networks are becoming less effective at handling the large volume of traffic generated by these technologies. At the same time, software-defined networking (SDN) offers a programmable and scalable networking solution that enables machine learning (ML) applications to automate networks. Shortcomings of traditional methods for classifying network traffic and allocating resources can be addressed by SDN. Network data gathered by the SDN controller allows data analytics methods to analyse the network and apply machine learning models to customize its management. This paper focuses on analysing network data, implementing a network traffic classification solution using machine learning, and integrating the model into a software-defined networking platform.
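    As a rough illustration of the kind of pipeline this abstract describes, the sketch below trains a classifier on flow-level statistics such as an SDN controller could export. The feature set, class labels, and choice of a random forest are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of flow-level traffic classification, assuming flow
# statistics collected by an SDN controller have already been assembled
# into a feature matrix. Features and labels below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Stand-in for controller-collected flow features:
# [packets, bytes, duration_s, mean_pkt_size]
X = rng.random((1000, 4))
y = rng.integers(0, 3, size=1000)  # e.g. 0=web, 1=video, 2=voip (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```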

    Traffic Prediction Based on Random Connectivity in Deep Learning with Long Short-Term Memory

    Traffic prediction plays an important role in evaluating the performance of telecommunication networks and attracts intense research interest. A significant number of algorithms and models have been put forward to analyse traffic data and make predictions. In the recent big data era, deep learning has been exploited to mine the profound information hidden in the data. In particular, Long Short-Term Memory (LSTM), a kind of Recurrent Neural Network (RNN), has attracted a lot of attention due to its capability of processing the long-range dependencies embedded in sequential traffic data. However, LSTM has considerable computational cost, which cannot be tolerated in tasks with stringent latency requirements. In this paper, we propose a deep learning model based on LSTM, called Random Connectivity LSTM (RCLSTM). Compared to conventional LSTM, RCLSTM departs notably in how the neural network is formed: neurons are connected in a stochastic manner rather than fully connected. The resulting intrinsic sparsity leaves many neural connections absent, which reduces both the number of parameters to be trained and the computational cost. We apply RCLSTM to traffic prediction and validate that RCLSTM with as little as 35% neural connectivity still shows satisfactory performance. As training samples are gradually added, the performance of RCLSTM comes increasingly close to that of the baseline LSTM. Moreover, for input traffic sequences of sufficient length, RCLSTM exhibits even better prediction accuracy than the baseline LSTM.
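    The core RCLSTM idea, as described, is to drop a random subset of the LSTM's connections before training. A minimal PyTorch sketch of that idea follows; the exact masking scheme used by the authors is an assumption here.

```python
# Sketch of random connectivity: fixed binary masks zero out a fraction
# of the LSTM cell's weights so only ~`connectivity` of the connections
# carry signal. Re-applying the masks each step keeps pruned connections
# absent even as the optimizer updates the dense weight tensors.
import torch
import torch.nn as nn

class RCLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, connectivity=0.35):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.register_buffer(
            "mask_ih", (torch.rand_like(self.cell.weight_ih) < connectivity).float())
        self.register_buffer(
            "mask_hh", (torch.rand_like(self.cell.weight_hh) < connectivity).float())

    def forward(self, x, state):
        with torch.no_grad():  # keep masked connections at zero
            self.cell.weight_ih.mul_(self.mask_ih)
            self.cell.weight_hh.mul_(self.mask_hh)
        return self.cell(x, state)

cell = RCLSTMCell(input_size=1, hidden_size=32, connectivity=0.35)
h, c = torch.zeros(8, 32), torch.zeros(8, 32)
h, c = cell(torch.randn(8, 1), (h, c))  # one time step, batch of 8 sequences
```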

    Short-Term Mobile Data Traffic Forecasting: A Case Study of Kaduna State, Nigeria

    Mobile networks are essential for today's seamless communication. As more individuals subscribe, the demand on mobile networks has increased significantly. Network operators must devise strategies to handle the enormous demand for mobile network resources, such as spectrum, which are costly. There is a need for effective network resource management, as well as a mechanism to predict future network traffic that can be used for network management and planning. This study uses real-life data to forecast mobile traffic, with Kaduna State as a case study, and compares individual prediction algorithms with a hybrid. The data set was obtained from a network provider. A hybrid combination of LSTM and AGB is proposed in this study, and its performance is compared with LSTM and ARIMA using MAE, RMSE and MAPE as evaluation metrics. The prediction performance of the algorithms was assessed on ten base stations with both the highest and lowest traffic from two local government areas, Kaduna South and Kaduna North. LSTM-AGB outperformed LSTM and ARIMA: for all the selected base stations, the RMSE, MAPE and MAE of LSTM-AGB were lower than those of LSTM and ARIMA, indicating a good fit of the model. It was observed that the hybrid algorithm performed better at base stations with high traffic.
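    The three evaluation metrics named above are standard; a minimal sketch of MAE, RMSE and MAPE as used to rank the forecasting models follows (the traffic values are illustrative).

```python
# MAE, RMSE and MAPE for a forecast against observed traffic.
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred, eps=1e-9):
    # Percentage error; eps guards against zero-traffic intervals.
    return 100.0 * np.mean(np.abs((y_true - y_pred) / (y_true + eps)))

y_true = np.array([120.0, 98.0, 143.0, 110.0])   # observed traffic (illustrative units)
y_pred = np.array([115.0, 101.0, 150.0, 108.0])  # model forecast
print(mae(y_true, y_pred), rmse(y_true, y_pred), mape(y_true, y_pred))
```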

    Quadri-dimensional approach for data analytics in mobile networks

    The telecommunication market is growing at a very fast pace with the evolution of new technologies to support high-speed throughput and the availability of a wide range of services and applications in mobile networks. This has led communication service providers (CSPs) to shift their focus from monitoring network elements towards monitoring services and subscriber satisfaction, introducing service quality management (SQM) and customer experience management (CEM). Both require fast responses to reduce the time to find and solve network problems, to ensure efficiency and proactive maintenance, and to improve the quality of service (QoS) and quality of experience (QoE) of subscribers. Since SQM and CEM demand information from multiple interfaces, managing multiple data sources adds an extra layer of complexity to data collection. While several studies on data analytics in mobile networks have been conducted, most did not consider analytics based on the four dimensions involved in the mobile network environment, namely the subscriber, the handset, the service and the network element, with correlation across multiple interfaces. The main objective of this research was to develop mobile network analytics models applied to the 3G packet-switched domain by analysing data from the radio network (Iub interface) and the core network (Gn interface) to provide a fast root cause analysis (RCA) approach that takes these four dimensions into account. This was achieved using recent computer engineering advancements, namely Big Data platforms and data mining techniques through machine learning algorithms.

    Machine learning adaptive computational capacity prediction for dynamic resource management in C-RAN

    Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization tradeoffs. The assumption of a fixed computational capacity at the baseband unit (BBU) pools may result in underutilized or oversubscribed resources, thus affecting the overall Quality of Service (QoS). As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC). In this paper, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average of unused resources by 96%, but there is still QoS degradation when the RCC is higher than the predicted computational capacity (PCC). To further improve, two new strategies are proposed and tested in a realistic scenario: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), reducing the average of unsatisfied resources by 98% and 99.9% compared to DRM-AC, respectively. This work was supported in part by the Spanish Ministry of Science through the project CRIN-5G (RTI2018-099880-B-C32) with ERDF (European Regional Development Fund) and in part by the UPC through the COST CA15104 IRACON EU Project and the FPI-UPC-2018 Grant.
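    The abstract does not spell out the error-shifting rule. One plausible reading, sketched below, is to shift the predicted computational capacity upward by the worst recent shortfall so that under-provisioning (RCC above PCC) becomes rarer; the function name and the shifting rule are assumptions, not the paper's definition.

```python
# Hypothetical error-shifting step: raise the prediction by the largest
# shortfall (RCC - PCC) observed in a recent window.
import numpy as np

def error_shift(pcc, rcc_history, pcc_history):
    shortfall = np.maximum(np.asarray(rcc_history) - np.asarray(pcc_history), 0.0)
    return pcc + (shortfall.max() if shortfall.size else 0.0)

rcc_hist = [10.0, 12.5, 11.0]   # required computational capacity (illustrative)
pcc_hist = [10.5, 11.0, 11.5]   # what the ML predictor produced
print(error_shift(12.0, rcc_hist, pcc_hist))  # shifted prediction: 13.5
```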

    Mobile internet activity estimation and analysis at high granularity: SVR model approach

    Understanding mobile internet traffic patterns and the capacity to estimate future traffic, particularly at high spatiotemporal granularity, is crucial for proactive decision making in emerging and future cognizant cellular networks enabled with self-organizing features. It becomes even more important in the world of the 'Internet of Things', with machines communicating locally. In this paper, internet activity data from a mobile network operator's Call Detail Records (CDRs) is analysed at high granularity to study spatiotemporal variance and traffic patterns. To estimate future traffic at high granularity, a Support Vector Regression (SVR) based traffic model is trained and evaluated for the prediction of maximum, minimum and average internet traffic in the next hour based on the actual traffic in the last hour. The performance of the model is compared with that of the state-of-the-art (SOTA) deep learning models recently proposed in the literature for the same data, same granularity, and same predicates. It is concluded that this SVR model outperforms the SOTA deep and non-deep learning methods used in the literature.
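    A minimal sketch of the regression setup described above, predicting next-hour traffic from the previous hour with scikit-learn's SVR; the synthetic series and hyperparameters are illustrative stand-ins for the CDR data and the paper's tuning.

```python
# Next-hour traffic regression with an RBF-kernel SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
traffic = rng.random(200) * 100          # stand-in hourly CDR internet activity
X = traffic[:-1].reshape(-1, 1)          # last hour's traffic
y = traffic[1:]                          # next hour's traffic

model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, y)
print(model.predict([[traffic[-1]]]))    # forecast for the coming hour
```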

    Machine Learning Algorithms for Smart Data Analysis in Internet of Things Environment: Taxonomies and Research Trends

    Machine learning techniques will contribute towards making Internet of Things (IoT) symmetric applications among the most significant sources of new data in the future. In this context, network systems are endowed with the capacity to access varieties of experimental symmetric data across a plethora of network devices, study the data, obtain knowledge, and make informed decisions based on the dataset at their disposal. This study is limited to supervised and unsupervised machine learning (ML) techniques, regarded as the bedrock of IoT smart data analysis. It includes reviews and discussions of substantial issues related to supervised and unsupervised machine learning techniques, highlights the advantages and limitations of each algorithm, and discusses research trends and recommendations for further study.

    Forecasting Network Traffic: A Survey and Tutorial with Open-Source Comparative Evaluation

    This paper presents a review of the literature on network traffic prediction, while also serving as a tutorial on the topic. We examine works based on autoregressive moving average models, such as ARMA, ARIMA and SARIMA, as well as works based on Artificial Neural Network approaches, such as RNN, LSTM, GRU, and CNN. In all cases, we provide a complete and self-contained presentation of the mathematical foundations of each technique, which allows the reader to gain a full understanding of how the different proposed methods operate. Further, we perform numerical experiments on real data sets, which allows comparing the various approaches directly in terms of fitting quality and computational cost. We make our code publicly available, so that readers can readily access a wide range of forecasting tools and possibly use them as benchmarks for more advanced solutions.
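    As a small taste of the autoregressive family the survey covers, the sketch below fits an ARIMA model to a stand-in traffic series with statsmodels; the order (2, 1, 2) is an arbitrary illustrative choice, not one recommended by the paper.

```python
# Fit ARIMA(p, d, q) to a traffic series and forecast the next few points.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(0, 1, 300)) + 50  # stand-in traffic series

fitted = ARIMA(series, order=(2, 1, 2)).fit()
print(fitted.forecast(steps=5))                 # next five points
```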

    Computer Network Security: Strategies and Challenges in the Big Data Era

    As computer networks have become essential tools, their security has become a crucial problem for computer systems. Detecting unusual values in the large volumes of information produced by network traffic has acquired huge interest in the network security area. Anomaly detection is a starting point for preventing attacks, so it is important that every computer system in a network can detect anomalous events close to the time of their occurrence. Detecting these events can lead network administrators to identify system failures, take preventive actions and avoid massive damage. This work presents, first, how to identify network traffic anomalies by applying parallel computing techniques and Graphics Processing Units in two algorithms, one a supervised classification algorithm and the other based on network traffic image processing. Finally, solving anomaly detection with an unsupervised algorithm such as deep learning is proposed as a challenge.
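    The closing challenge, unsupervised deep learning for anomaly detection, is commonly realized with an autoencoder that flags records whose reconstruction error is high. The sketch below is a generic illustration of that idea, not the authors' method; the feature dimensionality and threshold are assumptions.

```python
# Train an autoencoder on "normal" traffic, then flag records that it
# reconstructs poorly. Data, sizes and threshold are illustrative.
import torch
import torch.nn as nn

ae = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
normal = torch.rand(512, 8)              # stand-in normal traffic features

for _ in range(200):                     # learn to reconstruct normal traffic
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

x = torch.rand(1, 8) * 3                 # an out-of-range record
err = nn.functional.mse_loss(ae(x), x).item()
print("anomalous" if err > 0.1 else "normal")  # threshold is illustrative
```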
