
    Mobile edge computing-based data-driven deep learning framework for anomaly detection

    5G is anticipated to embed artificial intelligence (AI) to adroitly plan, optimize and manage the highly complex network by leveraging data generated at different positions of the network architecture. Outages and situations leading to congestion in a cell pose severe hazards for the network. High false-alarm rates and inadequate accuracy are the major limitations of current approaches to detecting anomalies, namely outages and sudden spikes in traffic activity that may result in congestion, in mobile cellular networks. These shortcomings waste limited resources, which ultimately elevates operational expenditure (OPEX) and degrades quality of service (QoS) and quality of experience (QoE). Motivated by the outstanding success of deep learning (DL) technology, our study applies it to detect the above-mentioned anomalies and also adopts the mobile edge computing (MEC) paradigm, in which the core network (CN)'s computations are divided across the cellular infrastructure among different MEC servers (co-located with base stations) to relieve the CN. Each server monitors the user activities of multiple cells and applies an L-layer feedforward deep neural network (DNN), trained on a real call detail record (CDR) dataset, for anomaly detection. Our framework achieves 98.8% accuracy with a 0.44% false positive rate (FPR), notable improvements that overcome the deficiencies of earlier studies. The numerical results demonstrate the usefulness and superiority of the proposed detector.
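    For illustration, a minimal sketch of how such an L-layer feedforward detector might be assembled is shown below. The CDR feature set, layer sizes, class balance, and scikit-learn usage are assumptions for demonstration, not the paper's actual configuration or results.

```python
# Minimal sketch of an L-layer feedforward anomaly detector for CDR data.
# Features, layer sizes, and data are illustrative assumptions, not the
# configuration reported in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical per-cell CDR features: SMS-in, SMS-out, call-in, call-out, internet traffic.
X_normal = rng.normal(loc=1.0, scale=0.2, size=(5000, 5))
X_anomalous = rng.normal(loc=2.5, scale=0.8, size=(250, 5))   # outages / traffic spikes
X = np.vstack([X_normal, X_anomalous])
y = np.concatenate([np.zeros(5000), np.ones(250)])            # 1 = anomaly

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)

# Three hidden layers stand in for the paper's L-layer DNN.
dnn = MLPClassifier(hidden_layer_sizes=(64, 32, 16), activation='relu',
                    max_iter=500, random_state=0)
dnn.fit(scaler.transform(X_train), y_train)

y_pred = dnn.predict(scaler.transform(X_test))
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"accuracy: {accuracy_score(y_test, y_pred):.4f}, FPR: {fp / (fp + tn):.4f}")
```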

    A Data-Driven Customer Quality of Experience System for a Cellular Network


    Aspects of knowledge mining on minimizing drive tests in self-organizing cellular networks

    The demand for mobile data traffic is about to explode, and this drives operators to find ways to further increase the offered capacity of their networks. If networks are deployed in the traditional way, this traffic explosion will be addressed by significantly increasing the number of network elements, which is expected to increase the costs and the complexity of planning, operating and optimizing the networks. To ensure effective and cost-efficient operations, a higher degree of automation and self-organization is needed in next-generation networks. For this reason, the concept of self-organizing networks was introduced in LTE, covering a multitude of use cases, specifically in the areas of self-configuration, self-optimization and self-healing of networks. From an operator's perspective, automated collection and analysis of field measurements, complementing traditional drive test campaigns, is one of the top use cases that can provide significant cost savings in self-organizing networks.

    This thesis studies the Minimization of Drive Tests (MDT) in self-organizing cellular networks from three different aspects. The first aspect is network operations, particularly the network fault management process, since traditional drive tests are often conducted for troubleshooting purposes. The second aspect is network functionality, particularly the technical details of the specified measurement and signaling procedures in the different network elements that are needed to automate the collection of field measurement data. The third aspect is the analysis of the measurement databases, a process used to increase the degree of automation and self-awareness in networks, particularly the mathematical means for autonomously finding meaningful patterns of knowledge in huge amounts of data. Although these technical areas have been widely discussed in the literature, they have been treated separately, and only a few papers discuss how, for example, knowledge mining can be employed to process field measurement data in a way that minimizes drive tests in self-organizing LTE networks.

    The objective of the thesis is to use well-known knowledge mining principles to develop novel self-healing and self-optimization algorithms that analyze MDT databases to detect coverage holes, sleeping cells and other geographical areas of anomalous network behavior. The results suggest that by employing knowledge mining to process MDT databases, one can acquire knowledge for discriminating between different network problems and detecting anomalous network behavior. For example, downlink coverage optimization is enhanced by classifying RLF reports into coverage, interference and handover problems. Moreover, by incorporating a normalized power headroom report with the MDT reports, better discrimination between uplink coverage problems and parameterization problems is obtained. Knowledge mining is also used to detect sleeping cells by means of supervised and unsupervised learning. The detection framework is based on a novel approach in which diffusion mapping is used to learn the network's behavior in its healthy state; sleeping cells are then detected by observing an increase in the number of anomalous reports associated with a certain cell, where the association is formed by correlating the geographical locations of anomalous reports with the estimated dominance areas of the cells (a minimal sketch of this idea follows the abstract). Moreover, RF fingerprint positioning of the MDT reports is studied, and the results suggest that RF fingerprinting can provide quite detailed location estimates in dense heterogeneous networks. In addition, self-optimization of the mobility state estimation (MSE) parameters is studied in heterogeneous LTE networks, and the results suggest that by gathering MDT measurements and constructing statistical velocity profiles, MSE parameters can be adjusted autonomously, resulting in reasonably good classification accuracy.

    The overall outcome of the thesis is as follows. By automating the classification of measurement reports between certain problems, network engineers can acquire knowledge about the root causes of performance degradation in their networks. This saves time and resources and results in a faster decision-making process, which in turn shortens network outages and improves the quality of the network. By taking the geographical locations of anomalous field measurements into account in the network performance analysis, finer granularity in estimating the location of problem areas can be achieved, further improving the operational decisions that guide corrective actions, for example where to start network optimization. Moreover, by automating the time- and resource-consuming task of tuning the mobility state estimation parameters, operators can enhance the mobility performance of high-velocity UEs in heterogeneous radio networks in a cost-efficient and backward-compatible manner.
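    The sketch below illustrates the sleeping-cell detection idea referenced above: learn a diffusion-map embedding of reports from the network's healthy state, embed new reports via the Nystrom extension, flag reports that fall off the healthy manifold, and count anomalies per cell. The KPI feature vectors, kernel bandwidth, and thresholds are illustrative assumptions, not the thesis's data or parameters.

```python
# Sketch of diffusion-map-based sleeping-cell detection on synthetic data.
# All feature vectors, eps, and thresholds are assumptions for illustration.
import numpy as np
from collections import Counter

def diffusion_map(X, eps, n_coords=2):
    """Diffusion-map embedding of healthy-state training reports X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)                         # Gaussian affinity matrix
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))               # symmetric normalization of D^-1 K
    w, V = np.linalg.eigh(A)
    idx = np.argsort(w)[::-1][1:n_coords + 1]     # drop the trivial top eigenpair
    lam = w[idx]
    Psi = V[:, idx] / np.sqrt(d)[:, None]         # right eigenvectors of the Markov matrix
    return lam, Psi

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(300, 4))     # hypothetical healthy KPI vectors
eps = 2.0
lam, Psi = diffusion_map(healthy, eps)
coords = Psi * lam                                # diffusion coordinates of training data

# Threshold: 99th percentile of each training point's nearest-neighbour distance.
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(D, np.inf)
threshold = np.percentile(D.min(axis=1), 99)

# Score incoming (report, cell id) pairs and count anomalies per cell.
reports = [(rng.normal(0.0, 1.0, 4), 7)] + \
          [(rng.normal(4.0, 0.5, 4), 12) for _ in range(3)]
anomaly_counts = Counter()
for x, cell in reports:
    k = np.exp(-((healthy - x) ** 2).sum(axis=1) / eps)
    if k.sum() < 1e-6:                            # negligible affinity: off the healthy manifold
        anomaly_counts[cell] += 1
        continue
    c = (k / k.sum()) @ Psi                       # Nystrom extension into diffusion coordinates
    if np.linalg.norm(coords - c, axis=1).min() > threshold:
        anomaly_counts[cell] += 1
print(dict(anomaly_counts))                       # a rising count flags a sleeping cell
```

    The affinity guard handles reports so far from the training data that the Nystrom extension becomes unreliable. Note that the thesis associates anomalous reports with cells via estimated dominance areas of their geographical locations; the direct cell tag used here is a simplification.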

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is presented. For future networks to overcome current limitations and address the issues of today's cellular systems, it is clear that more intelligence needs to be deployed so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each surveyed paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work outlines future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.

    Cell fault management using machine learning techniques

    This paper surveys the literature relating to the application of machine learning to fault management in cellular networks from an operational perspective. We summarise the main issues as 5G networks evolve and their implications for fault management. We describe the relevant machine learning techniques, through to deep learning, and survey the progress that has been made in their application, based on the building blocks of a typical fault management system. We review recent work to develop the ability of deep learning systems to explain and justify their recommendations to network operators. We discuss forthcoming changes in network architecture that are likely to impact fault management and offer a vision of how fault management systems can exploit deep learning in the future. We identify a series of research topics for further study in order to achieve this.

    Unsupervised Machine Learning for Networking: Techniques, Applications and Research Challenges

    While machine learning and artificial intelligence have long been applied in networking research, the bulk of such work has focused on supervised learning. Recently, there has been a rising trend of employing unsupervised machine learning, using unstructured raw network data, to improve network performance and provide services such as traffic engineering, anomaly detection, Internet traffic classification, and quality of service optimization. The interest in applying unsupervised learning techniques in networking stems from their great success in other fields, such as computer vision, natural language processing, speech recognition, and optimal control (e.g., for developing autonomous self-driving cars). Unsupervised learning is attractive because it frees us from the need for labeled data and manual feature engineering, thereby facilitating flexible, general, and automated methods of machine learning. The focus of this survey paper is to provide an overview of the applications of unsupervised learning in the domain of networking. We provide a comprehensive survey highlighting recent advancements in unsupervised learning techniques and describe their applications in various learning tasks in the context of networking. We also discuss future directions and open research issues, while identifying potential pitfalls. While a few survey papers focusing on the applications of machine learning in networking have previously been published, a survey of similar scope and breadth is missing from the literature. Through this paper, we advance the state of knowledge by carefully synthesizing the insights from these survey papers while also providing contemporary coverage of recent advances.

    An approach for network outage detection from drive-testing databases

    A data-mining framework for analyzing a cellular network drive-testing database is described in this paper. The presented method is designed to detect sleeping base stations, network outages, and changes in the dominance areas in a cognitive and self-organizing manner. The essence of the method is to find similarities between periodical network measurements and previously known outage data. For this purpose, diffusion-map dimensionality reduction and nearest-neighbor data classification are utilized. The method is cognitive because it requires training data for the outage detection, and it is autonomous because it uses minimization of drive testing (MDT) functionality to gather the training and testing data. The motivation for classifying MDT measurement reports into periodical, handover, and outage categories is to detect areas where periodical reports start to become similar to the outage samples. Moreover, these areas are associated with estimated dominance areas to detect sleeping base stations. In the studied verification case, measurement classification increases the number of samples that can be used for detecting performance degradations and consequently makes the outage detection faster and more reliable.
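    As a rough illustration of the classification step, the sketch below labels incoming MDT reports as periodical, handover, or outage by nearest-neighbour similarity to known training samples, then tracks how large a share of nominally periodical reports drifts toward the outage class. It assumes reports have already been reduced to low-dimensional coordinates (a diffusion-map sketch appears after the thesis abstract above); the clusters and labels here are synthetic placeholders, not the paper's data.

```python
# Sketch of nearest-neighbour classification of (already embedded) MDT reports
# into periodical / handover / outage categories. Data is synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
classes = {0: "periodical", 1: "handover", 2: "outage"}

# Hypothetical 2-D embedded training reports, one cluster per report type.
X_train = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),
                     rng.normal([2, 0], 0.3, (80, 2)),
                     rng.normal([0, 2], 0.3, (40, 2))])
y_train = np.concatenate([np.full(200, 0), np.full(80, 1), np.full(40, 2)])

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# New periodical reports from one area start resembling outage samples: a
# growing outage-like share among 'periodical' reports flags degradation there.
X_new = rng.normal([0.3, 1.7], 0.3, (20, 2))
pred = knn.predict(X_new)
outage_share = (pred == 2).mean()
print(f"outage-like share of periodical reports: {outage_share:.2f}")
```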