8 research outputs found

    Cell degradation detection based on an inter-cell approach

    Fault management is a crucial part of cellular network management systems. The status of the base stations is usually monitored through well-defined key performance indicators (KPIs). Approaches to cell degradation detection are based on either intra-cell or inter-cell analysis of the KPIs: in intra-cell analysis, KPI profiles are built from each cell's local history, whereas in inter-cell analysis, the KPIs of one cell are compared with the corresponding KPIs of other cells. In this work, we argue in favor of the inter-cell approach and apply a degradation detection method that can detect a sleeping cell that would be difficult to observe using traditional intra-cell methods. We demonstrate its use for detecting emulated degradations in performance data recorded from a live LTE network. The method can be integrated into current systems because it operates on existing KPIs without any major modification to the network infrastructure.
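    The inter-cell comparison this abstract describes can be illustrated with a minimal sketch (the cell names, KPI values and the robust z-score rule below are illustrative assumptions, not the paper's actual method):

```python
import statistics

def intercell_outliers(kpi_by_cell, threshold=3.0):
    """Flag cells whose KPI deviates strongly from the peer group,
    using a robust z-score based on the median and the MAD."""
    values = list(kpi_by_cell.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    flagged = {}
    for cell, v in kpi_by_cell.items():
        z = 0.6745 * (v - med) / mad  # 0.6745 makes MAD consistent with std
        if abs(z) > threshold:
            flagged[cell] = z
    return flagged
```

    A sleeping cell with, say, a collapsed accessibility KPI would stand out from its peers even when its own history is too short or too noisy to build a reliable intra-cell profile.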

    Self-Supervised Transformer Architecture for Change Detection in Radio Access Networks

    Radio Access Networks (RANs) for telecommunications represent large agglomerations of interconnected hardware consisting of hundreds of thousands of transmitting devices (cells). Such networks undergo frequent and often heterogeneous changes caused by network operators, who are seeking to tune their system parameters for optimal performance. The effects of such changes are challenging to predict and will become even more so with the adoption of 5G/6G networks. Therefore, RAN monitoring is vital for network operators. We propose a self-supervised learning framework that leverages self-attention and self-distillation for this task. It works by detecting changes in Performance Measurement data, a collection of time-varying metrics that reflect a set of diverse measurements of network performance at the cell level. Experimental results show that our approach outperforms the state of the art by 4% on a real-world dataset consisting of about a hundred thousand time series. It also has the merits of being scalable and generalizable, which allows it to provide deep insight into the specifics of mode-of-operation changes while relying minimally on expert knowledge.
    Comment: Accepted by the 2023 IEEE International Conference on Communications (ICC), Machine Learning for Communications and Networking Track.
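    The transformer architecture itself is beyond a short sketch, but the underlying task — flagging a change in the distribution of a per-cell Performance Measurement time series — can be illustrated with a classic CUSUM detector (a deliberately simple stand-in, not the paper's method; the drift and threshold values are arbitrary):

```python
def cusum_change(series, drift=0.2, threshold=3.0, baseline=10):
    """Two-sided CUSUM: accumulate deviations from a baseline mean and
    raise an alarm when either cumulative sum exceeds the threshold.
    Returns the index of the first alarm, or None if nothing is detected."""
    mean0 = sum(series[:baseline]) / baseline  # mean of the reference window
    s_pos = s_neg = 0.0
    for i, x in enumerate(series):
        s_pos = max(0.0, s_pos + (x - mean0 - drift))  # upward shifts
        s_neg = max(0.0, s_neg + (mean0 - x - drift))  # downward shifts
        if s_pos > threshold or s_neg > threshold:
            return i
    return None
```

    A learned model such as the one proposed here would replace this fixed baseline-and-threshold rule with representations that generalize across heterogeneous metrics and cells.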

    Outlier Detection Mechanism for Ensuring Availability in Wireless Mobile Networks Anomaly Detection

    Anomaly detection is an important research problem focused on finding items that are significantly different from, incomparable with, or inconsistent with the majority of the data. The recent explosion of collected data has brought this problem into focus, offering both new opportunities and new difficulties for anomaly detection research. Anomaly detection is useful in many areas, including the analysis and monitoring of data connected to network traffic, weblogs, medical domains, financial transactions and transportation. It is also an important part of assessing the effectiveness of mobile ad hoc networks (MANETs), which have become a popular research topic in recent years due to difficulties in their associated protocols. MANETs allow users to connect to a dynamic infrastructure regardless of their geographic location; small, powerful and affordable devices enable MANETs to self-organize and expand quickly. Using an outlier detection approach, the proposed work provides cryptographic properties and availability for an RFID-WSN integrated network with node counts ranging from 500 to 5000. The detection ratio and anomaly scores are used to measure the system's resistance to outliers. The suggested method uses anomaly scores to identify outliers and provide protection against DoS attacks, and has been shown to detect intruders within milliseconds without interfering with authorized users' privileges. Throughput is improved by at least 6.8% using the suggested protocol, while Packet Delivery Ratio (PDR) is improved by at least 9.2% and by as much as 21.5%.
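    As a rough sketch of the anomaly-score idea (the scoring rule, the threshold and the use of per-node request counts are illustrative assumptions; the abstract does not specify the actual mechanism), a node flooding the network with requests can be flagged by z-scoring traffic counts across nodes:

```python
import math

def anomaly_scores(request_counts):
    """Z-score each node's request count against the population."""
    vals = list(request_counts.values())
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = math.sqrt(var) or 1.0  # guard against a zero-variance population
    return {node: (v - mean) / std for node, v in request_counts.items()}

def flag_outliers(request_counts, threshold=2.5):
    """Nodes whose score exceeds the threshold are treated as potential
    DoS sources and can be isolated without touching normal users."""
    return [n for n, s in anomaly_scores(request_counts).items() if s > threshold]
```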

    Accessibility Degradation Prediction on LTE/SAE Network Using Discrete Time Markov Chain (DTMC) Model

    In this paper, an algorithm for predicting accessibility performance on an LTE/SAE network based on relevant historical key performance indicator (KPI) data is proposed. Since there are three KPIs related to accessibility, each representing a different segment, a method to map these three KPI values onto the status of accessibility performance is proposed. The network condition is categorized as high, acceptable or low for each observation interval: the first state indicates that the system is running optimally, the second that the system has deteriorated and needs full attention, and the third that the system has degraded beyond tolerable conditions. Once the state sequence has been obtained, a transition probability matrix can be derived and used to predict future conditions with a DTMC model. The results are system predictions in the form of probability values for each state at a specific future time. These predicted values are required for proactive health monitoring and fault management. Accessibility degradation prediction is then conducted using measurement data from an eNodeB in the LTE network over a period of one month.
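    The prediction step outlined above — estimate a transition matrix from the observed high/acceptable/low state sequence, then propagate the state distribution forward — can be sketched as follows (the example sequence is invented):

```python
import numpy as np

STATES = ["high", "acceptable", "low"]

def transition_matrix(seq):
    """Estimate the DTMC transition matrix from an observed state sequence
    by counting transitions and normalizing each row."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((3, 3))
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
    row = counts.sum(axis=1, keepdims=True)
    row[row == 0] = 1  # avoid division by zero for unseen states
    return counts / row

def predict(p0, P, n):
    """State distribution n steps ahead: p_n = p_0 @ P^n."""
    return p0 @ np.linalg.matrix_power(P, n)
```

    Starting from a known current state (a one-hot distribution `p0`), `predict` yields the probability of being in each accessibility state at a chosen future interval, which is exactly the quantity used for proactive monitoring.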

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years involving Machine Learning (ML) algorithms applied to self organizing cellular networks is performed. In order for future networks to overcome current limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Network (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use-case and discuss how each proposed solution performed. In addition, a comparison between the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also presents future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.

    Context-Aware Self-Healing for Small Cell Networks

    The last years have seen a continuous increase in the use of mobile communications. To cope with the growing traffic, recently deployed technologies have deepened the adoption of small cells (low-powered base stations) to serve areas with high demand or coverage issues, where macrocells can be unsuccessful or inefficient. Also, new cellular and non-cellular technologies (e.g. WiFi) coexist with legacy ones, including multiple deployment schemes (macrocells, small cells), in what is known as heterogeneous networks (HetNets). Due to the huge complexity of HetNets, their operation, administration and management (OAM) has become increasingly difficult. To overcome this, the NGMN Alliance and the 3GPP defined the Self-Organizing Network (SON) paradigm, aiming to automate the OAM procedures in order to reduce their costs and increase the resulting performance. One key focus of SON is the self-healing of the network, covering the automatic detection of problems, the diagnosis of their causes, their compensation and their recovery. Until recently, SON mechanisms have been based solely on the analysis of alarms and performance indicators. However, on the one hand, this approach has become very limited given the complexity of the scenarios, particularly in indoor cellular environments, where the deployment of small cells, their coexistence with multiple telecommunications systems and the nature of those environments (in terms of propagation, coverage overlapping, fast demand changes and users' mobility) introduce many challenges for classic SON. On the other hand, modern user equipment (e.g. smartphones), equipped with powerful processors, sensors and applications, generates a huge amount of context information. Context refers to those variables not directly associated with the telecommunication service, but with the terminals and their environment: the user's position, applications, social data, etc.

    These can be an invaluable source of information for the management of the network, in what we have denominated context-aware SON, which is the approach proposed in this thesis. To develop this concept, the thesis follows a top-down approach. Firstly, the characteristics of cellular deployments are assessed, especially for indoor small cell networks; in those scenarios, the need for context-aware SON is evaluated and considered indispensable. Secondly, a new cellular architecture is defined to integrate both context information and SON mechanisms in the management plane of the mobile network, specifying how context becomes an integral part of cellular OAM/SON; a real-world implementation of the architecture is also proposed. Thirdly, from the established general SON architecture, a logical self-healing framework is defined to support the context-aware healing mechanisms to be developed. Fourthly, different self-healing algorithms are defined depending on the failures to be managed and the conditions of the considered scenario. These mechanisms are based on probabilistic analysis, making use of both context and network data for the detection and diagnosis of cellular issues. The conditions for the implementation of these methods are assessed, and their applicability is evaluated by means of simulators and testbed trials. The results show important improvements in performance and capabilities in comparison to previous methods, demonstrating the relevance of the proposed approach.
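    The probabilistic detection/diagnosis step could, for instance, combine network symptoms and context evidence in a naive Bayes posterior over fault causes (the causes, symptoms and probabilities below are invented for illustration; the thesis's actual models are more elaborate):

```python
def diagnose(evidence, priors, likelihoods):
    """Naive Bayes posterior over fault causes.
    evidence: {symptom: bool}; likelihoods[cause][symptom] = P(symptom | cause)."""
    posterior = {}
    for cause, prior in priors.items():
        p = prior
        for symptom, observed in evidence.items():
            l = likelihoods[cause][symptom]
            p *= l if observed else (1.0 - l)  # independence assumption
        posterior[cause] = p
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

# Invented example: two candidate causes, two observable symptoms.
priors = {"sleeping_cell": 0.5, "interference": 0.5}
likelihoods = {
    "sleeping_cell": {"zero_traffic": 0.9, "high_drop_rate": 0.2},
    "interference":  {"zero_traffic": 0.1, "high_drop_rate": 0.8},
}
```

    Context data (e.g. user positions showing that subscribers are present in the area while the cell carries no traffic) would enter this scheme as additional evidence variables alongside the network KPIs.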

    Self-organization for 5G and beyond mobile networks using reinforcement learning

    The next generations of mobile networks, 5G and beyond, must overcome the limitations of current networks as well as improve network performance. Some of the requirements envisioned for future mobile networks are: addressing the massive growth required in coverage, capacity and traffic; providing better quality of service and experience to end users; supporting ultra-high data rates and reliability; and ensuring latency as low as one millisecond. Thus, in order for future networks to meet all of these stringent requirements, a promising concept has emerged: self-organising networks (SONs). SONs consist of making mobile networks more adaptive and autonomous, and are divided into three main branches depending on their use-cases, namely self-configuration, self-optimisation and self-healing. SON is a very promising and broad concept, and in order to enable it, more intelligence needs to be embedded in the mobile network. One possible solution is the utilisation of machine learning (ML) algorithms. ML has many branches, such as supervised, unsupervised and Reinforcement Learning (RL), and all can be used in different SON use-cases. The objectives of this thesis are to explore different RL techniques in the context of SONs, more specifically in self-optimisation use-cases. First, the use-case of user-cell association in future heterogeneous networks is analysed and optimised. This scenario considers not only Radio Access Network (RAN) constraints, but also backhaul constraints. Based on this, a distributed solution utilising RL is proposed and compared with other state-of-the-art methods. Results show that the proposed RL algorithm outperforms current ones and is able to achieve better user satisfaction while minimising the number of users in outage. Another objective of this thesis is the evaluation of Unmanned Aerial Vehicles (UAVs) to optimise cellular networks.
    It is envisioned that UAVs can be utilised in different SON use-cases and integrated with RL algorithms to determine their optimal 3D positions in space according to network constraints. As such, two different mobile network scenarios are analysed: an emergency network and a pop-up network. The emergency scenario considers that a major natural disaster has destroyed most of the ground network infrastructure, and the goal is to provide coverage to as many users as possible using UAVs as access points. The second scenario simulates an event happening in a city where, because of ground network congestion, network capacity needs to be enhanced by the deployment of aerial base stations. For both scenarios, different types of RL algorithms are considered and their complexity and convergence are analysed. In both cases it is shown that UAVs coupled with RL are capable of solving network issues in an efficient and quick manner. Thus, due to its ability to learn from interaction with an environment and from previous experience, without knowing the dynamics of the environment or relying on previously collected data, RL is considered a promising solution to enable SON.
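    As a hedged illustration of how RL might drive a UAV placement decision (a toy one-dimensional stand-in for the 3D positioning problem described above; the environment, rewards and hyperparameters are invented):

```python
import random

def q_learning(n_states, n_actions, step, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.5, max_steps=50):
    """Tabular Q-learning with epsilon-greedy exploration.
    `step(s, a)` must return `(next_state, reward, done)`."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states - 1)  # random non-goal start state
        for _ in range(max_steps):
            if random.random() < eps:
                a = random.randrange(n_actions)  # explore
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])  # exploit
            s2, r, done = step(s, a)
            # Standard TD update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

# Toy environment: a UAV moves along 5 candidate positions; position 4
# gives the best coverage and ends the episode with reward 1.
def corridor(s, a):
    s2 = max(0, min(4, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4
```

    After training, the greedy policy moves the UAV toward the high-coverage position. The thesis's scenarios replace this corridor with 3D positions and coverage- or capacity-based rewards, but the learning loop has the same shape.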