485 research outputs found

    A survey of machine learning techniques applied to self organizing cellular networks

    This paper surveys the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks. For future networks to overcome the limitations and issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work provides future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.

    Methods for Self-Healing based on traces and unsupervised learning in Self-Organizing Networks

    With the advent of Long-Term Evolution (LTE) networks and the spread of a highly varied range of services, mobile operators are increasingly aware of the need to strengthen their maintenance and operational tasks in order to ensure a quality and positive user experience. Furthermore, the coexistence of multiple Radio Access Technologies (RAT), the increase in traffic demand and the need to provide a great variety of services are steering the cellular network toward a new scenario where management tasks are becoming increasingly complex. As a result, mobile operators are focusing their efforts on maintaining their networks without increasing either operational expenditures (OPEX) or capital expenditures (CAPEX). In this context, it is becoming necessary to effectively automate management tasks through the concept of Self-Organizing Networks (SON). In particular, SON functions cover three different areas: Self-Configuration, Self-Optimization and Self-Healing. Self-Configuration automates the deployment of new network elements and their parameter configuration. Self-Optimization is in charge of modifying the configuration of the parameters in order to enhance user experience. Finally, Self-Healing aims to reduce the impact that failures and service degradation have on the end user. To that end, Self-Healing (SH) systems monitor the network elements through several alarms, measurements and indicators in order to detect outage and degraded cells, then diagnose the cause of their problem and, finally, execute compensation or recovery actions. Even though mobile networks are becoming more prone to failures due to their huge increase in complexity, the automation of troubleshooting tasks through the SH functionality has not been fully realized. Traditionally, both the research and the development of SON networks have been related to Self-Configuration and Self-Optimization.
This has been mainly due to the challenges that need to be faced when SH systems are studied and implemented. This is especially relevant in the case of fault diagnosis. However, mobile operators are paying increasingly more attention to self-healing systems, which calls for approaches that address those challenges and enable the development of SH functions. On the one hand, diagnosis currently continues to be done manually, since it requires considerable hard-earned experience to effectively identify the fault cause. In particular, troubleshooting experts thoroughly analyze the performance of the degraded network elements by means of measurements and indicators in order to identify the cause of the detected anomalies and symptoms. Therefore, automating the diagnosis tasks requires knowing which specific performance indicators have to be analyzed and how to map the identified symptoms to the associated fault cause. This knowledge is acquired over time and is operator-specific, depending on each operator's policies and network features. Furthermore, troubleshooting experts typically solve failures in a network without either documenting the troubleshooting process or recording the analyzed indicators along with the label of the identified fault cause. In addition, because there is no specific regulation on documentation, the few documented faults are neither properly defined nor described in a standard way (e.g. the same fault cause may be given different labels), making it even more difficult to automate the extraction of expert knowledge. As a result, this lack of documentation and of historically reported faults makes automating the diagnosis process more challenging. On the other hand, when the exact root cause cannot be remotely identified through the statistical information gathered at cell level, drive tests are scheduled to gather further information.
These drive tests aim to monitor mobile network performance by driving vehicles along a predefined route to measure the radio interface quality. In particular, the troubleshooting experts use specialized test equipment to manually collect user-level measurements. Consequently, drive tests entail a hefty expense for mobile operators, since they involve considerable investment in time and costly resources (such as personnel, vehicles and complex test equipment). In this context, the Third Generation Partnership Project (3GPP) has standardized the automatic collection of field measurements (e.g. signaling messages, radio measurements and location information) through the mobile traces feature and its extended functionality, the Minimization of Drive Tests (MDT). In particular, those features make it possible to automatically monitor the network performance in detail, reaching areas that cannot be covered by drive testing (e.g. indoor or private zones). Thus, mobile traces are regarded as an important enabler for SON, since they spare operators from relying on expensive drive tests while, at the same time, providing greater detail than traditional cell-level indicators. As a result, enhancing the SH functionalities through mobile traces increases both the potential cost savings and the granularity of the analysis. Hence, in this thesis, several solutions are proposed to overcome the limitations that prevent the development of SH, with special emphasis on the diagnosis phase. To that end, the lack of historical labeled databases has been addressed in two main ways. First, unsupervised techniques have been used to automatically design diagnosis systems from real data without requiring either documentation or historical reports about fault cases.
Second, a group of significant faults has been modeled and implemented in a dynamic system-level simulator in order to generate an artificial labeled database, which is extremely important for evaluating and comparing the proposed solutions with state-of-the-art algorithms. Then, the diagnosis of those faults that cannot be identified through the statistical performance indicators gathered at cell level is automated by the analysis of mobile traces, avoiding costly drive tests. In particular, in this thesis, the mobile traces have been used to automatically identify the cause of each unexpected user disconnection, to geo-localize RF problems that affect the cell performance and to identify the impact of a fault depending on the availability of legacy systems (e.g. Third Generation, 3G). Finally, the proposed techniques have been validated using real and simulated LTE data by analyzing their performance and comparing it with reference mechanisms.
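The unsupervised route described above can be illustrated with a minimal sketch: cluster per-cell KPI vectors so that cells with similar degradation patterns group together, letting a troubleshooting expert label each cluster once instead of labeling every case. The KPI features, the two-cluster setup and all values below are invented for illustration; the thesis itself works with real traces and richer indicators.

```python
import math

def kmeans2(points, iters=20):
    """Two-cluster k-means with deterministic farthest-point initialization."""
    c0 = points[0]
    c1 = max(points, key=lambda p: math.dist(p, c0))
    centroids = [c0, c1]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            i = 0 if math.dist(p, centroids[0]) <= math.dist(p, centroids[1]) else 1
            clusters[i].append(p)
        # Recompute each centroid as the mean of its members (keep old if empty).
        centroids = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def assign(p, centroids):
    """Return the index of the nearest centroid for a KPI vector."""
    return min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))

# Toy per-cell KPI vectors: (normalized call-drop rate, normalized throughput).
kpis = [(0.02, 0.90), (0.05, 0.85), (0.03, 0.95),   # healthy-looking cells
        (0.80, 0.20), (0.75, 0.15), (0.90, 0.10)]   # degraded cells
centroids = kmeans2(kpis)
# Cells in the same cluster share a candidate fault signature for an expert to label.
```

A real diagnosis system would cluster many more indicators and validate the clusters against known fault cases, but the principle — group first, label groups later — is the same.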

    Benefits and limits of machine learning for the implicit coordination on SON functions

    Due to the introduction of new network functionalities in next-generation mobile networks, e.g., slicing or multi-antenna systems, as well as the coexistence of multiple radio access technologies, the optimization tasks become extremely complex, increasing the OPEX (OPerational EXpenditures). In order to provide services to the users with competitive Quality of Service (QoS) while keeping operational costs low, the Self-Organizing Network (SON) concept was introduced by the standardization bodies to add an automation layer to network management. Thus, multiple SON functions (SFs) were proposed, each optimizing a specific network domain, such as coverage or capacity. The conventional design of SFs conceived each function as a closed-loop controller optimizing a local objective by tuning specific network parameters. However, the relationship among multiple SFs was neglected to some extent.
Therefore, many conflicting scenarios appear when multiple SFs are instantiated in a mobile network. Having conflicting functions in the network deteriorates the users' QoS and strains the signaling resources in the network. Thus, a coordination layer (which could also be an entity in the network) is expected to reconcile the conflicts between SFs. Nevertheless, due to the interleaved linkage among those functions, it is complex to model their interactions and dependencies in closed form. Thus, machine learning is proposed to drive a joint optimization of a global Key Performance Indicator (KPI), hiding the intricate relationships between functions. We call this approach implicit coordination. In the first part of this thesis, we propose a centralized, fully implicit coordination approach based on machine learning (ML) and apply it to the coordination of two well-established SFs: Mobility Robustness Optimization (MRO) and Mobility Load Balancing (MLB). We find that this approach can be applied as long as the coordination problem is decomposed into three functional planes: the controllable, environmental, and utility planes. However, fully implicit coordination comes at a high cost: it requires a large amount of data to train the ML models. To improve the data efficiency of our approach (i.e., to achieve good model performance with less training data), we propose a hybrid approach that mixes ML with closed-form models. With the hybrid approach, we study the conflict between the MLB and Coverage and Capacity Optimization (CCO) functions. Then, we apply it to the coordination among the MLB, Inter-Cell Interference Coordination (ICIC), and Energy Savings (ES) functions. With the hybrid approach, we find part of the parameter set optimally in one shot, which makes it suitable for dynamic scenarios in which a fast response is expected from a centralized coordinator.
Finally, we present a way to formally include MRO in the hybrid approach and show how the framework can be extended to cover challenging network scenarios such as Ultra-Reliable Low Latency Communications (URLLC).
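The implicit-coordination idea — jointly optimize a global KPI while treating the SF interactions as a black box — can be sketched as a search over the controllable plane. The KPI surface, parameter ranges and simple random-search optimizer below are illustrative assumptions standing in for the thesis's ML models, not the actual method:

```python
import random

def global_kpi(mlb_offset, mro_hyst):
    """Invented toy KPI surface (higher is better) with a known optimum at
    (3.0, 2.0) dB and a cross term standing in for the MLB/MRO interaction.
    In implicit coordination this is a black box: only (parameters -> KPI)
    is observed, never the internal SF couplings."""
    a, b = mlb_offset - 3.0, mro_hyst - 2.0
    return -(a * a) - (b * b) - 0.3 * a * b

def coordinate(n_trials=2000, seed=1):
    """Black-box random search over the controllable parameter plane."""
    rng = random.Random(seed)
    best, best_kpi = None, float("-inf")
    for _ in range(n_trials):
        params = (rng.uniform(0.0, 6.0), rng.uniform(0.0, 6.0))  # assumed dB ranges
        kpi = global_kpi(*params)
        if kpi > best_kpi:
            best, best_kpi = params, kpi
    return best, best_kpi

(best_offset, best_hyst), kpi = coordinate()
```

A learned model (rather than this brute-force search) is what makes the approach sample-hungry, which is exactly the data-efficiency problem the hybrid approach addresses by computing part of the parameter set in closed form.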

    Data-Driven Prediction for Reliable Mission-Critical Communications


    User mobility prediction and management using machine learning

    The next generation mobile networks (NGMNs) are envisioned to overcome current user mobility limitations while improving network performance. Some of the challenges envisioned for mobility management in future mobile networks are: addressing massive traffic growth bottlenecks; providing better quality and experience to end users; supporting ultra-high data rates; and ensuring ultra-low latency and seamless handovers (HOs) from one base station (BS) to another. Thus, for future networks to manage user mobility under all of these stringent requirements, artificial intelligence (AI) is deemed to play a key role in automating the end-to-end process through machine learning (ML). The objective of this thesis is to explore user mobility prediction and management use cases using ML. First, a background and literature review is presented, covering an overview of current mobile networks and ML-driven applications for user mobility and management. Next, use cases of mobility prediction in dense mobile networks are analysed and optimised with ML algorithms; an overall framework test accuracy of 91.17% was obtained with an artificial neural network (ANN), compared against all other mobility prediction algorithms. Furthermore, a concept of mobility-prediction-based energy consumption is discussed to automate and classify user mobility and reduce carbon emissions in smart city transportation, achieving 98.82% accuracy with a k-nearest neighbour (KNN) classifier as the optimal result, along with a 31.83% energy savings gain. Finally, a context-aware handover (HO) skipping scenario is analysed in order to improve overall quality of service (QoS) as a mobility management framework for next generation networks (NGNs).
The framework relies on passenger mobility, train trajectories, travelling time and frequency, network load and signal ratio data in the cardinal directions, i.e., North, East, West, and South (NEWS), achieving an optimum result of 94.51% with a support vector machine (SVM) classifier. These results were fed into HO skipping techniques to analyse coverage probability, throughput, and HO cost. This work is extended by a blockchain-enabled privacy preservation mechanism to provide an end-to-end secure platform for train passengers' mobility.
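As a rough illustration of the KNN-style classification used above for mobility prediction, the sketch below votes among the nearest labeled trips to predict the next serving cell. The features (normalized hour of day, NEWS heading) and labels are invented toy data, not the thesis dataset:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbour vote. train is [(feature_vector, label), ...]."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Invented toy features: (hour of day / 24, NEWS heading encoded 0..3, scaled by 1/3).
train = [
    ((8.0 / 24, 1 / 3), "cell_east"),   # morning commute heading east
    ((9.0 / 24, 1 / 3), "cell_east"),
    ((8.5 / 24, 1 / 3), "cell_east"),
    ((18.0 / 24, 2 / 3), "cell_west"),  # evening commute heading west
    ((19.0 / 24, 2 / 3), "cell_west"),
    ((18.5 / 24, 2 / 3), "cell_west"),
]
prediction = knn_predict(train, (8.2 / 24, 1 / 3))
```

The predicted next cell can then feed a HO-skipping decision: if the user is confidently predicted to traverse a cell quickly, the handover to it can be skipped to save signaling.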

The Aalborg Survey / Part 4 - Literature Study: Diverse Urban Spaces (DUS)


    Addressing training data sparsity and interpretability challenges in AI based cellular networks

    To meet the diverse and stringent communication requirements of emerging network use cases, zero-touch artificial intelligence (AI) based deep automation in cellular networks is envisioned. However, the full potential of AI in cellular networks remains hindered by two key challenges: (i) training data is not as freely available in cellular networks as in other fields where AI has made a profound impact, and (ii) current AI models tend to behave as black boxes, making operators reluctant to entrust the operation of multibillion-dollar mission-critical networks to a black-box AI engine that allows little insight into, or discovery of, the relationships between the configuration and optimization parameters and key performance indicators. This dissertation systematically addresses and proposes solutions to these two key problems faced by emerging networks. A framework for addressing the training data sparsity challenge in cellular networks is developed that can assist network operators and researchers in choosing the optimal data enrichment technique for different network scenarios, based on the available information. The framework encompasses classical interpolation techniques, such as inverse distance weighting and kriging; more advanced ML-based methods, such as transfer learning and generative adversarial networks; several new techniques, such as matrix completion theory and leveraging different types of network geometries; and simulators and testbeds, among others. The proposed framework will lead to more accurate ML models that rely on a sufficient amount of representative training data. Moreover, solutions are proposed to address the data sparsity challenge specifically in Minimization of Drive Tests (MDT) based automation approaches. MDT allows coverage to be estimated at the base station by exploiting measurement reports gathered by the user equipment, without the need for drive tests.
Thus, MDT is a key enabling feature for data- and artificial intelligence-driven autonomous operation and optimization in current and emerging cellular networks. However, to date, the utility of the MDT feature remains thwarted by issues such as the sparsity of user reports and user positioning inaccuracy. For the first time, this dissertation reveals the existence of an optimal bin width for coverage estimation in the presence of inaccurate user positioning, scarcity of user reports and quantization error. The presented framework can enable network operators to configure, for a given positioning accuracy and user density, the bin size that results in the most accurate MDT-based coverage estimation. The lack of interpretability in AI-enabled networks is addressed by proposing a first-of-its-kind neural network architecture that leverages analytical modeling, domain knowledge, big data and machine learning to turn black-box machine learning models into more interpretable ones. The proposed approach combines analytical modeling and domain knowledge to custom-design machine learning models, with the aim of moving towards interpretable machine learning models that not only require less training time but can also deal with issues such as sparsity of training data and determination of model hyperparameters. The approach is tested using both simulated and real data, and the results show that it outperforms existing mathematical models while remaining interpretable compared with black-box ML models. Thus, the proposed approach can be used to derive better mathematical models of complex systems. The findings from this dissertation can help solve the challenges in emerging AI-based cellular networks and thus aid in their design, operation and optimization.
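Of the classical interpolation techniques the enrichment framework covers, inverse distance weighting is the simplest to sketch: a coverage value at an unmeasured location is estimated as a distance-weighted average of nearby reports, so sparse MDT-style reports can be densified into a map. The report positions and RSRP values below are invented toy numbers:

```python
import math

def idw(known, query, power=2.0):
    """Inverse-distance-weighted estimate at an unmeasured location.
    known: [((x, y), rsrp_dbm), ...] sparse MDT-style reports (toy values)."""
    num = den = 0.0
    for pos, value in known:
        d = math.dist(pos, query)
        if d == 0.0:
            return value  # an exact report already exists at the query point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Invented sample: four reports at the corners of a 100 m x 100 m bin.
reports = [((0, 0), -80.0), ((100, 0), -90.0),
           ((0, 100), -85.0), ((100, 100), -95.0)]
estimate = idw(reports, (50, 50))  # at the center, all weights are equal
```

Because all four reports are equidistant from the center, the estimate reduces to their plain average; off-center queries weight the nearer reports more heavily, which is the behavior that makes IDW a reasonable baseline before moving to kriging or ML-based enrichment.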