
    Realistic Modeling of Handover Events in a Multi-Carrier 5G Network: A Preliminary Step Towards COP-KPI Relationship Realization

    The ever-increasing demand for mobile data traffic, along with new use cases, is set to make current cellular network technology obsolete and give rise to a newer and better one in the form of 5G. This emerging technology promises massive capacity, ultra-high reliability, and near-zero latency; however, it also brings additional complexity. 5G is expected to carry more than 5,000 configuration and optimization parameters (COPs). These COPs are the backbone of a network, as most Key Performance Indicators (KPIs) rely on their proper settings. To set these parameters optimally, it is imperative that the relationship between COPs and KPIs be understood. To date, however, this relationship is only partially understood. Mining the COP-KPI relationship is not a dead end: Machine Learning (ML) can be leveraged to learn KPI behavior as COPs change. Yet ML's full potential is limited by the lack of representative data in the wireless community to effectively train these models. Gathering such data is itself a challenge: real data from live networks are abundant yet not representative. Although simulators are a promising source of data, their value depends on how realistically and in what detail their functions are modeled and implemented. In this thesis, we present a realistic and comprehensive model of one of the most important functions of a wireless network: the handover function. In line with 3GPP standards, we model and implement more than 20 handover-related COPs. The model is incorporated into a Python-based simulator to generate data. Validation and evaluation demonstrate the model's accuracy and its effectiveness in capturing the real handover procedure. Use cases are also presented to show its ability to simulate different COP settings and their effects on KPIs. This thesis is presented as an initial step toward generating representative datasets for training machine learning models of the COP-KPI relationship.
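
    As an illustration of the kind of 3GPP handover logic such a model covers, below is a minimal Python sketch of the A3 event entry condition (neighbor becomes offset-better than serving) gated by a time-to-trigger, with the A3 offset, hysteresis, time-to-trigger, and cell individual offset as example COPs. The class names and simulator scaffolding are illustrative assumptions, not the thesis's actual implementation.

        # Simplified A3 entry condition (3GPP TS 36.331 terminology):
        # neighbor RSRP + CIO must exceed serving RSRP + offset + hysteresis
        # continuously for the time-to-trigger before a handover is fired.
        from dataclasses import dataclass

        @dataclass
        class A3Config:
            a3_offset_db: float = 3.0      # COP: A3 offset
            hysteresis_db: float = 1.0     # COP: hysteresis
            time_to_trigger_ms: int = 320  # COP: time-to-trigger (TTT)
            cio_db: float = 0.0            # COP: cell individual offset

        @dataclass
        class A3Monitor:
            cfg: A3Config
            elapsed_ms: int = 0

            def step(self, serving_rsrp_dbm, neigh_rsrp_dbm, dt_ms):
                """Return True once the A3 condition has held for the full TTT."""
                entry = (neigh_rsrp_dbm + self.cfg.cio_db
                         > serving_rsrp_dbm + self.cfg.a3_offset_db
                         + self.cfg.hysteresis_db)
                self.elapsed_ms = self.elapsed_ms + dt_ms if entry else 0
                return self.elapsed_ms >= self.cfg.time_to_trigger_ms

        # Example: a neighbor 6 dB stronger triggers after 320 ms (8 x 40 ms steps).
        mon = A3Monitor(A3Config())
        for _ in range(10):
            if mon.step(-100.0, -94.0, 40):
                print("A3 handover triggered")
                break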

    A Machine Learning based Framework for KPI Maximization in Emerging Networks using Mobility Parameters

    Current LTE networks face a plethora of Configuration and Optimization Parameters (COPs), both hard and soft, that are adjusted manually to manage the network and provide better Quality of Experience (QoE). With 5G in view, the number of these COPs is expected to reach 2,000 per site, making manual tuning to find their optimal combination an impossible feat. Alongside these thousands of COPs comes the anticipated network densification in emerging networks, which exacerbates the burden on network operators in managing and optimizing the network. Hence, we propose a machine learning-based framework combined with a heuristic technique to discover the optimal combination of two pertinent mobility COPs, Cell Individual Offset (CIO) and Handover Margin (HOM), that maximizes a specific Key Performance Indicator (KPI) such as the mean Signal to Interference and Noise Ratio (SINR) of all connected users. The first part of the framework leverages machine learning to predict the KPI of interest for several different combinations of CIO and HOM. The resulting predictions are then fed into a Genetic Algorithm (GA), which searches for the combination of the two parameters that yields the maximum mean SINR for all users. Performance of the framework is evaluated using several machine learning techniques, with the CatBoost algorithm yielding the best prediction performance. Meanwhile, the GA reveals the optimal parameter combination efficiently, converging three orders of magnitude faster than a brute-force approach.
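
    A minimal sketch of the two-stage idea follows: a learned surrogate predicts mean SINR from (CIO, HOM), and a small genetic algorithm searches the parameter space against that surrogate. The paper's best predictor is CatBoost; scikit-learn's GradientBoostingRegressor stands in here, and the synthetic training arrays, parameter bounds, and GA settings are assumptions for illustration.

        # Stage 1: surrogate model predicting mean SINR from (CIO, HOM).
        # Stage 2: tiny GA (truncation selection + Gaussian mutation)
        # searching for the (CIO, HOM) pair with highest predicted SINR.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(0)
        LO, HI = np.array([-6.0, 0.0]), np.array([6.0, 10.0])  # (CIO dB, HOM dB) bounds
        X = rng.uniform(LO, HI, size=(500, 2))   # sampled parameter settings
        y = rng.normal(10.0, 2.0, size=500)      # placeholder mean-SINR labels

        model = GradientBoostingRegressor().fit(X, y)

        def ga_search(predict, pop_size=30, generations=50, mut_std=0.5):
            pop = rng.uniform(LO, HI, size=(pop_size, 2))
            for _ in range(generations):
                elite = pop[np.argsort(predict(pop))[-pop_size // 2:]]  # keep best half
                children = np.clip(elite + rng.normal(0, mut_std, elite.shape), LO, HI)
                pop = np.vstack([elite, children])
            return pop[np.argmax(predict(pop))]

        best_cio, best_hom = ga_search(model.predict)
        print(f"best CIO {best_cio:.2f} dB, best HOM {best_hom:.2f} dB")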

    Towards addressing training data scarcity challenge in emerging radio access networks: a survey and framework

    The future of cellular networks is contingent on artificial intelligence (AI)-based automation, particularly for radio access network (RAN) operation, optimization, and troubleshooting. To achieve such zero-touch automation, a myriad of AI-based solutions are being proposed in the literature to model and optimize network behavior. However, to work reliably, AI-based automation requires a deluge of training data. Consequently, the success of the proposed AI solutions is limited by a fundamental challenge faced by the cellular network research community: the scarcity of training data. In this paper, we present an extensive review of classic and emerging techniques to address this challenge. We first identify the common data types in the RAN and their known use cases. We then present a taxonomized survey of techniques used in the literature to address training data scarcity for the various data types, followed by a framework for addressing the challenge. The proposed framework builds on available information and a combination of techniques, including interpolation, domain knowledge, generative adversarial networks, transfer learning, autoencoders, few-shot learning, simulators, and testbeds. Potential new techniques to enrich scarce data in cellular networks are also proposed, such as matrix completion theory and domain-knowledge-based techniques leveraging different types of network geometries and parameters. In addition, an overview of state-of-the-art simulators and testbeds is presented to make readers aware of current and emerging platforms for accessing real data. The extensive survey of techniques for addressing training data scarcity, combined with the proposed framework for selecting a suitable technique for a given data type, can assist researchers and network operators in choosing appropriate methods for applying AI to radio access network automation.
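
    As a concrete instance of one surveyed direction, the sketch below applies a generic soft-impute-style low-rank matrix completion to a sparsely observed cell-by-KPI measurement matrix. It is an illustrative stand-in under assumed synthetic data, not a specific algorithm from the paper.

        # Fill missing entries of a cell-by-KPI matrix by alternating between
        # a rank-r SVD fit and re-imposing the observed entries.
        import numpy as np

        def complete_low_rank(M, mask, rank=2, iters=100):
            """mask is True where M was actually measured."""
            filled = np.where(mask, M, M[mask].mean())   # init missing with mean
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(filled, full_matrices=False)
                low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
                filled = np.where(mask, M, low_rank)     # keep measurements fixed
            return filled

        # Toy example: 20 cells x 5 KPIs, roughly 40% of entries missing.
        rng = np.random.default_rng(1)
        truth = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 5))  # rank-2 ground truth
        mask = rng.random(truth.shape) > 0.4
        recovered = complete_low_rank(np.where(mask, truth, 0.0), mask)
        print("RMSE on missing entries:",
              np.sqrt(((recovered - truth)[~mask] ** 2).mean()))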

    Communication Requirements in 5G-Enabled Healthcare Applications: Review and Considerations

    Fifth-generation (5G) mobile communication technology can enable novel healthcare applications and augment existing ones. However, 5G-enabled healthcare applications demand diverse technical requirements for radio communication. Knowledge of these requirements is important for developers, network providers, and regulatory authorities in the healthcare sector to facilitate safe and effective healthcare. In this paper, we review, identify, describe, and compare the requirements for communication key performance indicators in relevant healthcare use cases, including remote robotic-assisted surgery, connected ambulances, wearable and implantable devices, and service robotics for assisted living, with a focus on quantitative requirements. We also compare 5G healthcare requirements with the current state of 5G capabilities. Finally, we identify gaps in the existing literature and highlight considerations for this space.

    A data-driven framework for inter-frequency handover failure prediction and mitigation

    With 5G already deployed, handover-related challenges are exacerbated by dense base station deployments operating on a motley of frequencies. In this paper, we present and evaluate a novel data-driven solution to reduce inter-frequency handover failures (HOFs), hereafter referred to as TORIS (Transmit Power Tuning-based Handover Success Rate Improvement Scheme). TORIS is designed by developing and integrating two sub-solutions. The first sub-solution is an Artificial Intelligence (AI)-based model to predict inter-frequency HOFs. In this model, we achieve higher-than-state-of-the-art accuracy by leveraging two approaches. First, we devise a novel feature set by exploiting domain knowledge gathered from extensive drive test data analysis. Second, we exploit an extensive set of data augmentation techniques to address the class imbalance in training the HOF prediction model. These techniques include a Chow-Liu Bayesian Network and a Generative Adversarial Network, further improved by focusing sampling only on borderline examples. We also compare the performance of state-of-the-art AI models for predicting HOFs with and without augmented data; results show that AdaBoost yields the best performance. The second sub-solution is a heuristic scheme to tune the transmit (Tx) power of the serving and target cells. Unlike state-of-the-art approaches for HOF reduction that tune the cell individual offset, TORIS targets the main cause of HOFs, i.e., poor signal quality and propagation conditions, by proactively varying the Tx power of the cells whenever a HOF is anticipated. Results show that TORIS outperforms the state-of-the-art HOF reduction solution, yielding a 40%-75% reduction in HOFs.
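
    The sketch below illustrates the shape of the first sub-solution: rebalance the rare handover-failure class, then fit AdaBoost, the paper's best performer. Borderline-SMOTE from imbalanced-learn stands in for the paper's Chow-Liu Bayesian Network and borderline-focused GAN augmentation, and the synthetic features and labels are placeholders for drive-test data.

        # Oversample the minority (HOF) class near the class boundary, then
        # train an AdaBoost classifier to predict inter-frequency HOFs.
        import numpy as np
        from imblearn.over_sampling import BorderlineSMOTE
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        X = rng.normal(size=(2000, 8))               # placeholder radio features
        y = (rng.random(2000) < 0.05).astype(int)    # ~5% HOF class: heavy imbalance

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2)
        X_bal, y_bal = BorderlineSMOTE().fit_resample(X_tr, y_tr)  # augment minority
        clf = AdaBoostClassifier(n_estimators=200).fit(X_bal, y_bal)
        print("held-out accuracy:", clf.score(X_te, y_te))  # paper reports fuller metrics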

    A data-driven framework for QoE-aware intelligent EN-DC activation

    In emerging 5G networks, the User Equipment (UE) traditionally camps on the 4G network. Later, if the user requests a 5G service, it can camp simultaneously on 4G and 5G using the E-UTRAN New Radio Dual Connectivity (EN-DC) approach. In EN-DC, poor radio conditions in either the 4G or 5G network can be detrimental to user Quality of Experience (QoE). Although operators want to maximize EN-DC activation to fully utilize the 5G network, sub-optimal configuration of the parameters that turn on EN-DC can compromise key performance indicators through excessive radio link failures (RLFs) or voice muting. While maximizing EN-DC activation is clearly desirable for maximizing the 5G network's utility, avoiding RLFs and voice muting is vital to maintaining QoE. To balance this tradeoff, this paper presents the first solution for optimally configuring the EN-DC activation parameters. We collect two datasets from a real network to develop machine learning models that predict RLF and muting, respectively. We also investigate and compare various under-sampling, over-sampling, and synthetic data generation techniques, including Tomek links and Generative Adversarial Networks, for their potential to address the data imbalance inherent in real network training data. Leveraging these models, we formulate and solve two QoE-aware optimization problems that maximize EN-DC activation while minimizing RLF or voice muting. System-level simulation results show that, compared to a state-of-the-art solution that does not account for RLF or voice-muting risk in EN-DC activation, the proposed solution intelligently determines EN-DC activation criteria that minimize the risk of RLF and voice muting while giving the operator's desired level of priority to maximizing 5G network utilization.
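
    A toy end-to-end version of this pipeline is sketched below: clean class overlap with Tomek links, train an RLF predictor, then choose the most permissive EN-DC activation threshold whose predicted RLF risk stays within a budget. The single RSRP-threshold activation criterion, the synthetic data, and the 5% risk budget are all illustrative assumptions; the paper's actual formulation is richer.

        # Undersample with Tomek links, fit an RLF classifier, then sweep a
        # candidate EN-DC activation threshold on serving 4G RSRP: pick the
        # lowest (most permissive) threshold whose RLF risk is within budget.
        import numpy as np
        from imblearn.under_sampling import TomekLinks
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)
        rsrp = rng.uniform(-120, -70, 5000)                  # serving 4G RSRP (dBm)
        p_rlf = 1.0 / (1.0 + np.exp((rsrp + 110.0) / 3.0))   # worse RSRP -> more RLF
        rlf = (rng.random(5000) < p_rlf).astype(int)

        X_bal, y_bal = TomekLinks().fit_resample(rsrp.reshape(-1, 1), rlf)
        clf = RandomForestClassifier().fit(X_bal, y_bal)

        thresholds = np.arange(-115.0, -85.0, 1.0)
        risk = clf.predict_proba(thresholds.reshape(-1, 1))[:, 1]
        feasible = thresholds[risk <= 0.05]                  # 5% RLF risk budget
        print("EN-DC activation RSRP threshold (dBm):",
              feasible.min() if feasible.size else "none feasible")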

    A data-driven self-optimization solution for inter-frequency mobility parameters in emerging networks

    Densification and multi-band operation mean that inter-frequency handovers can become a bottleneck for mobile user experience in emerging cellular networks. The challenge is aggravated by the fact that no method exists to optimize the key inter-frequency handover parameters, namely the A5 time-to-trigger, A5-threshold1, and A5-threshold2. This paper presents the first study to analyze and optimize the three A5 parameters for jointly maximizing three key performance indicators that reflect mobile user experience: handover success rate (HOSR), reference signal received power (RSRP), and signal-to-interference-plus-noise ratio (SINR). As analytical modeling cannot capture the system-level complexity, we exploit a data-driven approach. To minimize the training data generation time, we exploit Shapley additive explanations (SHAP) sensitivity analysis. The insights from the SHAP analysis allow selective collection of the training data, thereby easing implementation of the proposed solution in a real network. We show that the joint RSRP, SINR, and HOSR optimization problem is non-convex and solve it using a genetic algorithm (GA). We then propose an intelligent mutation scheme for the GA, which makes the solution 5x faster than the legacy GA and 21x faster than brute-force search. This paper thus presents the first solution implementing computationally efficient, closed-loop self-optimization of inter-frequency mobility parameters.
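
    The SHAP-guided step can be pictured with the short sketch below: fit a surrogate of HOSR on the three A5 parameters and rank their sensitivity by mean absolute SHAP value, so training data collection can concentrate on the parameters that matter most. The synthetic response surface and the shap/TreeExplainer usage are illustrative assumptions, not the paper's actual model.

        # Rank A5 parameter sensitivity with SHAP on a surrogate HOSR model.
        import numpy as np
        import shap
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(4)
        # Columns: A5 time-to-trigger (ms), A5-threshold1 (dBm), A5-threshold2 (dBm)
        X = rng.uniform([40.0, -110.0, -110.0], [640.0, -80.0, -80.0], size=(1000, 3))
        hosr = 0.9 - 0.001 * np.abs(X[:, 2] + 95.0) + rng.normal(0, 0.01, 1000)

        model = RandomForestRegressor().fit(X, hosr)
        shap_values = shap.TreeExplainer(model).shap_values(X)
        print("mean |SHAP| per A5 parameter:", np.abs(shap_values).mean(axis=0))
        # -> A5-threshold2 dominates in this toy surface, so data collection
        #    would focus on varying it.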