685 research outputs found

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome current limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self-Organizing Network (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks but also a classification of each surveyed paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use-case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work discusses future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.

    A PARADIGM SHIFTING APPROACH IN SON FOR FUTURE CELLULAR NETWORKS

    The race to next generation cellular networks is on with a general consensus in academia and industry that massive densification orchestrated by self-organizing networks (SONs) is the cost-effective solution to the impending mobile capacity crunch. While the research on SON commenced a decade ago and is still ongoing, the current form (i.e., the reactive mode of operation, conflict-prone design, limited degree of freedom and lack of intelligence) hinders the current SON paradigm from meeting the requirements of 5G. The ambitious quality of experience (QoE) requirements and the emerging multifarious vision of 5G, along with the associated scale of complexity and cost, demand a significantly different, if not totally new, approach to SONs in order to make 5G technically as well as financially feasible. This dissertation addresses these limitations of state-of-the-art SONs. It first presents a generic low-complexity optimization framework to allow for the agile, on-line, multi-objective optimization of future mobile cellular networks (MCNs) through only top-level policy input that prioritizes otherwise conflicting key performance indicators (KPIs) such as capacity, QoE, and power consumption. The hybrid, semi-analytical approach can be used for a wide range of cellular optimization scenarios with low complexity. The dissertation then presents two novel, user-mobility, prediction-based, proactive self-optimization frameworks (AURORA and OPERA) to transform mobility from a challenge into an advantage. The proposed frameworks leverage mobility to overcome the inherent reactiveness of state-of-the-art self-optimization schemes to meet the extremely low latency and high QoE expected from future cellular networks vis-à-vis 5G and beyond. The proactiveness stems from the proposed frameworks’ novel capability of utilizing past hand-over (HO) traces to determine future cell loads instead of observing changes in cell loads passively and then reacting to them. 
A semi-Markov renewal process is leveraged to build a model that can predict the cell of the next HO and the time of the HO for each user. A low-complexity algorithm has been developed to transform the predicted mobility attributes to user-coordinate-level resolution. The learned knowledge base is used to predict the user distribution among cells. This prediction is then used to formulate a novel (i) proactive energy saving (ES) optimization problem (AURORA) that proactively schedules cell sleep cycles and (ii) proactive load balancing (LB) optimization problem (OPERA). The proposed frameworks also incorporate the effect of cell individual offset (CIO) for balancing the load among cells, and they thus exploit an additional ultra-dense network (UDN)-specific mechanism to ensure QoE while maximizing ES and/or LB. The frameworks also incorporate capacity and coverage constraints and a load-aware association strategy for ensuring the conflict-free operation of the ES, LB, and coverage and capacity optimization (CCO) SON functions. Although the resulting optimization problems are combinatorial and NP-hard, proactive prediction of cell loads instead of reactive measurement allows ample time for a combination of heuristics such as genetic programming and pattern search to find solutions with high ES and LB yields compared to the state of the art. To address the challenge of the significantly higher cell outage rates anticipated in 5G and beyond, due to higher operational complexity and cell density than in legacy networks, the dissertation's fourth key contribution is a stochastic analytical model to analyze the effects of the arrival of faults on the reliability behavior of a cellular network. Assuming exponential distributions for failures and recovery, a reliability model is developed using the continuous-time Markov chain (CTMC) process. 
Unlike previous studies on network reliability, the proposed model is not limited to structural aspects of base stations (BSs) and takes into account diverse potential fault scenarios; it is also capable of predicting the expected time of the first occurrence of a fault and the long-term reliability behavior of the BS. The contributions of this dissertation mark a paradigm shift from the reactive, semi-manual, sub-optimal SON towards a conflict-free, agile, proactive SON. By paving the way for future MCNs' commercial and technical viability, the new SON paradigm presented in this dissertation can act as a key enabler for next-generation MCNs.
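    As a rough illustration of the CTMC-based reliability analysis described in this abstract, the sketch below solves for the stationary distribution of a small base-station fault model. The states, rates, and transition structure are invented placeholders for this example; only the exponential failure/recovery assumption comes from the abstract.

```python
# Hypothetical 3-state CTMC for a base station: the generator matrix,
# states, and rates below are illustrative, not the dissertation's model.
import numpy as np

def stationary_distribution(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for a CTMC generator matrix Q."""
    n = Q.shape[0]
    # Replace one (redundant) balance equation with the normalisation row.
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

# States: 0 = operational, 1 = degraded (partial fault), 2 = full outage.
lam1, lam2 = 0.02, 0.005   # assumed fault arrival rates (per hour)
mu1, mu2 = 0.5, 0.1        # assumed recovery rates (per hour)
Q = np.array([
    [-(lam1 + lam2), lam1,          lam2],
    [mu1,            -(mu1 + lam2), lam2],
    [mu2,            0.0,           -mu2],
])
pi = stationary_distribution(Q)
availability = pi[0] + pi[1]   # long-run fraction of time the BS serves users
```

    For a two-state up/down model the same routine reproduces the textbook availability mu/(lambda + mu), which is a quick sanity check on the linear-algebra setup.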

    A New Paradigm for Proactive Self-Healing in Future Self-Organizing Mobile Cellular Networks

    Mobile cellular network operators spend nearly a quarter of their revenue on network management and maintenance. Remarkably, a significant proportion of that budget is spent on resolving outages that degrade or disrupt cellular services. Historically, operators have mainly relied on human expertise to identify, diagnose and resolve such outages while also compensating for them in the short term. However, with ambitious quality-of-experience expectations from 5th generation and beyond mobile cellular networks spurring research towards technologies such as ultra-dense heterogeneous networks and millimeter wave spectrum utilization, discovering and compensating for coverage lapses in future networks will be a major challenge. Numerous studies have explored heuristic, analytical and machine learning-based solutions to autonomously detect, diagnose and compensate for cell outages in legacy mobile cellular networks, a branch of research known as self-healing. This dissertation focuses on self-healing techniques for future mobile cellular networks, with special focus on the outage detection and avoidance components of self-healing. Network outages can be classified into two primary types: 1) full and 2) partial. Full outages result from failed soft or hard components of network entities, while partial outages are generally a consequence of parametric misconfiguration. To this end, chapter 2 of this dissertation is dedicated to a detailed survey of research on detecting, diagnosing and compensating for full outages, as well as a detailed analysis of studies on proactive outage avoidance schemes and their challenges. A key observation from the analysis of state-of-the-art outage detection techniques is their dependence on full network coverage data, their susceptibility to noise or randomness in the data, and their inability to characterize outages in both the spatial and temporal domains. To overcome these limitations, chapters 3 and 4 present two unique and novel outage detection techniques. 
Chapter 3 presents an outage detection technique based on entropy field decomposition, which combines information field theory and entropy spectrum pathways theory and is robust to noise variance. Chapter 4 presents a deep learning neural network algorithm which is robust to data sparsity and compares it with entropy field decomposition and other state-of-the-art machine learning-based outage detection algorithms, including support vector machines, K-means clustering, independent component analysis and deep auto-encoders. Based on the insights obtained regarding the impact of partial outages, chapter 5 presents a complete framework for 5th generation and beyond mobile cellular networks that is designed to avoid partial outages caused by parametric misconfiguration. The power of the proposed framework is demonstrated by leveraging it to design a solution that tackles one of the most common problems associated with ultra-dense heterogeneous networks, namely the imbalanced load among small and macro cells and the consequent poor resource utilization. The optimization problem is formulated as a function of two hard parameters, namely antenna tilt and transmit power, and a soft parameter, cell individual offset, which directly affect the coverage, capacity and load. The resulting solution is a combination of the otherwise conflicting coverage and capacity optimization and load balancing self-organizing network functions.
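    To make the K-means baseline mentioned in this abstract concrete, here is a minimal sketch of clustering-based outage detection. The feature (per-cell mean reported RSRP) and the flagging rule are simplifying assumptions made for illustration, not the dissertation's actual pipeline.

```python
# Toy K-means outage flagging: cells whose mean RSRP falls in the
# low-power cluster are flagged as outage candidates (illustrative only).
import numpy as np

def kmeans_1d(x, k=2, iters=50, seed=0):
    """Tiny 1-D k-means; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):            # keep old centroid if empty
                centroids[j] = x[labels == j].mean()
    return centroids, labels

def detect_outage_cells(cell_rsrp_dbm, k=2):
    """Flag cells whose mean reported RSRP lands in the lowest cluster."""
    x = np.asarray(cell_rsrp_dbm, dtype=float)
    centroids, labels = kmeans_1d(x, k=k)
    low_cluster = int(np.argmin(centroids))
    return np.where(labels == low_cluster)[0]

# Synthetic example: cells 0-4 healthy (~-80 dBm), cell 5 degraded (~-120 dBm).
rsrp = [-79.0, -82.0, -80.5, -78.0, -81.0, -120.0]
flagged = detect_outage_cells(rsrp)
```

    A real detector would of course use richer spatio-temporal features; the point here is only the clustering mechanic the abstract compares against.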

    Exploiting user contention to optimize proactive resource allocation in future networks

    In order to provide ubiquitous communication, seamless connectivity is now required in all environments, including highly mobile networks. By using vertical handover techniques it is possible to provide uninterrupted communication, as connections are dynamically switched between wireless networks while users move around. However, in a highly mobile environment, traditional reactive approaches to handover are inadequate. Therefore, proactive handover techniques, in which mobile nodes attempt to determine the best time and place to hand over to local networks, are actively being investigated in the context of next-generation mobile networks. The Y-Comm Framework, which investigates proactive handover techniques, has defined two key parameters for any given network topology: Time Before Handover and Network Dwell Time. Using this approach, it is possible to enhance resource management in common networks using probabilistic mechanisms, because contention for resources can now be expressed in terms of No Contention, Partial Contention and Full Contention. As network resources are shared between many users, resource management must be a key part of any communication system: it is needed to provide seamless communication and to ensure that applications and servers receive their required Quality-of-Service. In this thesis, the contention for channel resources being allocated to mobile nodes is analysed. The work presents a new methodology to support proactive resource allocation for emerging future networks such as Vehicular Ad-Hoc Networks (VANETs) by allowing the probability of contention to be calculated from user demand for network resources. These results are verified using simulation. In addition, this proactive approach is further enhanced by the use of a contention queue to detect contention between incoming requests and those waiting for service. 
The proposed approach has been applied to a vehicular testbed, and the results presented show that it can improve overall network performance in mobile heterogeneous environments. The results show that the analysis of user contention provides a proactive mechanism to improve the performance of resource allocation in mobile networks.
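    A simple way to picture the contention-probability calculation this thesis describes is a binomial model: each of n dwelling nodes independently requests a channel with probability p, and contention arises when requests exceed the c free channels. This model and its parameters are assumptions for illustration; the thesis's own derivation (including the No/Partial/Full categories) is more detailed.

```python
# Illustrative binomial contention model (assumed, not the thesis's exact one).
from math import comb

def request_pmf(n, p):
    """P(exactly k of n nodes request a channel), k = 0..n (binomial)."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def p_contention(n, p, c):
    """Probability that more than c of the n nodes request simultaneously."""
    pmf = request_pmf(n, p)
    return sum(pmf[c + 1:])

# Example: 20 vehicles, each requesting with probability 0.1,
# contending for 3 free channels in the target network.
risk = p_contention(20, 0.1, 3)
```

    The same machinery extends naturally to a queueing view, where requests that cannot be served immediately join the contention queue mentioned in the abstract.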

    User mobility prediction and management using machine learning

    Next-generation mobile networks (NGMNs) are envisioned to overcome current user mobility limitations while improving network performance. Some of the challenges envisioned for mobility management in future mobile networks are: addressing massive traffic growth bottlenecks; providing better quality and experience to end users; supporting ultra-high data rates; ensuring ultra-low latency; and enabling seamless handovers (HOs) from one base station (BS) to another. Thus, in order for future networks to manage user mobility under all of the stringent requirements mentioned, artificial intelligence (AI) is deemed to play a key role in automating the end-to-end process through machine learning (ML). The objectives of this thesis are to explore user mobility prediction and management use-cases using ML. First, a background and literature review is presented, covering an overview of current mobile networks and ML-driven applications for user mobility and its management. Next, use-cases of mobility prediction in dense mobile networks are analysed and optimised with the use of ML algorithms. An overall framework test accuracy of 91.17% was obtained with an artificial neural network (ANN), in comparison to all other mobility prediction algorithms considered. Furthermore, a concept of mobility-prediction-based energy consumption is discussed to automate the classification of user mobility and reduce carbon emissions in smart-city transportation, achieving 98.82% accuracy with a k-nearest neighbour (KNN) classifier as the optimal result, along with a 31.83% energy-savings gain. Finally, a context-aware handover (HO) skipping scenario is analysed in order to improve overall quality of service (QoS) as a framework for mobility management in next-generation networks (NGNs). 
The framework relies on passenger mobility, train trajectories, travelling time and frequency, network load and signal-ratio data in the cardinal directions, i.e., North, East, West, and South (NEWS), achieving an optimum result of 94.51% through a support vector machine (SVM) classifier. These results were fed into HO skipping techniques to analyse coverage probability, throughput, and HO cost. This work is extended by a blockchain-enabled privacy-preservation mechanism to provide an end-to-end secure platform throughout train passengers' mobility.
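    As a small illustration of the KNN classification stage this abstract reports results for, the sketch below classifies a mobility mode from two features. The features (mean speed, handover rate), labels, and toy data are hypothetical; the thesis's actual dataset and feature set are richer.

```python
# Minimal k-nearest-neighbour mobility-mode classifier (illustrative data).
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]
    values, counts = np.unique(y_train[nearest], return_counts=True)
    return values[np.argmax(counts)]

# Features: [mean speed (m/s), handovers per minute]; labels: 0=walk, 1=train.
X = np.array([[1.2, 0.1], [1.5, 0.2], [0.9, 0.0],
              [25.0, 2.0], [30.0, 2.5], [22.0, 1.8]])
y = np.array([0, 0, 0, 1, 1, 1])
mode = knn_predict(X, y, np.array([27.0, 2.2]))   # high speed -> train-like
```

    In practice the features would be normalised before computing distances, since speed and handover rate live on different scales.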

    Cognitive networking for next generation of cellular communication systems

    This thesis presents a comprehensive study of cognitive networking for cellular networks, with contributions that enable them to be more dynamic, agile, and efficient. To achieve this, machine learning (ML) algorithms, a subset of artificial intelligence, are employed to bring such cognition to cellular networks. More specifically, three major branches of ML, namely supervised, unsupervised, and reinforcement learning (RL), are utilised for various purposes: unsupervised learning is used for data clustering, while supervised learning is employed for predictions of future behaviours of networks/users. RL, on the other hand, is utilised for optimisation purposes due to its inherent adaptability and its minimal need for knowledge of the environment. Energy optimisation, capacity enhancement, and spectrum access are identified as the primary design challenges for cellular networks, given that they are envisioned to play crucial roles in 5G and beyond due to the increased demand in the number of connected devices as well as data rates. Each design challenge and its corresponding proposed solution are discussed thoroughly in separate chapters. Regarding energy optimisation, user-side energy consumption is investigated by considering Internet of things (IoT) networks. An RL-based intelligent model, which jointly optimises the wireless connection type and the data processing entity, is proposed. In particular, a Q-learning algorithm is developed through which the energy consumption of an IoT device is minimised while keeping the requirements of the applications--in terms of response time and security--satisfied. The proposed methodology achieves a 0% normalised joint cost--where all the considered metrics are combined--while the benchmarks averaged 54.84%. 
Next, the energy consumption of radio access networks (RANs) is targeted, and a traffic-aware cell switching algorithm is designed to reduce the energy consumption of a RAN without compromising user quality-of-service (QoS). The proposed technique employs a SARSA algorithm with value function approximation, since conventional RL methods struggle with problems that have huge state spaces. The results reveal that up to a 52% gain in total energy consumption is achieved with the proposed technique, and the gain is observed to shrink as the scenario becomes more realistic. Capacity enhancement, on the other hand, is studied from two different perspectives, namely mobility management and unmanned aerial vehicle (UAV) assistance. Towards that end, a predictive handover (HO) mechanism is designed for mobility management in cellular networks by identifying two major issues of Markov chain based HO predictions. First, revisits--defined as situations whereby a user visits the same cell more than once within the same day--are diagnosed as causing similar transition probabilities, which in turn increases the likelihood of making incorrect predictions. This problem is addressed with a structural change: rather than storing a 2-D transition matrix, it is proposed to store a 3-D one that also includes HO orders. The obtained results show that the 3-D transition matrix is capable of reducing the HO signalling cost by up to 25.37%, a figure observed to drop with increasing randomness in the data set. Second, making HO predictions with insufficient criteria is identified as another issue with conventional Markov chain based predictors. Thus, a prediction confidence level is derived, establishing a lower bound below which HO predictions are not performed, since predictions are not always advantageous owing to the HO signalling cost incurred by incorrect ones. 
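    The 2-D versus 3-D transition-matrix idea above can be sketched with count tables: a first-order ("2-D") predictor conditions only on the current cell, while an order-aware ("3-D") predictor also conditions on the previous cell, which disambiguates revisits. The cell IDs and the toy commuter trace are invented for this example.

```python
# Sketch of first- vs second-order Markov handover prediction (toy data).
from collections import Counter, defaultdict

def train(trace):
    first = defaultdict(Counter)    # current cell        -> next-cell counts
    second = defaultdict(Counter)   # (previous, current) -> next-cell counts
    for prev, cur, nxt in zip(trace, trace[1:], trace[2:]):
        first[cur][nxt] += 1
        second[(prev, cur)][nxt] += 1
    return first, second

def predict(first, second, prev, cur):
    """Prefer the order-aware (3-D) statistics when available."""
    table = second.get((prev, cur)) or first.get(cur)
    return table.most_common(1)[0][0] if table else None

# A commuter revisits cell B twice a day: A->B->C in the morning and
# C->B->A in the evening. From B alone the next cell is a 50/50 tie;
# conditioning on the previous cell resolves it.
day = ['A', 'B', 'C'] + ['C', 'B', 'A']
first, second = train(day * 5)
```

    The memory cost grows with the extra dimension, which is the trade-off the abstract's signalling-cost results quantify.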
The outcomes of the simulations confirm that the derived confidence-level mechanism helps improve the prediction accuracy by up to 8.23%. Furthermore, still considering capacity enhancement, a UAV-assisted cellular network is considered, and an unsupervised learning-based UAV positioning algorithm is presented. A comprehensive analysis is conducted of the impacts of the overlapping footprints of multiple UAVs, which are controlled through their altitudes. The developed k-means clustering based UAV positioning approach is shown to reduce the number of users in outage by up to 80.47% when compared to the benchmark symmetric deployment. Lastly, a QoS-aware dynamic spectrum access approach is developed in order to tackle challenges related to spectrum access, wherein all the aforementioned types of ML methods are employed. More specifically, by leveraging future traffic load predictions of radio access technologies (RATs) and a Q-learning algorithm, a novel proactive spectrum sensing technique is introduced. As such, two different sensing strategies are developed; the first focuses solely on sensing-latency reduction, while the second jointly optimises sensing latency and user requirements. In particular, the proposed Q-learning algorithm takes the future load predictions of the RATs and the requirements of secondary users--in terms of mobility and bandwidth--as inputs and directs the users to the spectrum of the optimum RAT to perform sensing. The strategy to be employed can be selected based on the needs of the application: if latency is the only concern, the first strategy should be selected, because the second strategy is computationally more demanding. By employing the second strategy, however, sensing latency is reduced while other user requirements are also satisfied. 
The simulation results demonstrate that, compared to random sensing, the first strategy reduces the sensing latency by 85.25%, while the second strategy enhances the full-satisfaction rate, where both the mobility and bandwidth requirements of the user are simultaneously satisfied, by 95.7%. In summary, three key design challenges of the next generation of cellular networks are identified and addressed via the concept of cognitive networking, providing a utilitarian tool for mobile network operators to plug into their systems. The proposed solutions can be generalised to various network scenarios owing to the sophisticated ML implementations, which renders them both practical and sustainable.
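    The Q-learning-driven RAT selection described above can be pictured with a tiny tabular agent: states combine a predicted load level with a user requirement, actions are candidate RATs to sense, and the reward favours sensing lightly loaded spectrum that matches the user's needs. The states, reward function, and environment here are entirely invented for illustration, not the thesis's simulation setup.

```python
# Toy tabular Q-learning for RAT/spectrum selection (hypothetical environment).
import random

random.seed(0)
STATES = [('low_load', 'low_bw'), ('low_load', 'high_bw'),
          ('high_load', 'low_bw'), ('high_load', 'high_bw')]
ACTIONS = ['RAT-A', 'RAT-B']
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    """Assumed reward: RAT-A suits low predicted load; RAT-B suits high load,
    with a small bonus for high-bandwidth users (plus observation noise)."""
    load, bw = state
    base = 1.0 if (action == 'RAT-A') == (load == 'low_load') else -1.0
    bonus = 0.5 if action == 'RAT-B' and bw == 'high_bw' else 0.0
    return base + bonus + random.gauss(0, 0.1)

alpha, eps = 0.1, 0.2
for _ in range(3000):
    s = random.choice(STATES)                       # states arrive i.i.d. here
    if random.random() < eps:                       # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)]) # contextual-bandit update

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
```

    With episodes of length one there is no bootstrapped next-state term, so the update reduces to a contextual bandit; the thesis's full formulation additionally folds in sensing latency and mobility.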

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.