26 research outputs found

    Machine Learning Meets Communication Networks: Current Trends and Future Challenges

    The growing network density and unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard in different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is organized along: i) the three network layers, i.e., the physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), and Network Functions Virtualization (NFV), together with a brief overview of ML-based network security. Important future research challenges are identified and presented to help spur further research in key areas in this direction.

    Admission Control Optimisation for QoS and QoE Enhancement in Future Networks

    Recent exponential growth in demand for traffic heterogeneity support and in the number of associated devices has considerably increased demand for network resources and induced numerous challenges for networks, such as bottleneck congestion and inefficient admission control and resource allocation. Challenges such as these degrade network Quality of Service (QoS) and user-perceived Quality of Experience (QoE). This work studies admission control from various perspectives. Two novel single-objective optimisation-based admission control models, Dynamic Slice Allocation and Admission Control (DSAAC) and Signalling and Admission Control (SAC), are presented to enhance the Grade of Service (GoS) of future limited-capacity networks and to optimise control signalling, respectively. DSAAC is an integrated model in which a cost-estimation function based on user demand and network capacity quantifies resource allocation among users. Moreover, to maximise resource utility, adjustable minimum and maximum slice resource bounds are derived. For the case of user blocking from the primary slice due to congestion or resource scarcity, a set of optimisation algorithms on inter-slice admission control, resource allocation, and the adaptability of slice elasticity is proposed. The novel SAC model uses an unsupervised learning technique (i.e. ranking-based clustering) for optimal clustering based on users’ homogeneous demand characteristics to minimise signalling redundancy in the access network. Reducing redundant signalling lessens the additional burden on the network in terms of unnecessary resource utilisation and computational time. Moreover, dynamically reconfigurable QoE-based slice performance bounds are also derived in the SAC model from multiple demand characteristics for clustered user admission to the optimal network. A set of optimisation algorithms is also proposed to attain efficient slice allocation and enhance users’ QoE by assessing the capability of slice QoE elasticity. An enhancement of the SAC model is proposed through a novel multi-objective optimisation model named Edge Redundancy Minimisation and Admission Control (E-RMAC). The E-RMAC model for the first time considers the issue of redundant signalling between the edge and core networks; it minimises redundant signalling using two classical unsupervised learning algorithms, K-means and ranking-based clustering, and maximises the efficiency of the link (bandwidth resources) between the edge and core networks. For multi-operator environments such as Open-RAN, a novel Forecasting and Admission Control (FAC) model for tenant-aware network selection and configuration is proposed. The model features a dynamic demand-estimation scheme embedded with fuzzy-logic-based optimisation for optimal network selection and admission control. FAC for the first time considers the coexistence of various heterogeneous cellular technologies (2G, 3G, 4G, and 5G) and their integration to enhance overall network throughput through efficient resource allocation and utilisation within a multi-operator environment. A QoS/QoE-based service monitoring feature is also presented to update the demand estimates with the support of a forecasting modifier. This service monitoring feature helps allocate resources to tenants closer to their actual demand, improving tenant-acquired QoE and overall network performance.
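    To make the demand-clustering idea above concrete, the following is a minimal sketch (not the thesis's algorithm) of grouping users by homogeneous demand characteristics with K-means so that one aggregated signalling message per cluster replaces per-user signalling; the feature set, units, scaling, and cluster count are illustrative assumptions.

```python
# Illustrative only: cluster users by demand profile to aggregate signalling.
# Features, units, and the cluster count are assumptions, not thesis values.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row: (requested bandwidth [Mbps], latency budget [ms], reliability target)
user_demands = rng.uniform([1, 5, 0.90], [100, 100, 0.99999], size=(200, 3))

# Min-max normalise so no single feature dominates the distance metric.
spread = np.ptp(user_demands, axis=0)
normalised = (user_demands - user_demands.min(axis=0)) / spread

k = 5  # assumed number of homogeneous demand groups
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(normalised)

# One aggregated signalling message per cluster instead of one per user.
print(f"signalling messages: {len(user_demands)} per-user -> {k} per-cluster")
```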
    Furthermore, a novel and dynamic admission control model named Slice Congestion and Admission Control (SCAC) is also presented in this thesis. SCAC employs machine learning (i.e. unsupervised, reinforcement, and transfer learning) and multi-objective optimisation techniques (i.e. the Non-dominated Sorting Genetic Algorithm II) to minimise bottleneck and intra-slice congestion. Knowledge transfer among requests, in the form of coefficients, is employed for the first time for optimal slice-request queuing. A unified cost-estimation function is also derived in this model for slice selection to ensure fairness in slice request admission. In view of instantaneous network circumstances and load, a reinforcement learning-based admission control policy is established for taking appropriate action on guaranteed, soft, and best-effort slice request admissions. Intra-slice and inter-slice resource allocation, along with the adaptability of slice elasticity, are also proposed to maximise the slice acceptance ratio and resource utilisation. Extensive simulation results are obtained and compared with similar models found in the literature. The proposed E-RMAC model is 35% better at reducing redundant signalling between the edge and core networks than recent work. The E-RMAC model reduces the complexity from O(U) to O(R) for service signalling and to O(N) for resource signalling, which represents a significant saving in uplink control-plane signalling and link capacity compared with results in the existing literature. Similarly, the SCAC model reduces bottleneck congestion by approximately 56% over the entire load compared to ground truth and increases the slice acceptance ratio. Inter-slice admission and resource allocation offer admission gains of 25% and 51% over cooperative slice- and intra-slice-based admission control and resource allocation, respectively. Detailed analysis of the results obtained suggests that the proposed models can efficiently manage future heterogeneous traffic flows in terms of enhanced throughput, maximised network resource utilisation, better admission gain, and congestion control.
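    As a toy illustration of the reinforcement-learning admission policy described for SCAC, the sketch below trains a tabular Q-learning agent to admit or reject slice requests as a function of discretised load and slice class; the state space, reward values, and slice classes are assumptions made for the example, and the thesis's NSGA-II and transfer-learning components are omitted.

```python
# Toy tabular Q-learning admission policy; all values below are illustrative.
import random

LOAD_LEVELS = 10                     # discretised network load
ACTIONS = (0, 1)                     # 0 = reject, 1 = admit
SLICE_CLASSES = ("guaranteed", "soft", "best_effort")   # assumed classes

Q = {(l, s, a): 0.0 for l in range(LOAD_LEVELS)
     for s in SLICE_CLASSES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def reward(load, slice_class, action):
    """Admitting near congestion is penalised; rejecting a guaranteed
    request costs more than rejecting a best-effort one."""
    if action == 1:
        return 1.0 if load < LOAD_LEVELS - 2 else -2.0
    return {"guaranteed": -1.0, "soft": -0.5, "best_effort": -0.1}[slice_class]

load = 0
for _ in range(50_000):
    s_class = random.choice(SLICE_CLASSES)
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(load, s_class, x)])
    r = reward(load, s_class, a)
    next_load = min(LOAD_LEVELS - 1, load + 1) if a else max(0, load - 1)
    best_next = max(Q[(next_load, s_class, x)] for x in ACTIONS)
    Q[(load, s_class, a)] += alpha * (r + gamma * best_next - Q[(load, s_class, a)])
    load = next_load
```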

    Machine Learning for Unmanned Aerial System (UAS) Networking

    Fueled by the advancement of 5G New Radio (5G NR), rapid development has occurred in many fields. Compared with conventional approaches, beamforming and network slicing enable 5G NR to achieve roughly tenfold improvements in latency, connection density, and experienced throughput over 4G Long Term Evolution (4G LTE). These advantages pave the way for the large-scale evolution of Cyber-Physical Systems (CPS). Reduced consumption, advances in control engineering, and the simplification of the Unmanned Aircraft System (UAS) make large-scale UAS networking deployment feasible. UAS networks can complete multiple complex missions simultaneously. However, the limitations of conventional approaches still make it challenging to balance massive management with efficient networking at scale. With 5G NR and machine learning, my contributions in this dissertation can be summarized as follows. I proposed a novel Optimized Ad-hoc On-demand Distance Vector (OAODV) routing protocol to improve the throughput of intra-UAS networking; the protocol reduces system overhead while remaining efficient. To improve security, I proposed a blockchain scheme to mitigate malicious base stations for cellular-connected UAS networking, and a proof-of-traffic (PoT) mechanism to improve the efficiency of blockchain for large-scale UAS networking. Inspired by the biological cell paradigm, I proposed cell-wall routing protocols for heterogeneous UAS networking. With 5G NR, the interconnections between UAS networks can strengthen the throughput and elasticity of UAS networking. With machine learning, the routing schedules for intra- and inter-UAS networking can enhance throughput at scale. Inter-UAS networking can achieve globally max-min throughput via edge coloring, and I leveraged upper and lower bounds to accelerate the edge-coloring optimization, as sketched below. This dissertation paves a way for UAS networking in the integration of CPS and machine learning; UAS networking can achieve outstanding performance in a decentralized architecture. Concurrently, this dissertation gives insights into UAS networking on a large scale. These are fundamental to integrating UAS into the National Airspace System (NAS) and critical to aviation in both manned and unmanned fields. The dissertation provides novel approaches for advancing large-scale UAS networking; the proposed approaches extend the state of the art of UAS networking in a decentralized architecture, and all of these contributions support the establishment of UAS networking with CPS.
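    The max-min throughput scheduling mentioned above relies on edge coloring of the inter-UAS link graph. The snippet below is a minimal greedy edge-coloring sketch over an assumed toy topology; it does not reproduce the dissertation's bound-accelerated optimization, but it shows the constraint being enforced and the degree-based bound that can guide it.

```python
# Greedy edge coloring: assign each inter-UAS link a "color" (e.g. a time or
# frequency slot) so that links sharing a UAS never share a slot.
# The topology below is an assumed example, not data from the dissertation.
from collections import defaultdict

links = [("u1", "u2"), ("u1", "u3"), ("u2", "u3"), ("u3", "u4"), ("u2", "u5")]

used_at_node = defaultdict(set)       # node -> colors already used at that node
color_of = {}

for a, b in links:
    c = 0
    while c in used_at_node[a] or c in used_at_node[b]:
        c += 1                        # smallest color free at both endpoints
    color_of[(a, b)] = c
    used_at_node[a].add(c)
    used_at_node[b].add(c)

# Vizing's theorem bounds the optimum between max degree and max degree + 1,
# which is the kind of upper/lower bound usable to accelerate the optimization.
max_degree = max(len(colors) for colors in used_at_node.values())
print(color_of, "| colors used:", 1 + max(color_of.values()), "| lower bound:", max_degree)
```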

    Network Flow Optimization Using Reinforcement Learning


    White Paper for Research Beyond 5G

    The document considers both research in the scope of evolutions of 5G systems (for the period around 2025) and some alternative or longer-term views (with later outcomes, or leading to substantially different design choices). It reflects on four main system areas: fundamental theory and technology; radio and spectrum management; system design; and alternative concepts. The result of this exercise can be broken into two strands: one focused on the evolution of technologies that are already under development for 5G systems but will remain research areas in the future (with “more challenging” requirements and specifications); the other highlighting technologies that are not really considered for deployment today, or that will be essential for addressing problems that are currently non-existent but will become apparent when 5G systems begin their widespread deployment.

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next-generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as addressing the massive growth in coverage, capacity, and traffic; providing a better quality of service and experience to users; supporting ultra-high data rates and reliability; and ensuring ultra-low latency. However, due to the characteristics of mmWave, such as short transmission distance, high sensitivity to blockage, and large propagation path loss, there are challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next-generation cellular network will be more complex, and with a more complex network come more complex problems. The plethora of possibilities makes planning and managing such a complex network system more difficult. Specifically, to provide better Quality of Service and Quality of Experience for users in such a network, providing efficient and effective handover for mobile users is important. The probability of triggering a handover will significantly increase in the next-generation network due to dense small-cell deployment, and since the resources of a base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel. However, due to the characteristics of mmWave and the complexity of the environment, the LOS channel is not always feasible; the non-line-of-sight (NLOS) channel should be explored and used as a backup link to serve users. With all these problems tending to be complex and nonlinear, and with data traffic dramatically increasing, conventional methods are no longer effective or efficient. In this case, how to solve these problems in the most efficient manner becomes important, so new concepts as well as novel technologies need to be explored. Among them, one promising solution is the utilization of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, allowing the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, the complexity and workload can be reduced, while the huge number of devices and the data they generate can be efficiently managed. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management. To be specific, first of all, a procedure to predict the angle of arrival (AOA) and angle of departure (AOD), both in azimuth and elevation, in non-line-of-sight mmWave communications based on a deep neural network is proposed. Moreover, along with the AOA and AOD prediction, a trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray-tracing technology and used to generate data. Based on the generated data, two deep neural networks (DNNs) are trained to predict AOA/AOD in azimuth (AAOA/AAOD) and AOA/AOD in elevation (EAOA/EAOD).
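    As a rough illustration of the DNN-based angle prediction described above, the following sketch defines a small fully connected regressor mapping UE trajectory features to azimuth AOA/AOD; the input/output dimensions, network width, and synthetic training data are assumptions standing in for the ray-tracing dataset, not the thesis's actual architecture.

```python
# Hypothetical sketch of a DNN angle regressor; dimensions are assumptions.
import torch
import torch.nn as nn

class AngleRegressor(nn.Module):
    def __init__(self, in_dim=6, out_dim=2):   # e.g. (x, y, z, vx, vy, vz) -> (AAOA, AAOD)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = AngleRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

features = torch.randn(1024, 6)   # placeholder for ray-tracing-derived features
angles = torch.randn(1024, 2)     # placeholder azimuth AOA/AOD targets

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(features), angles)
    loss.backward()
    opt.step()
```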
    Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and input into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and we conclude that the DNN is a promising tool to predict AOA and AOD in an NLOS scenario. Second, a novel handover scheme is designed, aiming to optimize the overall system throughput and the total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm and optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) determines the handover trigger conditions. Further, an optimization problem is formulated in conjunction with MAPPO to select the target base station and determine beam selection; it aims to evaluate and optimize the system performance in terms of total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied on each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the quality of service. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment, and the MADDPG algorithm learns from these changes. Based on that, beam management at the SCBS is optimized according to the reward or penalty received when serving different UEs. The approach improves overall system throughput and delay performance compared with traditional beam management methods. The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network and provide specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be maintained within an acceptable range of ±2°. Further, for the handover optimization part, numerical results show that the system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, Deep Deterministic Policy Gradient (DDPG) and Deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence performance of MADDPG and its superiority in improving system throughput compared with other typical RL algorithms and the traditional beam management method.
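    The RL-based handover and beam-management schemes above optimize throughput and delay while guarding per-UE QoS. The function below is a hypothetical reward of that general shape, useful only to illustrate the trade-off an agent would learn; the weights, QoS threshold, and penalty are assumptions, not the thesis's formulation.

```python
# Illustrative reward: rises with total throughput, falls with total delay,
# and is penalised when any UE's QoS requirement is violated. All constants
# below are assumptions chosen for the example.
def handover_reward(throughputs_mbps, delays_ms, qos_min_mbps=10.0,
                    w_tput=1.0, w_delay=0.1, qos_penalty=5.0):
    total_tput = sum(throughputs_mbps)
    total_delay = sum(delays_ms)
    violations = sum(1 for t in throughputs_mbps if t < qos_min_mbps)
    return w_tput * total_tput - w_delay * total_delay - qos_penalty * violations

# Example: three UEs evaluated after a candidate handover decision.
print(handover_reward([55.0, 23.0, 8.0], [12.0, 30.0, 45.0]))
```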

    The Cloud-to-Thing Continuum

    The Internet of Things offers massive societal and economic opportunities while at the same time presenting significant challenges, not least the delivery and management of the technical infrastructure underpinning it, the deluge of data generated by it, ensuring privacy and security, and capturing value from it. This Open Access Pivot explores these challenges, presenting the state of the art and future directions for research, as well as frameworks for making sense of this complex area. The book provides a variety of perspectives on how technology innovations such as fog, edge and dew computing, 5G networks, and distributed intelligence are making us rethink conventional cloud computing to support the Internet of Things. Much of the book focuses on technical aspects of the Internet of Things; however, clear methodologies for mapping the business value of the Internet of Things are still missing, so we provide a value mapping framework for the Internet of Things to address this gap. While there is much hype about the Internet of Things, we have yet to reach the tipping point. As such, this book provides a timely entrée for higher education educators, researchers, students, industry, and policy makers on the technologies that promise to reshape how society interacts and operates.

    A Cognitive Routing framework for Self-Organised Knowledge Defined Networks

    This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with SON requirements. It also exploits a contemporary discipline called Knowledge-Defined Networking (KDN) to extend routing capability by calculating the “most reliable” path rather than the shortest one. The research identifies the potential key areas and possible techniques to meet these objectives by surveying the state of the art of the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on the mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as its routing metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing environment using Distributed Ledger Technology. Future work targets the development of 6G standards and compliance with Industry 5.0, extending the present outcomes in light of Deep Reinforcement Learning and Quantum Computing.
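    A minimal sketch of the idea behind a fused link/node cost metric of the kind STEN provides is given below: each edge weight combines link utilization with the utilization of its end nodes, and a shortest-path search over that weight yields the "most reliable" route. The weighting and toy topology are assumptions, and the RNN-based reliability estimator of MRRF is not reproduced here.

```python
# Sketch: edge weight rises with both link utilization and end-node utilization,
# then a Dijkstra-style search picks the minimum-cost ("most reliable") path.
# The 0.6/0.4 weighting and the topology are illustrative assumptions.
import heapq

def edge_cost(link_util, node_util_u, node_util_v, w_link=0.6, w_node=0.4):
    return w_link * link_util + w_node * max(node_util_u, node_util_v)

def most_reliable_path(graph, node_util, src, dst):
    """graph: {u: [(v, link_util), ...]}; returns (cost, path) from src to dst."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, link_util in graph.get(u, []):
            if v not in seen:
                c = cost + edge_cost(link_util, node_util[u], node_util[v])
                heapq.heappush(pq, (c, v, path + [v]))
    return float("inf"), []

graph = {"A": [("B", 0.2), ("C", 0.7)], "B": [("D", 0.3)], "C": [("D", 0.1)]}
node_util = {"A": 0.5, "B": 0.4, "C": 0.9, "D": 0.3}
print(most_reliable_path(graph, node_util, "A", "D"))
```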

    5G Outlook – Innovations and Applications

    5G Outlook - Innovations and Applications is a collection of recent research and development in the area of Fifth Generation Mobile Technology (5G), the future of wireless communications. Plenty of novel ideas and knowledge of 5G are presented in this book, as well as diverse applications from health science to business modeling. The authors of the chapters contributed from various countries and organizations, and the chapters were also presented at the 5th IEEE 5G Summit held in Aalborg on July 1, 2016. The book starts with a comprehensive introduction to 5G and its needs and requirements. Then millimeter waves are discussed as a promising spectrum for 5G technology. The book continues with novel and inspiring ideas for future wireless communication usage and networks. Further, some technical issues in signal processing and network design for 5G are presented. Finally, the book concludes with different applications of 5G in distinct areas. Topics widely covered in this book are:
    • 5G technology from past to present to the future
    • Millimeter waves and their characteristics
    • Signal processing and network design issues for 5G
    • Applications, business modeling and several novel ideas for the future of 5G