
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks. (Comment: 46 pages, 22 figures)
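    As a small anchor for the reinforcement learning branch surveyed here, the standard tabular Q-learning update (a textbook formula, not one specific to this article) can be written as follows.

```latex
% Standard tabular Q-learning update (textbook form; shown only to illustrate the
% "interactive decision making" branch of ML that the survey covers).
% Q(s_t, a_t): action-value estimate, \alpha: learning rate, \gamma: discount factor,
% r_{t+1}: reward observed after taking action a_t in state s_t and reaching s_{t+1}.
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr]
```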

    Cognitive networking for next generation of cellular communication systems

    This thesis presents a comprehensive study of cognitive networking for cellular networks, with contributions that enable them to be more dynamic, agile, and efficient. To achieve this, machine learning (ML) algorithms, a subset of artificial intelligence, are employed to bring such cognition to cellular networks. More specifically, three major branches of ML, namely supervised, unsupervised, and reinforcement learning (RL), are utilised for various purposes: unsupervised learning is used for data clustering, while supervised learning is employed to predict the future behaviour of networks and users. RL, on the other hand, is utilised for optimisation owing to its inherent adaptability and its minimal need for knowledge of the environment. Energy optimisation, capacity enhancement, and spectrum access are identified as the primary design challenges for cellular networks, given that they are envisioned to play crucial roles in 5G and beyond due to the increased demand in the number of connected devices as well as in data rates. Each design challenge and its corresponding proposed solution are discussed thoroughly in separate chapters. Regarding energy optimisation, user-side energy consumption is investigated by considering Internet of things (IoT) networks. An RL-based intelligent model, which jointly optimises the wireless connection type and the data processing entity, is proposed. In particular, a Q-learning algorithm is developed through which the energy consumption of an IoT device is minimised while keeping the requirements of the applications, in terms of response time and security, satisfied. The proposed methodology achieves a normalised joint cost of 0%, where all the considered metrics are combined, whereas the benchmarks achieve 54.84% on average. Next, the energy consumption of radio access networks (RANs) is targeted, and a traffic-aware cell switching algorithm is designed to reduce the energy consumption of a RAN without compromising user quality-of-service (QoS). The proposed technique employs a SARSA algorithm with value function approximation, since conventional RL methods struggle to solve problems with huge state spaces. The results reveal that up to a 52% gain in total energy consumption is achieved with the proposed technique, and the gain is observed to shrink as the scenario becomes more realistic. Capacity enhancement, in turn, is studied from two different perspectives, namely mobility management and unmanned aerial vehicle (UAV) assistance. To that end, a predictive handover (HO) mechanism is designed for mobility management in cellular networks by identifying two major issues of Markov chain based HO prediction. First, revisits, defined as a situation whereby a user visits the same cell more than once within the same day, are diagnosed as causing similar transition probabilities, which in turn increases the likelihood of incorrect predictions. This problem is addressed with a structural change: rather than storing a 2-D transition matrix, a 3-D one that also includes HO orders is stored. The obtained results show that the 3-D transition matrix is capable of reducing the HO signalling cost by up to 25.37%, a gain which is observed to drop with an increasing level of randomness in the data set. Second, making HO predictions with insufficient criteria is identified as another issue with conventional Markov chain based predictors. Thus, a prediction confidence level is derived, such that there is a lower bound for performing HO predictions, which are not always advantageous owing to the HO signalling cost incurred by incorrect predictions. The simulation outcomes confirm that the derived confidence level mechanism improves the prediction accuracy by up to 8.23%. Still considering capacity enhancement, UAV-assisted cellular networking is considered, and an unsupervised learning-based UAV positioning algorithm is presented. A comprehensive analysis is conducted of the impact of the overlapping footprints of multiple UAVs, which are controlled through their altitudes. The developed k-means clustering based UAV positioning approach is shown to reduce the number of users in outage by up to 80.47% compared to the benchmark symmetric deployment. Lastly, a QoS-aware dynamic spectrum access approach, wherein all the aforementioned types of ML methods are employed, is developed to tackle challenges related to spectrum access. More specifically, by leveraging future traffic load predictions of radio access technologies (RATs) and a Q-learning algorithm, a novel proactive spectrum sensing technique is introduced. Two different sensing strategies are developed: the first focuses solely on reducing sensing latency, while the second jointly optimises sensing latency and user requirements. In particular, the proposed Q-learning algorithm takes the future load predictions of the RATs and the requirements of secondary users, in terms of mobility and bandwidth, as inputs, and directs the users to the spectrum of the optimum RAT to perform sensing. The strategy to be employed can be selected based on the needs of the application: if latency is the only concern, the first strategy should be selected, since the second strategy is computationally more demanding; employing the second strategy, however, reduces sensing latency while also satisfying the other user requirements. The simulation results demonstrate that, compared to random sensing, the first strategy reduces the sensing latency by 85.25%, while the second strategy improves the full-satisfaction rate, where both the mobility and bandwidth requirements of the user are simultaneously satisfied, by 95.7%. In summary, three key design challenges of the next generation of cellular networks are identified and addressed via the concept of cognitive networking, providing a utilitarian tool that mobile network operators can plug into their systems. The proposed solutions can be generalised to various network scenarios owing to the sophisticated ML implementations, which renders them both practical and sustainable.
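    A minimal sketch of the handover prediction idea above, under my own assumptions about the data layout: mobility histories are stored as per-day sequences of visited cells, the conventional 2-D predictor conditions only on the current cell, and the proposed 3-D structure additionally conditions on the HO order (the index of the handover within the day). The cell names and toy histories below are hypothetical.

```python
from collections import Counter, defaultdict

# Toy per-day cell visit sequences (hypothetical data); a "revisit" is cell "A"
# appearing twice within the same day.
daily_histories = [
    ["A", "B", "C", "A", "D"],
    ["A", "B", "C", "A", "D"],
    ["A", "E", "C", "A", "D"],
]

# 2-D transition statistics: counts_2d[current_cell][next_cell]
counts_2d = defaultdict(Counter)
# 3-D transition statistics: counts_3d[(ho_order, current_cell)][next_cell]
counts_3d = defaultdict(Counter)

for day in daily_histories:
    for order, (cur, nxt) in enumerate(zip(day, day[1:])):
        counts_2d[cur][nxt] += 1
        counts_3d[(order, cur)][nxt] += 1

def predict_2d(cur):
    """Most likely next cell given only the current cell."""
    return counts_2d[cur].most_common(1)[0][0] if counts_2d[cur] else None

def predict_3d(order, cur):
    """Most likely next cell given the current cell and the HO order within the day."""
    key = (order, cur)
    return counts_3d[key].most_common(1)[0][0] if counts_3d[key] else None

# The revisit to "A" pools A->B and A->D transitions in the 2-D case,
# while the 3-D predictor separates the first visit from the revisit.
print(predict_2d("A"))       # 'D' here: first-visit and revisit transitions are mixed
print(predict_3d(0, "A"))    # 'B': first handover of the day from A
print(predict_3d(3, "A"))    # 'D': the revisit to A later in the day
```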

    Secure and Efficient Dynamic Spectrum Access Solutions for Future Wireless Networks

    University of Technology Sydney, Faculty of Engineering and Information Technology. The future wireless communication network (5G and beyond) is expected to provide many advantages, such as an extremely high peak rate, ultra-low latency and reduced energy consumption. However, since an extremely large number of connected devices will be deployed, the demand for spectrum will also grow exponentially, causing a spectrum shortage. To effectively address this spectrum crunch, dynamic spectrum access (DSA), including both sensing-based and database-driven DSA, has been proposed. In this thesis, we investigate critical challenges in DSA, including the efficiency of sensing-based techniques and privacy in database-driven techniques. First, to improve the sensing performance of sensing-based DSA in half-duplex (HD) systems, we propose two sensing approaches leveraging the properties of deep learning networks. Our solutions are significantly more robust to noise uncertainty, timing delay, and carrier frequency offset (CFO) than conventional sensing methods. Moreover, our work does not require any prior information about the signals, which is, however, essential for traditional sensing methods. Second, to improve the sensing performance of sensing-based DSA in full-duplex (FD) systems, we develop two novel sensing methods using the features of orthogonal frequency division multiplexing (OFDM) signals. The developed sensing approaches are robust not only to residual self-interference (SI) but also to timing delay and CFO. We also derive closed-form expressions for the probability of detection and the probability of false alarm of our approaches. Third, to protect users' privacy in database-driven DSA, we develop two schemes to protect the operational privacy of Incumbent Users (IUs) and of honest/dishonest Secondary Users (SUs). To implement the proposed work, we introduce an interference calculation scheme that allows users to calculate an interference budget without revealing operational information; it also reduces the computing overhead of our developed approaches. Additionally, we propose a “punishment and forgiveness” mechanism to encourage dishonest SUs to provide truthful information. Theoretical analysis and extensive simulations show that our proposed schemes better protect all users' operational privacy under various privacy attacks, yielding higher spectrum utilization with less online overhead compared with state-of-the-art approaches.
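    The abstract does not spell out the deep-learning sensing architectures, so the following is only a generic, hedged illustration of what sensing-based DSA with a deep network can look like: a small 1-D CNN that classifies a window of raw I/Q samples as vacant or occupied without requiring prior knowledge of the signal. The layer sizes and window length are assumptions, not the thesis's design.

```python
import torch
import torch.nn as nn

class SpectrumSensingCNN(nn.Module):
    """Toy 1-D CNN that labels a window of raw I/Q samples as 'vacant' or 'occupied'."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=8, stride=2, padding=3),  # 2 input channels: I and Q
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=8, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time so any window length is accepted
        )
        self.classifier = nn.Linear(32, 2)  # logits for {vacant, occupied}

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        # iq shape: (batch, 2, window_len)
        return self.classifier(self.features(iq).squeeze(-1))

model = SpectrumSensingCNN()
dummy_iq = torch.randn(4, 2, 1024)   # four random I/Q windows as a smoke test
print(model(dummy_iq).shape)         # torch.Size([4, 2])
```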

    Design of large polyphase filters in the Quadratic Residue Number System


    Cooperative Radio Communications for Green Smart Environments

    The demand for mobile connectivity is continuously increasing, and by 2020 Mobile and Wireless Communications will serve not only very dense populations of mobile phones and nomadic computers, but also the expected multiplicity of devices and sensors located in machines, vehicles, health systems and city infrastructures. Future Mobile Networks are therefore faced with many new scenarios and use cases, which will load the networks with different data traffic patterns, in new or shared spectrum bands, creating new specific requirements. This book addresses both the techniques to model, analyse and optimise the radio links and transmission systems in such scenarios, and the most advanced radio access, resource management and mobile networking technologies. It summarises the work performed by more than 500 researchers from more than 120 institutions in Europe, America and Asia, from both academia and industry, within the framework of the COST IC1004 Action on "Cooperative Radio Communications for Green and Smart Environments". The book will appeal to graduates and researchers in the Radio Communications area, and also to engineers working in the wireless industry. Topics discussed in this book include:
    • Radio wave propagation phenomena in diverse urban, indoor, vehicular and body environments
    • Measurements, characterization, and modelling of radio channels beyond 4G networks
    • Key issues in vehicle-to-everything (V2X) communication
    • Wireless Body Area Networks (WBANs), including specific radio channel models for WBANs
    • Energy efficiency and resource management enhancements in Radio Access Networks
    • Definitions and models for the virtualised and cloud RAN architectures
    • Advances in feasible indoor localization and tracking techniques
    • Recent findings and innovations in antenna systems for communications
    • Physical Layer Network Coding for next generation wireless systems
    • Methods and techniques for MIMO Over-the-Air (OTA) testing

    Temperature aware power optimization for multicore floating-point units


    Unmanned Aerial Vehicle (UAV)-Enabled Wireless Communications and Networking

    The emerging massive density of human-held and machine-type nodes implies larger traffic deviations in the future than we are facing today. In the future, the network will be characterized by a high degree of flexibility, allowing it to adapt smoothly, autonomously, and efficiently to quickly changing traffic demands in both time and space. This flexibility cannot be achieved when the network’s infrastructure remains static. To this end, the topic of UAV (unmanned aerial vehicle)-enabled wireless communications and networking has received increased attention. As mentioned above, the network must serve a massive density of nodes that can be either human-held (user devices) or machine-type nodes (sensors). If we wish to properly serve these nodes and optimize their data, a proper wireless connection is fundamental. This can be achieved by using UAV-enabled communications and networks. This Special Issue addresses the many issues that still need to be resolved to allow UAV-enabled wireless communications and networking to be properly rolled out.

    Mobility management in multi-RAT multi-band heterogeneous networks

    Support for user mobility is the raison d'etre of mobile cellular networks. However, mounting pressure for more capacity is leading to the adoption of multi-band multi-RAT ultra-dense network designs, particularly with the increased use of mmWave-based small cells. While such designs for emerging cellular networks are expected to offer manyfold more capacity, they give rise to a new set of challenges in user mobility management. Among others, the most critical challenges are frequent handovers (HOs), and thus a higher impact of poor mobility management on quality of user experience (QoE) as well as on link capacity; the lack of an intelligent solution to manage the activation/deactivation of dual connectivity (of a user with both 4G and 5G cells); and mmWave cell discovery. In this dissertation, I propose and evaluate a set of solutions to address these challenges. The first outcome of my investigations into these problems is the first ever taxonomy of mobility-related 3GPP-defined network parameters and Key Performance Indicators (KPIs), followed by a tutorial on 3GPP-based 5G mobility management procedures. The first major contribution of the thesis is a novel framework to characterize the relationship between 28 critical mobility-related network parameters and the 8 most vital KPIs. A critical hurdle in addressing all mobility-related challenges in emerging networks is the complexity of modeling realistic mobility and the HO process. Mathematical models are not suitable here, as they cannot capture the dynamics or the myriad parameters and KPIs involved. Existing simulators also mostly either omit or overly abstract the HO process and user mobility, chiefly because the problems caused by poor HO management had relatively little impact on overall performance in legacy networks, which were not multi-RAT and multi-band and therefore incurred a much smaller number of HOs than emerging networks. The second key contribution of this dissertation is the development of a first-of-its-kind system-level simulator, called SyntheticNET, that can help the research community overcome the hurdle of realistic mobility and HO process modeling. SyntheticNET is the very first Python-based simulator that fully conforms to the 3GPP Release 15 5G standard. Compared to existing simulators, SyntheticNET includes a modular structure, flexible propagation modeling, adaptive numerology, realistic mobility patterns, and detailed HO evaluation criteria. SyntheticNET’s Python-based platform allows the effective application of Artificial Intelligence (AI) to various network functionalities. Another key challenge in emerging multi-RAT technologies is the lack of an intelligent solution to manage the dual connectivity with both a 4G and a 5G cell that a user needs in order to access the 5G infrastructure. The third contribution of this thesis is a solution to address this challenge. I present a QoE-aware E-UTRAN New Radio-Dual Connectivity (EN-DC) activation scheme in which AI is leveraged to develop a model that can accurately predict radio link failure (RLF) and voice muting using low-level measurements collected from a real network. The insights from the AI-based RLF and mute prediction models are then leveraged to configure sets of 3GPP parameters that maximize EN-DC activation while keeping the QoE-affecting RLF and mute anomalies to a minimum. The last contribution of this dissertation is a novel solution to the mmWave cell discovery problem, which stems from the highly directional nature of mmWave transmission. The proposed mmWave cell discovery scheme builds upon a joint search method in which mmWave cells exploit an overlay coverage layer from macro cells that share the UE location with the mmWave cell. The proposed scheme is made more practical by investigating and developing solutions for the data sparsity issue in model training. The ability to work with sparse data makes the proposed scheme feasible in realistic scenarios, where user density is often not high enough to provide coverage reports from each bin of the coverage area. Simulation results show that the proposed scheme efficiently activates EN-DC to a nearby mmWave 5G cell and thus substantially reduces mmWave cell discovery failures compared to state-of-the-art cell discovery methods.
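    The EN-DC activation scheme above hinges on a supervised model that predicts RLF from low-level measurements. The thesis's actual feature set and model are not reproduced here; the following is a minimal, hedged sketch with synthetic stand-in features (RSRP, RSRQ, SINR, UE speed) and an off-the-shelf gradient boosting classifier, purely to illustrate the shape of such a predictor.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-ins for low-level measurements (hypothetical feature set:
# serving-cell RSRP/RSRQ/SINR and UE speed); the real feature set is not public here.
n = 5000
X = np.column_stack([
    rng.normal(-95, 8, n),   # RSRP [dBm]
    rng.normal(-11, 3, n),   # RSRQ [dB]
    rng.normal(8, 6, n),     # SINR [dB]
    rng.uniform(0, 120, n),  # UE speed [km/h]
])
# Toy labelling rule: weak radio conditions at high speed are more likely to end in RLF.
p_rlf = 1 / (1 + np.exp(-(-0.15 * (X[:, 0] + 100) - 0.3 * X[:, 2] + 0.02 * X[:, 3])))
y = rng.random(n) < p_rlf

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["no RLF", "RLF"]))

# A scheme along the lines described above would only trigger EN-DC activation
# when the predicted RLF risk stays below some operator-chosen threshold (assumed here).
risk = clf.predict_proba(X_test[:5])[:, 1]
print(risk < 0.2)  # example per-sample gating decision
```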