
    Resource Management and Backhaul Routing in Millimeter-Wave IAB Networks Using Deep Reinforcement Learning

    Get PDF
    Thesis (PhD (Electronic Engineering))--University of Pretoria, 2023. The increased densification of wireless networks has led to the development of integrated access and backhaul (IAB) networks. In this thesis, deep reinforcement learning was applied to solve resource management and backhaul routing problems in millimeter-wave IAB networks. In the research work, a resource management solution that aims to avoid congestion for access users in an IAB network was proposed and implemented. The proposed solution applies deep reinforcement learning to learn an optimized policy that achieves effective resource allocation while minimizing congestion and satisfying the user requirements. In addition, a deep reinforcement learning-based backhaul adaptation strategy that leverages a recursive discrete choice model was implemented in simulation. Simulation results, in which the proposed algorithms were compared with two baseline methods, showed that the proposed scheme provides better throughput and delay performance. Sentech Chair in Broadband Wireless Multimedia Communications. Electrical, Electronic and Computer Engineering. PhD (Electronic Engineering). Unrestricted
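    The congestion-aware allocation idea in this abstract can be illustrated with a toy reinforcement-learning loop. The following is a minimal tabular sketch under invented assumptions (discrete bandwidth levels, a shared backhaul capacity, a hand-written reward); the thesis itself uses deep reinforcement learning, and none of the names or numbers below come from it:

    ```python
    import random

    # Toy Q-learning sketch of congestion-aware resource allocation: an agent
    # assigns one of several bandwidth levels to a user and is rewarded for
    # satisfying demand without overloading a shared backhaul link. The state,
    # reward shape, and capacity are illustrative assumptions only.

    LEVELS = [1, 2, 3]          # bandwidth units the agent may allocate
    CAPACITY = 3                # backhaul capacity shared with background load

    def reward(demand, background, alloc):
        if alloc + background > CAPACITY:
            return -1.0         # congestion: allocation overloads the backhaul
        return 1.0 if alloc >= demand else 0.0   # demand met without overload

    def train(episodes=5000, eps=0.1, alpha=0.2, seed=0):
        rng = random.Random(seed)
        q = {}                  # q[(demand, background)][alloc] -> value
        for _ in range(episodes):
            state = (rng.choice([1, 2]), rng.choice([0, 1, 2]))
            q.setdefault(state, {a: 0.0 for a in LEVELS})
            if rng.random() < eps:              # epsilon-greedy exploration
                a = rng.choice(LEVELS)
            else:
                a = max(q[state], key=q[state].get)
            r = reward(state[0], state[1], a)
            q[state][a] += alpha * (r - q[state][a])   # one-step update
        return q

    q = train()
    # With demand 1 and background load 2, allocating 1 unit fills capacity
    # exactly; larger allocations overload the link.
    best = max(q[(1, 2)], key=q[(1, 2)].get)
    print(best)   # → 1
    ```

    The same structure (state, action, congestion-penalizing reward) carries over when the tabular dictionary is replaced by a neural network, which is the step that makes it deep reinforcement learning.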

    A Survey on Resource Allocation in Vehicular Networks

    Get PDF
    Vehicular networks, an enabling technology for Intelligent Transportation Systems (ITS), smart cities, and autonomous driving, can deliver numerous on-board data services, e.g., road safety, easy navigation, traffic efficiency, comfortable driving, and infotainment. Providing satisfactory Quality of Service (QoS) in vehicular networks, however, is a challenging task due to a number of limiting factors, such as erroneous and congested wireless channels (due to high mobility or uncoordinated channel access), an increasingly fragmented and congested spectrum, hardware imperfections, and the anticipated growth of vehicular communication devices. Therefore, it will be critical to allocate and utilize the available wireless network resources in an ultra-efficient manner. In this paper, we present a comprehensive survey on resource allocation schemes for the two dominant vehicular network technologies, i.e., Dedicated Short Range Communications (DSRC) and cellular-based vehicular networks. We discuss the challenges and opportunities for resource allocation in modern vehicular networks and outline a number of promising future research directions.

    Data-driven Integrated Sensing and Communication: Recent Advances, Challenges, and Future Prospects

    Full text link
    Integrated Sensing and Communication (ISAC), combined with data-driven approaches, has emerged as a highly significant field, garnering considerable attention from academia and industry. Its potential to enable wide-scale applications in future sixth-generation (6G) networks has led to extensive recent research efforts. Machine learning (ML) techniques, including K-nearest neighbors (KNN), support vector machines (SVM), deep learning (DL) architectures, and reinforcement learning (RL) algorithms, have been deployed to address various design aspects of ISAC and its diverse applications. This paper therefore explores the integration of these ML techniques into ISAC systems across a broad range of applications, spanning intelligent vehicular networks, encompassing unmanned aerial vehicles (UAVs) and autonomous cars, as well as radar applications, localization and tracking, millimeter wave (mmWave) and Terahertz (THz) communication, and beamforming. The contributions of this paper lie in its comprehensive survey of ML-based works in the ISAC domain and its identification of challenges and future research directions. By synthesizing existing knowledge and proposing new research avenues, this survey serves as a valuable resource for researchers, practitioners, and stakeholders involved in advancing the capabilities of ISAC systems in the context of 6G networks. Comment: ISAC-ML survey

    Machine Learning in Wireless Sensor Networks for Smart Cities:A Survey

    Get PDF
    Artificial intelligence (AI) and machine learning (ML) techniques have huge potential to efficiently manage the automated operation of internet of things (IoT) nodes deployed in smart cities. In smart cities, the major IoT applications are smart traffic monitoring, smart waste management, smart buildings, and patient healthcare monitoring. Small IoT nodes based on the low-power Bluetooth (IEEE 802.15.1) and wireless sensor network (WSN) (IEEE 802.15.4) standards are generally used to transmit data to a remote location via gateways. WSN-based IoT (WSN-IoT) design problems include network coverage and connectivity issues, energy consumption, bandwidth requirements, network lifetime maximization, communication protocols, and state-of-the-art infrastructure. In this paper, the authors propose machine learning methods as an optimization tool for regular WSN-IoT nodes deployed in smart city applications. To the best of the authors' knowledge, this is the first in-depth literature survey of all ML techniques in the field of low-power WSN-IoT for smart cities. The results of this survey show that supervised learning algorithms have been most widely used (61%), compared to reinforcement learning (27%) and unsupervised learning (12%), for smart city applications.

    User mobility prediction and management using machine learning

    Get PDF
    The next-generation mobile networks (NGMNs) are envisioned to overcome current user mobility limitations while improving network performance. Some of the challenges envisioned for mobility management in future mobile networks are: addressing massive traffic growth bottlenecks; providing better quality and experience to end users; supporting ultra-high data rates; and ensuring ultra-low latency and seamless handovers (HOs) from one base station (BS) to another. Thus, for future networks to manage user mobility under all of these stringent constraints, artificial intelligence (AI) is deemed to play a key role in automating the end-to-end process through machine learning (ML). The objective of this thesis is to explore user mobility prediction and management use-cases using ML. First, background and a literature review are presented, covering an overview of current mobile networks and ML-driven applications for user mobility prediction and management. Use-cases of mobility prediction in dense mobile networks are then analysed and optimised with ML algorithms: an overall framework test accuracy of 91.17% was obtained with an artificial neural network (ANN), outperforming all other mobility prediction algorithms considered. Furthermore, a concept of mobility prediction-based energy consumption is discussed to automate and classify user mobility and reduce carbon emissions in smart city transportation, achieving 98.82% accuracy with a k-nearest neighbour (KNN) classifier as the optimal result, along with a 31.83% energy savings gain. Finally, a context-aware handover (HO) skipping scenario is analysed to improve overall quality of service (QoS) as a framework for mobility management in next-generation networks (NGNs).
The framework relies on passenger mobility, train trajectories, travelling time and frequency, network load, and signal ratio data in the cardinal directions, i.e., North, East, West, and South (NEWS), achieving an optimum result of 94.51% with a support vector machine (SVM) classifier. These results were fed into HO skipping techniques to analyse coverage probability, throughput, and HO cost. This work is extended by a blockchain-enabled privacy preservation mechanism to provide an end-to-end secure platform for train passengers' mobility.
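The mobility classification step described in this abstract rests on standard distance-based classifiers such as KNN. The following is a from-scratch sketch of that idea; the features (speed, cell dwell time), labels, and data points are invented for illustration and do not reproduce the thesis's feature set or dataset:

```python
import math
from collections import Counter

# Minimal k-nearest-neighbour sketch of mobility classification: each point
# is a (speed km/h, cell dwell time s) feature pair, and the label separates
# pedestrian from vehicular users. All values below are made up.

TRAIN = [
    ((4.0, 300.0), "pedestrian"),
    ((5.5, 250.0), "pedestrian"),
    ((3.0, 400.0), "pedestrian"),
    ((60.0, 20.0), "vehicular"),
    ((45.0, 35.0), "vehicular"),
    ((80.0, 12.0), "vehicular"),
]

def knn_predict(x, k=3):
    # Sort training samples by Euclidean distance and let the k nearest vote.
    nearest = sorted(TRAIN, key=lambda s: math.dist(x, s[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((50.0, 25.0)))   # → vehicular
```

In practice one would normalize the features (speed and dwell time live on very different scales) and tune k on a validation split; the voting logic itself is unchanged.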

    Toward 6G TKμ Extreme Connectivity: Architecture, Key Technologies and Experiments

    Full text link
    Sixth-generation (6G) networks are evolving towards new features and order-of-magnitude enhancement of systematic performance metrics compared to the current 5G. In particular, the 6G networks are expected to achieve extreme connectivity performance with Tbps-scale data rate, Kbps/Hz-scale spectral efficiency, and μs-scale latency. To this end, an original three-layer 6G network architecture is designed to realise uniform full-spectrum cell-free radio access and provide task-centric agile proximate support for diverse applications. The designed architecture is featured by the super edge node (SEN), which integrates connectivity, computing, AI, data, etc. On this basis, a technological framework of pervasive multi-level (PML) AI is established in the centralised unit to enable task-centric near-real-time resource allocation and network automation. We then introduce a radio access network (RAN) architecture of full-spectrum uniform cell-free networks, which is among the most attractive RAN candidates for 6G TKμ extreme connectivity. A few of the most promising key technologies, i.e., cell-free massive MIMO, photonics-assisted Terahertz wireless access, and spatiotemporal two-dimensional channel coding, are further discussed. A testbed is implemented and extensive trials are conducted to evaluate innovative technologies and methodologies. The proposed 6G network architecture and technological framework demonstrate exciting potential for full-service and full-scenario applications. Comment: 15 pages, 12 figures

    Unmanned Aerial Vehicle (UAV)-Enabled Wireless Communications and Networking

    Get PDF
    The emerging massive density of human-held and machine-type nodes implies larger traffic deviations in the future than we face today. The future network will be characterized by a high degree of flexibility, allowing it to adapt smoothly, autonomously, and efficiently to quickly changing traffic demands in both time and space. This flexibility cannot be achieved when the network’s infrastructure remains static. To this end, the topic of UAV (unmanned aerial vehicle)-enabled wireless communications and networking has received increased attention. As mentioned above, the network must serve a massive density of nodes that can be either human-held (user devices) or machine-type nodes (sensors). If we wish to serve these nodes properly and optimize their data, a proper wireless connection is fundamental, and this can be achieved using UAV-enabled communications and networks. This Special Issue addresses the many open issues that must still be resolved to allow UAV-enabled wireless communications and networking to be rolled out properly.

    Joint Optimization of Resource Allocation and User Association in Multi-Frequency Cellular Networks Assisted by RIS

    Full text link
    Driven by advances in communication technology and rising user demand, reasonable resource allocation in wireless networks is key to guaranteeing regular operation and improving system performance. Networks operate across various frequency bands, and the heterogeneous cellular network (HCN) has become a hot research topic. Meanwhile, the Reconfigurable Intelligent Surface (RIS) has become a key technology for next-generation wireless networks: by modifying the phase of the signal incident on its surface, a RIS can improve signal quality at the receiver and reduce co-channel interference. In this paper, we develop a RIS-assisted HCN model for a multi-base-station (BS), multi-frequency network that includes 4G, 5G, millimeter wave (mmWave), and terahertz networks and considers users covered by multiple networks, which better matches realistic network characteristics and the concept of 6G networks. We formulate the objective of maximizing the system sum rate and decompose it into two subproblems: user resource allocation and the phase-shift optimization of the RIS elements. Because the problem is NP-hard and the subproblems are coupled, we use the block coordinate descent (BCD) method to alternately optimize the local solutions of a coalition game and a local discrete phase search algorithm, obtaining a global solution; in contrast, most previous studies have used the coalition game algorithm alone to solve the resource allocation problem. Simulation results show that the proposed algorithm outperforms the baseline algorithms, effectively improves the system sum rate, and achieves performance close to the optimal solution of the exhaustive traversal algorithm with low complexity. Comment: 18 pages
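    The alternating structure described in this abstract (fix one block of variables, optimize the other, and repeat) is the essence of block coordinate descent. The following is a toy sketch with an invented surrogate "rate" function; the real paper's channel model, coalition game, and problem sizes are not reproduced here:

    ```python
    import cmath
    import math

    # Toy BCD sketch: block 1 is a per-user BS association (0 or 1), block 2 a
    # discrete RIS phase per element. The surrogate rate rewards coherent phase
    # alignment and the higher-gain BS; it is NOT the paper's channel model.

    PHASES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
    N_USERS, N_BS, N_ELEMENTS = 3, 2, 4

    def rate(assoc, phases):
        # Coherent combining across RIS elements favours equal phases.
        align = abs(sum(cmath.exp(1j * p) for p in phases)) / N_ELEMENTS
        return sum(math.log2(1 + (1 + b) * align) for b in assoc)

    def bcd(iters=10):
        assoc = [0] * N_USERS
        phases = [PHASES[0]] * N_ELEMENTS
        for _ in range(iters):
            # Block 1: best association per user with the phases held fixed.
            assoc = [max(range(N_BS), key=lambda b, u=u: rate(
                assoc[:u] + [b] + assoc[u + 1:], phases)) for u in range(N_USERS)]
            # Block 2: local discrete search over each element's phase,
            # with the association held fixed.
            for i in range(N_ELEMENTS):
                phases[i] = max(PHASES, key=lambda p: rate(
                    assoc, phases[:i] + [p] + phases[i + 1:]))
        return assoc, phases, rate(assoc, phases)

    assoc, phases, best = bcd()
    ```

    Each block update can only increase the objective, so the alternation converges to a local optimum; since both blocks are discrete here, exhaustive traversal over all combinations gives the global benchmark that the paper compares against.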

    Trends in Intelligent Communication Systems: Review of Standards, Major Research Projects, and Identification of Research Gaps

    Get PDF
    The increasing complexity of communication systems, following the advent of heterogeneous technologies, services and use cases with diverse technical requirements, provide a strong case for the use of artificial intelligence (AI) and data-driven machine learning (ML) techniques in studying, designing and operating emerging communication networks. At the same time, the access and ability to process large volumes of network data can unleash the full potential of a network orchestrated by AI/ML to optimise the usage of available resources while keeping both CapEx and OpEx low. Driven by these new opportunities, the ongoing standardisation activities indicate strong interest to reap the benefits of incorporating AI and ML techniques in communication networks. For instance, 3GPP has introduced the network data analytics function (NWDAF) at the 5G core network for the control and management of network slices, and for providing predictive analytics, or statistics, about past events to other network functions, leveraging AI/ML and big data analytics. Likewise, at the radio access network (RAN), the O-RAN Alliance has already defined an architecture to infuse intelligence into the RAN, where closed-loop control models are classified based on their operational timescale, i.e., real-time, near real-time, and non-real-time RAN intelligent control (RIC). Different from the existing related surveys, in this review article, we group the major research studies in the design of model-aided ML-based transceivers following the breakdown suggested by the O-RAN Alliance. At the core and the edge networks, we review the ongoing standardisation activities in intelligent networking and the existing works cognisant of the architecture recommended by 3GPP and ETSI. We also review the existing trends in ML algorithms running on low-power micro-controller units, known as TinyML. 
We conclude with a summary of recent and currently funded projects on intelligent communications and networking. This review reveals that the telecommunication industry and standardisation bodies have been mostly focused on non-real-time RIC, data analytics at the core and the edge, AI-based network slicing, and vendor inter-operability issues, whereas most recent academic research has focused on real-time RIC. In addition, intelligent radio resource management and aspects of intelligent control of the propagation channel using reflecting intelligent surfaces have captured the attention of ongoing research projects.