
    Collaborative Distributed Q-Learning for RACH Congestion Minimization in Cellular IoT Networks

    Due to infrequent and massive concurrent access requests from the ever-increasing number of machine-type communication (MTC) devices, the existing contention-based random access (RA) protocols, such as slotted ALOHA, suffer from the severe problem of random access channel (RACH) congestion in emerging cellular IoT networks. To address this issue, we propose a novel collaborative distributed Q-learning mechanism for the resource-constrained MTC devices in order to enable them to find unique RA slots for their transmissions so that the number of possible collisions can be significantly reduced. In contrast to the independent Q-learning scheme, the proposed approach utilizes the congestion level of RA slots as the global cost during the learning process and thus can notably lower the learning time for the low-end MTC devices. Our results show that the proposed learning scheme can significantly minimize the RACH congestion in cellular IoT networks.
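    A minimal sketch of the slot-selection idea described above, assuming the base station broadcasts per-slot congestion counts after each frame; the reward shaping is an illustrative stand-in, not the paper's exact formulation:

```python
import numpy as np

# Sketch: N devices learn to occupy distinct RA slots via stateless Q-learning.
# Assumption (for illustration): per-slot congestion counts are broadcast after
# every frame and serve as a shared cost for the devices that used those slots.

N_DEVICES, N_SLOTS, N_FRAMES = 8, 10, 500
ALPHA, EPSILON = 0.1, 0.1                    # learning rate and exploration rate

rng = np.random.default_rng(0)
Q = np.zeros((N_DEVICES, N_SLOTS))           # one row per device, one entry per RA slot

for _ in range(N_FRAMES):
    # Epsilon-greedy slot selection for every device.
    greedy = Q.argmax(axis=1)
    explore = rng.random(N_DEVICES) < EPSILON
    chosen = np.where(explore, rng.integers(0, N_SLOTS, N_DEVICES), greedy)

    # Global cost: how many devices landed in each slot (congestion level).
    congestion = np.bincount(chosen, minlength=N_SLOTS)

    for d in range(N_DEVICES):
        s = chosen[d]
        # +1 for a collision-free slot, otherwise a penalty growing with congestion.
        reward = 1.0 if congestion[s] == 1 else -float(congestion[s] - 1)
        Q[d, s] += ALPHA * (reward - Q[d, s])

collided = np.sum(np.bincount(Q.argmax(axis=1), minlength=N_SLOTS) > 1)
print("slots still contested after learning:", collided)
```

    Because every colliding device sees the same congestion count for its slot, crowded slots are penalized for all of them in the same frame, which is the intuition behind the shorter learning time compared with an independent scheme that only observes its own success or collision.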

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. After highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
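    As a back-of-the-envelope illustration of the legacy RA inefficiency highlighted above (our own example, not taken from the survey): if N devices each pick one of K contention preambles uniformly at random, the expected number of collision-free attempts per RA opportunity is N(1 - 1/K)^(N-1), which collapses once N grows well beyond K.

```python
# Illustration (not from the survey): expected number of collision-free RA
# attempts when N devices pick uniformly among K preambles/slots. The value 54
# roughly corresponds to the contention-based preambles typically available
# per LTE cell.

def expected_successes(n_devices: int, n_slots: int) -> float:
    """Expected number of devices that pick a slot nobody else picked."""
    return n_devices * (1.0 - 1.0 / n_slots) ** (n_devices - 1)

for n in (10, 54, 200, 1000):
    print(f"{n:5d} devices, 54 preambles -> "
          f"{expected_successes(n, 54):6.1f} expected successes")
```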

    BLER-based Adaptive Q-learning for Efficient Random Access in NOMA-based mMTC Networks

    The ever-increasing number of machine-type communications (MTC) devices and the limited available radio resources are leading to the crucial issue of radio access network (RAN) congestion in upcoming 5G and beyond wireless networks. Thus, it is essential to investigate novel techniques to minimize RAN congestion in massive MTC (mMTC) networks while taking the underlying short-packet communications (SPC) into account. In this paper, we propose an adaptive Q-learning (AQL) algorithm based on the block error rate (BLER), an important metric in SPC, for a non-orthogonal multiple access (NOMA) based mMTC system. The proposed method aims to efficiently accommodate MTC devices in the available random access (RA) slots in order to significantly reduce the possible collisions and subsequently enhance the system throughput. Furthermore, in order to obtain more practical insights into the system design, the scenario of imperfect successive interference cancellation (ISIC) is considered, in contrast to the widely used perfect SIC assumption. The performance of the proposed AQL method is compared with recent Q-learning solutions in the literature in terms of system throughput over a range of parameters, such as the number of devices, blocklength, and residual interference caused by ISIC, along with an evaluation of its convergence. Our simulation results illustrate the superiority of the proposed method over the existing techniques in scenarios where the number of devices is higher than the number of available RA time-slots.
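    The sketch below shows, under stated assumptions, how the ingredients named in the abstract could fit together: a finite-blocklength (normal-approximation) BLER estimate shapes the reward, residual interference from imperfect SIC degrades the SINR, and the Q-learning step size is scaled by the estimated BLER. The specific adaptation rule, parameter names and numbers are illustrative assumptions, not the paper's exact AQL algorithm.

```python
import math

def qfunc(x: float) -> float:
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bler_normal_approx(snr_linear: float, blocklength: int, info_bits: int) -> float:
    """Normal (finite-blocklength) approximation of the block error rate."""
    c = math.log2(1.0 + snr_linear)                              # capacity, bits per channel use
    v = (snr_linear * (snr_linear + 2.0)) / (snr_linear + 1.0) ** 2 * math.log2(math.e) ** 2
    arg = (blocklength * c - info_bits + 0.5 * math.log2(blocklength)) / math.sqrt(blocklength * v)
    return qfunc(arg)

def adaptive_q_update(q, slot, snr_linear, blocklength=128, info_bits=256,
                      base_alpha=0.1, residual_interference=0.05):
    """One stateless Q update for the chosen RA slot under an assumed imperfect-SIC model."""
    sinr = snr_linear / (1.0 + residual_interference * snr_linear)  # residual-interference penalty
    bler = bler_normal_approx(sinr, blocklength, info_bits)
    alpha = base_alpha * (1.0 - bler)      # assumed adaptation: step size shrinks as BLER grows
    reward = 1.0 - bler                    # BLER-shaped reward
    q[slot] += alpha * (reward - q[slot])
    return bler

q_table = [0.0] * 10                       # one value per RA slot, for a single device
print("estimated BLER:", adaptive_q_update(q_table, slot=3, snr_linear=10.0))
```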

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating in connecting things among themselves as well as to humans. In the era of the Internet of things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey on existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, as well as non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum are presented. Aligned with the main key performance indicators of 5G and beyond 5G networks, we investigate solutions and standards that enable energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies like artificial intelligence, non-terrestrial networks, and new spectra is elaborated. Finally, future research directions toward beyond 5G IoT networks are pointed out.

    A Systematic Review of LPWAN and Short-Range Network using AI to Enhance Internet of Things

    Artificial intelligence (AI) has recently been used frequently, especially in connection with the Internet of Things (IoT). However, IoT devices cannot work alone; they are assisted by Low Power Wide Area Networks (LPWAN) for long-distance communication and Short-Range Networks for short distances, and few reviews have examined how AI can help LPWAN and Short-Range Networks. The authors therefore took the opportunity to conduct this review. This study aims to systematically review papers on AI for LPWAN and Short-Range Networks that enhance IoT performance; the review is also intended to help systematically maximize LPWAN and Short-Range Network systems to enhance IoT quality and to discuss results that can be applied within a specific scope. The authors utilize the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, conduct a systematic review of all study results in support of the stated objectives, and identify development and related study opportunities. The authors found 79 suitable papers in this systematic review and discuss them. Several technologies are widely used, such as LPWAN in general, with several papers originating from China; many reports came from conferences in the last year, and the related papers were mostly from 2020-2021. The study is expected to inspire experimental studies, to help in finding relevant scientific papers, and to serve as a reference for further reviews.

    URLLC for 5G and Beyond: Requirements, Enabling Incumbent Technologies and Network Intelligence

    The tactile internet (TI) is believed to be the prospective advancement of the internet of things (IoT), comprising human-to-machine and machine-to-machine communication. TI focuses on enabling real-time interactive techniques with a portfolio of engineering, social, and commercial use cases. For this purpose, the prospective fifth-generation (5G) technology focuses on achieving ultra-reliable low latency communication (URLLC) services. TI applications require an extraordinary degree of reliability and very low latency. The 3rd Generation Partnership Project (3GPP) specifies that URLLC is expected to provide 99.99% reliability for a single transmission of a 32-byte packet with a latency of less than one millisecond. 3GPP proposes to include an adjustable orthogonal frequency division multiplexing (OFDM) technique, called 5G new radio (5G NR), as a new radio access technology (RAT). With the emergence of a novel physical-layer RAT, the need arises to design prospective next-generation technologies, especially with a focus on network intelligence. In such situations, machine learning (ML) techniques are expected to be essential in designing intelligent network resource allocation protocols that meet the 5G NR URLLC requirements. Therefore, in this survey, we present the possibility of using the federated reinforcement learning (FRL) technique, one of the ML techniques, for the 5G NR URLLC requirements and summarize the corresponding achievements for URLLC. We provide a comprehensive discussion of MAC layer channel access mechanisms that enable URLLC in 5G NR for TI. In addition, we identify seven critical future use cases of FRL as potential enablers for URLLC in 5G NR.
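    As a hedged illustration of the generic federated reinforcement learning loop the survey refers to (the environment, reward model and aggregation period below are placeholders, not anything specified in the paper), each device runs local Q-learning and a server periodically averages the local tables, federated-averaging style:

```python
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, N_ACTIONS, ROUNDS, LOCAL_STEPS, ALPHA = 5, 4, 20, 50, 0.1

local_q = [np.zeros(N_ACTIONS) for _ in range(N_AGENTS)]   # one table per device
true_reward = rng.random((N_AGENTS, N_ACTIONS))            # stand-in local environments

for _ in range(ROUNDS):
    # Local learning phase: each device updates its own Q-table on its own data.
    for a in range(N_AGENTS):
        for _ in range(LOCAL_STEPS):
            action = rng.integers(N_ACTIONS) if rng.random() < 0.1 \
                     else int(local_q[a].argmax())
            r = true_reward[a, action] + 0.1 * rng.standard_normal()
            local_q[a][action] += ALPHA * (r - local_q[a][action])

    # Aggregation phase: the server averages the tables and broadcasts the result.
    global_q = np.mean(local_q, axis=0)
    local_q = [global_q.copy() for _ in range(N_AGENTS)]

print("aggregated Q-values after federated rounds:", np.round(global_q, 3))
```

    Keeping the raw experience on the devices and exchanging only the learned tables is what makes the federated variant attractive when latency and signalling budgets are as tight as in URLLC.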

    Novel Reinforcement Learning based Power Control and Subchannel Selection Mechanism for Grant-Free NOMA URLLC-Enabled Systems

    By reducing the waiting time due to the scheduling process and exploiting multi-access transmission, grant-free non-orthogonal multiple access (GF-NOMA) has been considered a promising access technology for URLLC-enabled 5G systems with strict requirements on reliability and latency. However, GF-NOMA-based systems can suffer from severe interference caused by the grant-free (GF) access manner, which may degrade the system performance and violate the URLLC-related requirements. To overcome this issue, the paper proposes a novel reinforcement learning (RL)-based random access (RA) protocol, based on which each device can learn from its previous decisions and their corresponding performance to select the best subchannel and transmit power level for data transmission so as to avoid strong cross-interference. The learning-based framework is developed to maximize the system access efficiency, which is defined as the ratio between the number of successful transmissions and the number of subchannels. Simulation results show that the proposed framework can improve the system access efficiency significantly in overloaded scenarios.
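    A minimal sketch of the selection mechanism described above, under an assumed success model in which a transmission succeeds whenever no other device picks the same (subchannel, power level) pair, i.e., distinct power levels on one subchannel are taken to be separable by SIC; the reward shaping and parameters are illustrative, not the paper's protocol:

```python
import numpy as np

N_DEVICES, N_SUBCH, N_POWER, N_FRAMES = 12, 4, 3, 2000
ALPHA, EPSILON = 0.1, 0.1
N_ACTIONS = N_SUBCH * N_POWER                  # joint (subchannel, power level) actions

rng = np.random.default_rng(2)
Q = np.zeros((N_DEVICES, N_ACTIONS))

for _ in range(N_FRAMES):
    greedy = Q.argmax(axis=1)
    explore = rng.random(N_DEVICES) < EPSILON
    actions = np.where(explore, rng.integers(0, N_ACTIONS, N_DEVICES), greedy)

    counts = np.bincount(actions, minlength=N_ACTIONS)
    for d in range(N_DEVICES):
        unique = counts[actions[d]] == 1       # nobody else on this (subchannel, power) pair
        reward = 1.0 if unique else -1.0
        Q[d, actions[d]] += ALPHA * (reward - Q[d, actions[d]])

# Access efficiency: successful transmissions divided by the number of subchannels.
final = Q.argmax(axis=1)
successes = np.sum(np.bincount(final, minlength=N_ACTIONS) == 1)
print("access efficiency:", successes / N_SUBCH)
```

    With 12 devices and only 4 subchannels the system is overloaded in the sense of the abstract, yet the joint subchannel-and-power action space offers 12 distinguishable choices, which is what the learned policy tries to exploit.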