
    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine-Type Communications (mMTC) and Ultra-Reliable Low-Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, review recent advances, highlight potential solutions and propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. After highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we present the key features and channel access mechanisms of the emerging cellular IoT standards, namely LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions to the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Among several ML techniques, we focus on the application of the low-complexity Q-learning approach in mMTC scenarios, along with recent advances towards enhancing its learning performance and convergence. 
Finally, we discuss some open research challenges and promising future research directions. Comment: 37 pages, 8 figures, 7 tables; submitted for possible publication in IEEE Communications Surveys and Tutorials
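    The RAN congestion problem summarized above can be made concrete with a standard back-of-the-envelope collision model (not taken from the paper): if N devices each pick one of M contention preambles uniformly at random, a given device succeeds only when no other device picks the same preamble. The sketch below assumes the typical LTE configuration of 54 contention preambles; the function name and numbers are illustrative.

```python
def ra_success_probability(n_devices: int, n_preambles: int = 54) -> float:
    """Probability that a given device's Msg1 preamble collides with no
    other contending device in one random-access opportunity.
    54 contention preambles is a typical LTE setting (64 minus those
    reserved for contention-free access); both numbers are illustrative."""
    return (1.0 - 1.0 / n_preambles) ** (n_devices - 1)

# Success probability collapses as the number of simultaneously
# contending MTC devices grows -- the essence of RAN congestion.
for n in (10, 100, 1000):
    print(f"{n:5d} devices -> P(no collision) = {ra_success_probability(n):.3f}")
```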

    Intelligent RACH Access strategies for M2M Traffic over Cellular Networks

    This thesis investigates the coexistence of Machine-to-Machine (M2M) and Human-to-Human (H2H) traffic sharing the Random Access Channel (RACH) of an existing cellular network, and introduces Q-learning as a means of supporting the M2M traffic. The learning enables an intelligent slot-selection strategy that avoids collisions amongst M2M users during RACH contention. It is applied so that no central entity is involved in the slot-selection process, avoiding changes to the existing network standards. The thesis also introduces a novel back-off scheme for RACH access which provides separate frames for M2M and conventional cellular (H2H) retransmissions and is capable of dynamically adapting the frame size to maximise channel throughput. A Frame ALOHA Q-learning RACH access scheme is developed to realise collision-free RACH access between the H2H and M2M user groups; the scheme introduces separate frames for H2H and M2M to use in both the first attempt and retransmissions. In addition, analytical models are developed to examine the interaction of H2H and M2M traffic on the RACH channel and to evaluate the throughput performance of both slotted-ALOHA and Q-learning based access schemes. In general, it is shown that Q-learning can be effectively applied to M2M traffic, significantly increasing the throughput capability of the channel with respect to conventional slotted-ALOHA access. Dynamic adaptation of the back-off frames is shown to offer further improvements relative to a fixed-frame scheme, and the FA-QL-RACH scheme offers better performance than the QL-RACH and FB-QL-RACH schemes.
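    The distributed slot-selection idea can be sketched with a toy Q-learning loop (an illustrative reading of the scheme with made-up parameter values; the thesis' exact QL-RACH formulation may differ). Each device keeps one Q-value per slot in the frame, transmits greedily in its best slot, and receives reward +1 for a sole transmission and -1 for a collision, so devices tend to settle on distinct slots without any central coordinator.

```python
import random

def ql_rach(n_devices=5, n_slots=5, frames=2000, alpha=0.1, seed=1):
    """Toy distributed QL-RACH: each device independently learns a RACH
    slot inside a fixed frame; +1 reward for a sole (successful)
    transmission, -1 for a collision. No central entity is involved."""
    rng = random.Random(seed)
    Q = [[0.0] * n_slots for _ in range(n_devices)]
    picks = [0] * n_devices
    for _ in range(frames):
        # Greedy slot choice, random tie-breaking among equal Q-values.
        picks = [max(range(n_slots), key=lambda s: (Q[d][s], rng.random()))
                 for d in range(n_devices)]
        for d, s in enumerate(picks):
            reward = 1.0 if picks.count(s) == 1 else -1.0
            Q[d][s] += alpha * (reward - Q[d][s])
    return picks

final = ql_rach()
print("final slot choices:", final, "| distinct slots:", len(set(final)))
```

With enough frames and at least as many slots as devices, the greedy choices typically become orthogonal, i.e. collision-free.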

    Deep Reinforcement Learning Mechanism for Dynamic Access Control in Wireless Networks Handling mMTC

    [EN] One important issue that needs to be addressed in order to provide effective massive deployments of IoT devices is access control. In 5G cellular networks, the Access Class Barring (ACB) method aims at increasing the total successful access probability by randomly delaying access requests. This mechanism is controlled through the barring rate, which can be easily adapted in networks where Human-to-Human (H2H) communications are prevalent. However, in scenarios with massive deployments such as those found in IoT applications, it is not evident how this parameter should be set or how it should adapt to dynamic traffic conditions. We propose a double deep reinforcement learning mechanism to adapt the barring rate of ACB under dynamic conditions. The algorithm is trained with simultaneous H2H and Machine-to-Machine (M2M) traffic, but we perform a separate performance evaluation for each type of traffic. The results show that our proposed mechanism is able to reach a successful access rate of 100% for both H2H and M2M UEs and to reduce the mean number of preamble transmissions while only slightly affecting the mean access delay, even for scenarios with very high load. Its performance also remains stable under the variation of different parameters. (C) 2019 Elsevier B.V. All rights reserved. The research of D. Pacheco-Paramo was supported by Universidad Sergio Arboleda, P.t. Tecnologias para la inclusion social y la competitividad economica, 0.E.6. The research of L. Tello-Oquendo was conducted under project CONV.2018-ING010, Universidad Nacional de Chimborazo. The research of V. Pla and J. Martinez-Bauset was supported by Grant PGC2018-094151-B-I00 (MCIU/AEI/FEDER, UE). Pacheco-Paramo, D. F.; Tello-Oquendo, L.; Pla, V.; Martínez Bauset, J. (2019). Deep Reinforcement Learning Mechanism for Dynamic Access Control in Wireless Networks Handling mMTC. Ad Hoc Networks, 94:1-14. https://doi.org/10.1016/j.adhoc.2019.101939
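    For context, the ACB check whose barring rate the proposed mechanism adapts works roughly as follows (a sketch following the 3GPP TS 36.331 barring rule; parameter values are illustrative and the code is not from the paper):

```python
import random

def acb_check(barring_factor: float, barring_time_s: float,
              rng: random.Random):
    """One ACB attempt: the UE draws a uniform number and may start
    random access only if it falls below the barring factor; otherwise
    it is barred for a randomised back-off, (0.7 + 0.6*rand) * T,
    per the TS 36.331 barring rule."""
    if rng.random() < barring_factor:
        return True, 0.0                      # access attempt allowed now
    wait = (0.7 + 0.6 * rng.random()) * barring_time_s
    return False, wait                        # barred; retry after `wait`

rng = random.Random(0)
results = [acb_check(0.5, 4.0, rng) for _ in range(10_000)]
allowed = sum(ok for ok, _ in results)
print(f"fraction allowed: {allowed / 10_000:.2f}")  # close to the barring factor
```

The deep reinforcement learning controller in the paper learns how to set `barring_factor` online as traffic varies; the check itself is fixed by the standard.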


    Random Access Analysis for Massive IoT Networks Under a New Spatio-Temporal Model: A Stochastic Geometry Approach

    Massive Internet of Things (mIoT) provides an auspicious opportunity to build powerful and ubiquitous connections, but it faces a plethora of new challenges; cellular networks are potential solutions due to their high scalability, reliability, and efficiency. The Random Access CHannel (RACH) procedure is the first step of connection establishment between IoT devices and Base Stations (BSs) in a cellular-based mIoT network, where modelling the interactions between the static properties of the physical-layer network and the dynamic properties of the queue evolving in each IoT device is challenging. To tackle this, we provide a novel traffic-aware spatio-temporal model to analyze RACH in cellular-based mIoT networks, where the physical-layer network is modelled and analyzed based on stochastic geometry in the spatial domain, and the queue evolution is analyzed based on probability theory in the time domain. For performance evaluation, we derive exact expressions for the preamble transmission success probabilities of a randomly chosen IoT device with different RACH schemes in each time slot, which offer insights into the effectiveness of each RACH scheme. Our derived analytical results are verified by realistic simulations capturing the evolution of packets in each IoT device. This mathematical model and analytical framework can be applied to evaluate the performance of other types of RACH schemes in cellular-based networks by simply integrating their preamble transmission principles.
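    A purely time-domain toy version of this interaction (collisions only, with no spatial/SINR component, and with illustrative parameter values) couples Bernoulli packet arrivals, per-device queues and uniform preamble choice:

```python
import random

def simulate_rach(n_devices=200, n_preambles=54, p_arrival=0.05,
                  slots=500, seed=7):
    """Estimate the per-attempt preamble success probability when
    Bernoulli arrivals feed each device's queue and every backlogged
    device sends one uniformly random preamble per slot, succeeding
    only if its preamble is unique in that slot."""
    rng = random.Random(seed)
    queue = [0] * n_devices
    successes = attempts = 0
    for _ in range(slots):
        for d in range(n_devices):          # Bernoulli packet arrivals
            if rng.random() < p_arrival:
                queue[d] += 1
        backlogged = [d for d in range(n_devices) if queue[d] > 0]
        chosen = {}
        for d in backlogged:                # uniform preamble choice
            chosen.setdefault(rng.randrange(n_preambles), []).append(d)
        attempts += len(backlogged)
        for devices in chosen.values():
            if len(devices) == 1:           # unique preamble => success
                queue[devices[0]] -= 1
                successes += 1
    return successes / attempts if attempts else 1.0

print(f"empirical preamble success probability: {simulate_rach():.3f}")
```

Shrinking the preamble pool, or raising the arrival rate, pushes the system into the congested regime the paper analyzes.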

    Cellular networks for smart grid communication

    The next-generation electric power system, known as smart grid, relies on a robust and reliable underlying communication infrastructure to improve the efficiency of electricity distribution. Cellular networks, e.g., LTE/LTE-A systems, appear as a promising technology to facilitate the smart grid evolution. Their inherent performance characteristics and well-established ecosystem could potentially unlock unprecedented use cases, enabling real-time and autonomous distribution grid operations. However, cellular technology was not originally intended for smart grid communication, which involves highly reliable message exchange and massive device connectivity requirements. The fundamental differences between smart grid and human-type communication challenge the classical design of cellular networks and introduce important research questions that have not been sufficiently addressed so far. Motivated by these challenges, this doctoral thesis investigates novel radio access network (RAN) design principles and performance analysis for the seamless integration of smart grid traffic in future cellular networks. Specifically, we focus on addressing the fundamental RAN problems of network scalability in massive smart grid deployments and radio resource management for smart grid and human-type traffic. The main objective of the thesis lies in the design, analysis and performance evaluation of RAN mechanisms that would render cellular networks the key enabler for emerging smart grid applications. The first part of the thesis addresses the radio access limitations in LTE-based networks for reliable and scalable smart grid communication. We first identify the congestion problem in LTE random access that arises in large-scale smart grid deployments. To overcome this, a novel random access mechanism is proposed that can efficiently support real-time distribution automation services with negligible impact on the background traffic. 
Motivated by the stringent reliability requirements of various smart grid operations, we then develop an analytical model of the LTE random access procedure that allows us to assess the performance of event-based monitoring traffic under various load conditions and network configurations. We further extend our analysis to include the relation between the cell size and the availability of orthogonal random access resources, and we identify an additional challenge for reliable smart grid connectivity. To this end, we devise an interference- and load-aware cell planning mechanism that enhances reliability in substation automation services. Finally, we couple the problem of state estimation in wide-area monitoring systems with the reliability challenges in information acquisition. Using our developed analytical framework, we quantify the impact of imperfect communication reliability on state estimation accuracy and provide useful insights for the design of reliability-aware state estimators. The second part of the thesis builds on the previous one and focuses on the RAN problem of resource scheduling and sharing for smart grid and human-type traffic. We introduce a novel scheduler that achieves low latency for distribution automation traffic while resource allocation is performed in a way that keeps the degradation of cellular users to a minimum. In addition, we investigate the benefits of Device-to-Device (D2D) transmission mode for event-based message exchange in substation automation scenarios. We design a joint mode selection and resource allocation mechanism which results in higher data rates with respect to the conventional transmission mode via the base station. An orthogonal resource partition scheme between cellular and D2D links is further proposed to prevent the underutilization of the scarce cellular spectrum. 
The research findings of this thesis aim to deliver novel solutions to important RAN performance issues that arise when cellular networks support smart grid communication.
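    The scheduling principle in the second part (low latency for distribution automation traffic with bounded impact on cellular users) can be illustrated with a deliberately simplified earliest-deadline-first rule; the function and queue format below are hypothetical, not the thesis' actual scheduler:

```python
def schedule(resource_blocks, sg_queue, h2h_queue):
    """Grant resource blocks to smart-grid (SG) packets in order of
    urgency (earliest deadline first); whatever is left over goes to
    H2H users, so H2H degradation is bounded by the SG load.
    Queue entries are (user_id, deadline_in_slots) pairs."""
    grants = []
    for uid, _ in sorted(sg_queue, key=lambda entry: entry[1]):
        if len(grants) == resource_blocks:
            break
        grants.append(uid)
    for uid, _ in h2h_queue:
        if len(grants) == resource_blocks:
            break
        grants.append(uid)
    return grants

# The most urgent SG packet is served first; one block is left for H2H.
print(schedule(3, [("sg2", 1), ("sg1", 5)], [("h1", 9), ("h2", 9)]))
```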

    URLLC for 5G and Beyond: Requirements, Enabling Incumbent Technologies and Network Intelligence

    The Tactile Internet (TI) is believed to be the prospective advancement of the Internet of Things (IoT), comprising human-to-machine and machine-to-machine communication. TI focuses on enabling real-time interactive techniques with a portfolio of engineering, social, and commercial use cases. For this purpose, the prospective 5th generation (5G) technology focuses on achieving ultra-reliable low-latency communication (URLLC) services. TI applications require an extraordinary degree of reliability and latency. The 3rd Generation Partnership Project (3GPP) defines that URLLC is expected to provide 99.999% reliability for a single transmission of a 32-byte packet with a latency of less than one millisecond. 3GPP proposes to include an adjustable orthogonal frequency division multiplexing (OFDM) technique, called 5G New Radio (5G NR), as a new radio access technology (RAT). With the emergence of a new physical-layer RAT, the need arises to design prospective next-generation technologies, especially with a focus on network intelligence. In such situations, machine learning (ML) techniques are expected to be essential in designing intelligent network resource allocation protocols that meet 5G NR URLLC requirements. Therefore, in this survey, we present the possibility of using federated reinforcement learning (FRL), one of the ML techniques, for 5G NR URLLC requirements, and summarize the corresponding achievements for URLLC. We provide a comprehensive discussion of MAC-layer channel access mechanisms that enable URLLC in 5G NR for TI. In addition, we identify seven critical future use cases of FRL as potential enablers for URLLC in 5G NR.
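    The aggregation step at the heart of FRL can be sketched as federated averaging over per-agent Q-tables: agents train locally, and a server averages the tables so that raw experience never leaves the device. This is a generic FedAvg-style sketch, not the survey's specific algorithm:

```python
def federated_average(local_q_tables, weights=None):
    """One federated-averaging round: element-wise weighted mean of the
    agents' local Q-tables (lists of [state][action] values). Uniform
    weights are used when none are given."""
    n = len(local_q_tables)
    weights = weights if weights is not None else [1.0 / n] * n
    n_states = len(local_q_tables[0])
    n_actions = len(local_q_tables[0][0])
    return [[sum(w * q[s][a] for w, q in zip(weights, local_q_tables))
             for a in range(n_actions)] for s in range(n_states)]

# Two agents' toy Q-tables; the global table is their element-wise mean.
q1 = [[1.0, 0.0], [0.0, 2.0]]
q2 = [[3.0, 0.0], [0.0, 4.0]]
print(federated_average([q1, q2]))
```

After each round, the averaged table would be broadcast back to the agents as the starting point for further local learning.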