18 research outputs found

    Deep Reinforcement Learning Mechanism for Dynamic Access Control in Wireless Networks Handling mMTC

    Full text link
[EN] One important issue that must be addressed to enable effective massive deployments of IoT devices is access control. In 5G cellular networks, the Access Class Barring (ACB) method aims to increase the total successful access probability by randomly delaying access requests. This mechanism is controlled through the barring rate, which can be easily adapted in networks where Human-to-Human (H2H) communications are prevalent. However, in scenarios with massive deployments such as those found in IoT applications, it is not evident how this parameter should be set, nor how it should adapt to dynamic traffic conditions. We propose a double deep reinforcement learning mechanism to adapt the barring rate of ACB under dynamic conditions. The algorithm is trained with simultaneous H2H and Machine-to-Machine (M2M) traffic, but we perform a separate performance evaluation for each type of traffic. The results show that our proposed mechanism is able to reach a successful access rate of 100% for both H2H and M2M UEs and to reduce the mean number of preamble transmissions while only slightly affecting the mean access delay, even in scenarios with very high load. Moreover, its performance remains stable under the variation of different parameters. (C) 2019 Elsevier B.V. All rights reserved. The research of D. Pacheco-Paramo was supported by Universidad Sergio Arboleda, P.t. Tecnologias para la inclusion social y la competitividad economica. 0.E.6. The research of L. Tello-Oquendo was conducted under project CONV.2018-ING010, Universidad Nacional de Chimborazo. The research of V. Pla and J. Martinez-Bauset was supported by Grant PGC2018-094151-B-I00 (MCIU/AEI/FEDER, UE). Pacheco-Paramo, D. F.; Tello-Oquendo, L.; Pla, V.; Martínez Bauset, J. (2019). Deep Reinforcement Learning Mechanism for Dynamic Access Control in Wireless Networks Handling mMTC. Ad Hoc Networks. 94:1-14. https://doi.org/10.1016/j.adhoc.2019.101939
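The ACB mechanism that this paper's learning agent tunes can be illustrated with a minimal simulation: each backlogged device draws a uniform random number and attempts access only if it falls below the barring rate; barred devices defer for a random barring time. This is a toy sketch of the standard ACB check, not the paper's double-DRL controller; all names and parameter values here are illustrative.

```python
import random

def acb_attempt(n_devices, barring_rate, rng):
    """Return how many of n_devices pass the ACB check in one access cycle.

    Each device draws q ~ U(0,1) and transmits only if q < barring_rate;
    the others are barred and would wait a random barring time (not modeled).
    """
    return sum(rng.random() < barring_rate for _ in range(n_devices))

rng = random.Random(0)
# With a low barring rate, only a fraction of a burst of 1000 devices
# contends at once, which is what reduces preamble collisions.
passed = acb_attempt(1000, 0.1, rng)
print(passed)  # roughly 100 on average
```

The learning problem the paper addresses is precisely how to pick `barring_rate` over time when the traffic mix and load are unknown and changing.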

    Prioritised Random Access Channel Protocols for Delay Critical M2M Communication over Cellular Networks

    Get PDF
With the ever-increasing technological evolution, current and future generation communication systems are geared towards accommodating Machine-to-Machine (M2M) communication as a necessary prerequisite for the Internet of Things (IoT). Machine-Type Communication (MTC) can sustain many promising applications by connecting a huge number of devices into one network. As current studies indicate, the number of devices is escalating at a high rate. Consequently, the network becomes congested because of its limited capacity when a massive number of devices attempt simultaneous connection through the Random Access Channel (RACH). This results in RACH resource shortage, which can lead to high collision probability and massive access delay. Hence, it is critical to upgrade conventional Random Access (RA) techniques to support a massive number of MTC devices, including Delay-Critical (DC) MTC. This thesis tackles this problem by modelling and optimising the access throughput and access delay performance of massive random access of M2M communications in Long-Term Evolution (LTE) networks. It investigates the performance of different random access schemes in different scenarios. The study begins with the design and inspection of a group-based 2-step Slotted-Aloha RACH (SA-RACH) scheme considering the coexistence of Human-to-Human (H2H) and M2M communication, the latter of which is categorised as Delay-Critical user equipments (DC-UEs) and Non-Delay-Critical user equipments (NDC-UEs). Next, a novel RACH scheme termed the Priority-based Dynamic RACH (PD-RACH) model is proposed, which utilises a coded-preamble-based collision probability model. Finally, Machine Learning being a key enabler of IoT, a Q-learning-based approach has been adopted, and a learning-assisted Prioritised RACH scheme has been developed and investigated to prioritise a specific user group.
The performance analysis of these novel RACH schemes shows promising results compared to that of the conventional RACH.
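The Slotted-Aloha baseline underlying the first scheme has a well-known throughput characteristic: for Poisson offered load G (mean attempts per slot), the expected success rate is S = G·e^(-G), which peaks at 1/e ≈ 0.368 when G = 1. A quick numerical check (purely illustrative, not the thesis's group-based analysis):

```python
import math

def slotted_aloha_throughput(load):
    """Expected successful transmissions per slot for Poisson offered
    load `load` (attempts/slot): S = G * exp(-G)."""
    return load * math.exp(-load)

peak = slotted_aloha_throughput(1.0)
print(round(peak, 3))  # 0.368
```

This 36.8% ceiling is why plain slotted Aloha saturates under massive M2M access and motivates the prioritised and dynamic RACH schemes the thesis develops.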

    5GAuRA. D3.3: RAN Analytics Mechanisms and Performance Benchmarking of Video, Time Critical, and Social Applications

    Get PDF
5GAuRA deliverable D3.3. This is the final deliverable of Work Package 3 (WP3) of the 5GAuRA project, providing a report on the project's developments on the topics of Radio Access Network (RAN) analytics and application performance benchmarking. The focus of this deliverable is to extend and deepen the methods and results provided in the 5GAuRA deliverable D3.2 in the context of specific use scenarios of video, time-critical, and social applications. In this respect, four major topics of WP3 of 5GAuRA, namely edge-cloud-enhanced RAN architecture, a machine-learning-assisted Random Access Channel (RACH) approach, Multi-access Edge Computing (MEC) content caching, and active queue management, are put forward. Specifically, this document provides a detailed discussion of the service level agreement between tenant and service provider in the context of network slicing in Fifth Generation (5G) communication networks. Network slicing is considered a key enabler of the 5G communication system. Legacy telecommunication networks have provided various services to all kinds of customers through a single network infrastructure. In contrast, by deploying network slicing, operators are now able to partition one network into individual slices, each with its own configuration and Quality of Service (QoS) requirements. Many applications across industry open new business opportunities with new business models, and every application instance requires an independent slice with its own network functions and features, whereby every single slice needs an individual Service Level Agreement (SLA). In D3.3, we propose a comprehensive end-to-end structure for the SLA between the tenant and the service provider of a sliced 5G network, which balances the interests of both sides.
The proposed SLA defines the reliability, availability, and performance of delivered telecommunication services in order to ensure that the right information is delivered to the right destination at the right time, safely and securely. We also discuss the metrics of slice-based network SLAs, such as throughput, penalty, cost, revenue, profit, and QoS-related metrics, which are, in the view of 5GAuRA, critical features of the agreement. Peer Reviewed. Postprint (published version).

    Cooperative Deep Reinforcement Learning for Multiple-Group NB-IoT Networks Optimization

    Full text link
NarrowBand-Internet of Things (NB-IoT) is an emerging cellular-based technology that offers a range of flexible configurations for massive IoT radio access from groups of devices with heterogeneous requirements. A configuration specifies the amount of radio resources allocated to each group of devices for random access and for data transmission. Assuming no knowledge of the traffic statistics, the problem is to determine, in an online fashion at each Transmission Time Interval (TTI), the configuration that maximizes the long-term average number of IoT devices that are able to both access the network and deliver data. Given the complexity of optimal algorithms, a Cooperative Multi-Agent Deep Neural Network based Q-learning (CMA-DQN) approach is developed, whereby each DQN agent independently controls one configuration variable for each group. The DQN agents are cooperatively trained in the same environment based on feedback regarding transmission outcomes. CMA-DQN is shown to considerably outperform conventional heuristic approaches based on load estimation. Comment: Submitted for conference publication.
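The cooperative multi-agent idea can be sketched with a tabular stand-in for the paper's CMA-DQN: each agent picks one configuration variable, and all agents learn from the same global reward. The environment, reward shape, and hyperparameters below are toy assumptions, not the paper's NB-IoT model.

```python
import random

ACTIONS = [0, 1, 2]          # e.g. index into per-group resource levels
N_AGENTS = 2                 # one agent per configuration variable
ALPHA, EPS = 0.1, 0.2

q = [{a: 0.0 for a in ACTIONS} for _ in range(N_AGENTS)]
rng = random.Random(1)

def global_reward(actions):
    # Toy reward: served devices peak when both variables are mid-range.
    return -sum((a - 1) ** 2 for a in actions)

for _ in range(2000):
    # Epsilon-greedy selection, independently per agent.
    actions = [rng.choice(ACTIONS) if rng.random() < EPS
               else max(ACTIONS, key=q[i].get)
               for i in range(N_AGENTS)]
    r = global_reward(actions)
    for i, a in enumerate(actions):          # all agents share one reward
        q[i][a] += ALPHA * (r - q[i][a])     # stateless (bandit-style) update

best = [max(ACTIONS, key=q[i].get) for i in range(N_AGENTS)]
print(best)  # typically [1, 1] under this toy reward
```

The key property mirrored here is that no agent observes the others' actions directly; coordination emerges only through the shared feedback signal, which is what keeps the per-agent action space small.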

    Hierarchical beamforming in random access channels

    Get PDF
Managing a massive number of terminals in contention-based multiple access is challenging due to its intrinsically limited efficiency. For example, in the random access channel considered in LTE-A and 5G NR, the Base Station (BS) is only aware of the collided and non-collided preambles. Several time-based protocols have been investigated to redistribute the overload under high terminal activity, thus avoiding congestion. In this work, we explore the use of the spatial domain by means of hierarchical codebook-based beamforming, where the BS selects the appropriate beams as a function of the number of non-collided and collided preambles. Since the activity and placement of terminals may be dynamic over time, the sequential selection of parameters can benefit from a reinforcement learning (RL) framework. We propose an algorithm that can exploit both domains, temporal and spatial, with the goal of reducing collisions and transmission delay. Our approach is able to efficiently learn whenever there is a non-homogeneous spatial distribution of terminals and adapt the spatial beams accordingly. The work of A. Agustin was supported by the Spanish Government through the Statistical Learning and Inference for Large Dimensional Communication Systems (ARISTIDES, RTI2018-099722-B-100) Project. The work of J. Vidal and M. Cabrera-Bean was supported by the project ROUTE56 (Agencia Estatal de Investigación, PID2019-104945GB-I00/AEI/10.13039/501100011033), and in part by Grant 2017 SGR 578 (AGAUR, Generalitat de Catalunya). Peer Reviewed. Postprint (author's final draft).
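The hierarchical codebook idea can be illustrated with a simple refinement rule: a wide beam that observes many collided preambles is split into two narrower child beams, spreading the contending terminals across beams. This sketch replaces the paper's RL-driven selection with a plain collision-count threshold, purely to show the mechanism; sector bounds and the threshold are illustrative assumptions.

```python
def split_beams(beams, collisions, threshold=1):
    """Given beams as (lo, hi) angular sectors and per-beam collision
    counts, split any beam whose collisions exceed `threshold` into two
    halves of the hierarchical codebook; quiet beams are left as-is."""
    out = []
    for (lo, hi), c in zip(beams, collisions):
        if c > threshold and hi - lo > 1:
            mid = (lo + hi) // 2
            out += [(lo, mid), (mid, hi)]   # descend one codebook level
        else:
            out.append((lo, hi))
    return out

# One wide beam covering sectors [0, 8) with heavy collisions gets refined,
# while under light activity it would stay wide (cheaper to sweep).
print(split_beams([(0, 8)], [5]))  # [(0, 4), (4, 8)]
```

The RL framework in the paper effectively learns when such splits pay off, given that the spatial distribution of terminals is unknown and time-varying.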

    Congestion Control for Massive Machine-Type Communications: Distributed and Learning-Based Approaches

    Get PDF
The Internet of Things (IoT) is going to shape the future of wireless communications by allowing seamless connections among a wide range of everyday objects. Machine-to-Machine (M2M) communication is known to be the enabling technology for the development of IoT. With M2M, devices are allowed to interact and exchange data with little or no human intervention. Recently, M2M communication, also referred to as machine-type communication (MTC), has received increased attention due to its potential to support diverse applications including eHealth, industrial automation, intelligent transportation systems, and smart grids. M2M communication has specific features and requirements that differ from those of traditional human-to-human (H2H) communication. As specified by the Third Generation Partnership Project (3GPP), MTC devices are inexpensive, low-power, and mostly low-mobility devices. Furthermore, MTC devices are usually characterized by infrequent transmissions of small amounts of data, mainly in the uplink. Most importantly, the number of MTC devices is expected to greatly surpass that of H2H devices. Smart cities are an example of such mass-scale deployment. These features impose various challenges related to efficient energy management, enhanced coverage, and diverse quality of service (QoS) provisioning, among others. The diverse applications of M2M will lead to exponential growth in M2M traffic, and with M2M deployment a massive number of devices are expected to access the wireless network concurrently; hence, network congestion is likely to occur. Cellular networks have been recognized as excellent candidates for M2M support. Indeed, cellular networks are mature, well-established networks with ubiquitous coverage and reliability, which allows cost-effective deployment of M2M communications. However, cellular networks were originally designed for human-centric services with high-cost devices and ever-increasing rate requirements.
Additionally, the conventional random access (RA) mechanism used in Long Term Evolution-Advanced (LTE-A) networks lacks the capability of handling the enormous number of access attempts expected from massive MTC. In particular, this RA technique acts as a performance bottleneck due to frequent collisions that lead to excessive delay and resource wastage. Also, the lengthy handshaking process of the conventional RA technique results in highly expensive signaling, especially for M2M devices with small payloads. Therefore, designing efficient medium access schemes is critical for the survival of M2M networks. In this thesis, we study the uplink access of M2M devices with a focus on overload control and congestion handling. In this regard, we provide two different access techniques, keeping in mind the distinct features and requirements of MTC, including massive connectivity, latency reduction, and energy management. In fact, full information gathering is known to be impractical for such massive networks with a tremendous number of devices. Hence, we preserve low complexity and limited information exchange among different network entities by introducing distributed techniques. Furthermore, machine learning is employed to enhance performance with little or no information exchange at the decision maker. The proposed techniques are assessed via extensive simulations as well as rigorous analytical frameworks. First, we propose an efficient distributed overload control algorithm for M2M with massive access, referred to as M2M-OSA. The proposed algorithm can efficiently allocate the available network resources to a massive number of devices within a relatively small and bounded contention time and with reduced overhead. By resolving collisions, the proposed algorithm is capable of achieving full resource utilization along with reduced average access delay and energy saving.
For Beta-distributed traffic, we provide an analytical evaluation of the performance of the proposed algorithm in terms of access delay, total service time, energy consumption, and blocking probability. This performance assessment accounts for various scenarios, including slightly and seriously congested cases, in addition to finite and infinite retransmission limits for the devices. Moreover, we discuss the non-ideal situations that could be encountered in real-life deployment of the proposed algorithm, along with possible solutions. For further energy saving, we introduce a modified version of M2M-OSA with a traffic regulation mechanism. In the second part of the thesis, we adopt a promising alternative to the conventional random access mechanism, namely the fast uplink grant. The fast uplink grant was first proposed by the 3GPP for latency reduction; it allows the base station (BS) to directly schedule machine-type devices (MTDs) without receiving any scheduling requests. In our work, to handle the major challenges associated with the fast uplink grant, namely active set prediction and optimal scheduling, both non-orthogonal multiple access (NOMA) and learning techniques are utilized. In particular, we propose a two-stage NOMA-based fast uplink grant scheme that first employs multi-armed bandit (MAB) learning to schedule the fast-grant devices with no prior information about their QoS requirements or channel conditions at the BS. Afterwards, NOMA facilitates grant sharing, where pairing is done in a distributed manner to reduce signaling overhead. In the proposed scheme, NOMA plays a major role in decoupling the two major challenges of fast grant schemes by permitting pairing with only active MTDs. Consequently, the wastage of resources due to traffic prediction errors can be significantly reduced. We devise an abstraction model for the source traffic predictor needed for the fast grant such that the prediction error can be evaluated.
Accordingly, the performance of the proposed scheme is analyzed in terms of average resource wastage and outage probability. The simulation results show the effectiveness of the proposed method in saving scarce resources while verifying the accuracy of the analysis. In addition, the ability of the proposed scheme to serve quality MTDs with strict latency requirements is demonstrated.
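The multi-armed bandit component can be sketched with the classic UCB1 index, treating each MTC device as an arm and a successful, useful grant as the reward. This is an illustrative stand-in, not the thesis's exact bandit formulation; the per-device success probabilities and the absence of NOMA pairing are toy assumptions.

```python
import math
import random

def ucb1_schedule(success_prob, rounds=5000, seed=0):
    """Grant one device per round using the UCB1 index; return per-device
    grant counts. success_prob[i] is the (unknown to the scheduler)
    probability that granting device i yields a successful transmission."""
    rng = random.Random(seed)
    n = len(success_prob)
    counts = [0] * n
    values = [0.0] * n
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1          # play each arm once to initialize
        else:
            # Exploit high empirical reward, explore rarely-granted devices.
            arm = max(range(n), key=lambda i: values[i] +
                      math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < success_prob[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return counts

# Device 2 is active/successful most often, so it accumulates most grants.
counts = ucb1_schedule([0.2, 0.5, 0.9])
print(counts.index(max(counts)))  # 2
```

The appeal for fast uplink grant is that the scheduler needs no prior traffic or channel knowledge: exploration cost shrinks as the activity statistics are learned online.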

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    Get PDF
    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with the QoS support along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing RAN congestion problem, and then identify potential advantages, challenges and use cases for the applications of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of low-complexity Q-learning approach in the mMTC scenarios. 
Finally, we discuss some open research challenges and promising future research directions. Comment: 37 pages, 8 figures, 7 tables, submitted for possible future publication in IEEE Communications Surveys and Tutorials.
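One way low-complexity Q-learning is commonly applied in mMTC RACH settings (a toy sketch in that spirit, not this survey's specific scheme) is to let each device keep a Q-value per RA slot, rewarding a collision-free transmission and penalizing collisions, so devices gradually settle on non-colliding slots. Device counts, rewards, and learning rates below are illustrative assumptions.

```python
import random

N_DEVICES, N_SLOTS, ALPHA, EPS = 4, 4, 0.1, 0.1
rng = random.Random(2)
q = [[0.0] * N_SLOTS for _ in range(N_DEVICES)]

for _ in range(3000):
    # Each device picks a slot epsilon-greedily from its own Q-table.
    choices = [rng.randrange(N_SLOTS) if rng.random() < EPS
               else max(range(N_SLOTS), key=q[d].__getitem__)
               for d in range(N_DEVICES)]
    for d, s in enumerate(choices):
        # +1 if the device transmitted alone in its slot, -1 on collision.
        reward = 1.0 if choices.count(s) == 1 else -1.0
        q[d][s] += ALPHA * (reward - q[d][s])

final = [max(range(N_SLOTS), key=q[d].__getitem__) for d in range(N_DEVICES)]
print(sorted(final))  # ideally a permutation such as [0, 1, 2, 3]
```

The attraction highlighted by the survey is the minimal state kept per device (a small Q-table) and the absence of any centralized coordination.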

    Filtering Methods for Efficient Dynamic Access Control in 5G Massive Machine-Type Communication Scenarios

    Full text link
[EN] One of the three main use cases of the fifth generation of mobile networks (5G) is massive machine-type communications (mMTC). The latter refers to highly synchronized accesses to the cellular base stations from a great number of wireless devices, as a product of the automated exchange of small amounts of data. Clearly, efficient mMTC support is required for the Internet of Things (IoT). Nevertheless, the method for switching from idle to connected mode, known as the random access procedure (RAP), has been directly inherited from 4G by 5G, at least until the first phase of standardization. Research has demonstrated that the RAP is inefficient for supporting mMTC; hence, access control schemes are needed to obtain adequate performance. In this paper, we compare the benefits of using different filtering methods to configure an access control scheme included in the 5G standards, the access class barring (ACB), according to the intensity of access requests. These filtering methods are a key component of our proposed ACB configuration scheme, which can lead to more than a three-fold increase in the probability of successfully completing the random access procedure under the most typical network configuration and mMTC scenario. This research has been supported in part by the Ministry of Economy and Competitiveness of Spain under Grant TIN2013-47272-C2-1-R and Grant TEC2015-71932-REDT. The research of I. Leyva-Mayorga was partially funded by grant 383936 CONACYT-GEM 2014. Leyva-Mayorga, I.; Rodríguez-Hernández, M. A.; Pla, V.; Martínez Bauset, J. (2019). Filtering Methods for Efficient Dynamic Access Control in 5G Massive Machine-Type Communication Scenarios. Electronics. 8(1):1-18. https://doi.org/10.3390/electronics8010027
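The filtering-then-configuration idea can be sketched with a simple exponential filter standing in for the paper's filtering methods: a smoothed estimate of the access-request intensity drives the barring factor so that the expected number of simultaneous attempts roughly matches the number of available preambles. The filter gain, preamble count, and traffic trace below are illustrative assumptions.

```python
M_PREAMBLES = 54          # typical contention-based preamble count in LTE-A/5G

def update_estimate(prev_estimate, observed_arrivals, beta=0.3):
    """Exponentially weighted estimate of the access-request intensity."""
    return (1 - beta) * prev_estimate + beta * observed_arrivals

def acb_factor(estimate):
    """Bar devices so that roughly M_PREAMBLES devices contend per slot."""
    return min(1.0, M_PREAMBLES / max(estimate, 1.0))

est = 0.0
for arrivals in [10, 50, 200, 400, 400]:   # a ramping mMTC traffic burst
    est = update_estimate(est, arrivals)

print(acb_factor(est) < 1.0)  # True: heavy estimated load triggers barring
```

The quality of the intensity estimate is exactly what the paper's comparison of filtering methods targets: a noisy or lagging estimate sets the barring factor too high or too low, wasting preambles or over-delaying devices.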

    LSTM-Aided Hybrid Random Access Scheme for 6G Machine Type Communication Networks

    Full text link
In this paper, an LSTM-aided hybrid random access scheme (LSTMH-RA) is proposed to support diverse quality-of-service (QoS) requirements in 6G machine-type communication (MTC) networks, where massive MTC (mMTC) devices and ultra-reliable low-latency communication (URLLC) devices coexist. In the proposed LSTMH-RA scheme, mMTC devices access the network via a timing-advance (TA)-aided four-step procedure to meet the massive access requirement, while the access procedure of the URLLC devices is completed in two steps, coupled with the mMTC devices' access procedure, to reduce latency. Furthermore, we propose an attention-based LSTM prediction model to predict the number of active URLLC devices, thereby determining the parameters of the multi-user detection algorithm to guarantee the latency and reliability requirements of URLLC devices. We analyze the successful access probability of the LSTMH-RA scheme. Numerical results show that, compared with the benchmark schemes, the proposed LSTMH-RA scheme can significantly improve the successful access probability and thus satisfy the diverse QoS requirements of URLLC and mMTC devices.
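The attention idea behind the predictor can be illustrated in miniature without any deep-learning machinery: predict the next activity count as a softmax-weighted mean of the recent history, where observations similar to the latest one receive higher weight. This is a toy stand-in for the paper's attention-based LSTM; the window, similarity score, and data are illustrative assumptions.

```python
import math

def attention_predict(history):
    """Predict the next count as an attention-weighted mean of `history`.

    The query is the most recent observation; similarity is negative
    absolute distance, turned into weights via a numerically stable softmax.
    """
    query = history[-1]
    scores = [-abs(h - query) for h in history]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    return sum(w * h for w, h in zip(weights, history)) / total

# URLLC activations over the last few TTIs: the prediction stays near the
# recent level rather than being dragged down by the stale early samples.
pred = attention_predict([2, 3, 9, 10, 10])
print(round(pred, 2))
```

In the paper, this predicted count feeds the multi-user detection parameters; under- or over-predicting directly trades off URLLC reliability against resource usage.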

    Deep Reinforcement Learning for Real-Time Optimization in NB-IoT Networks

    Get PDF
NarrowBand-Internet of Things (NB-IoT) is an emerging cellular-based technology that offers a range of flexible configurations for massive IoT radio access from groups of devices with heterogeneous requirements. A configuration specifies the amount of radio resources allocated to each group of devices for random access and for data transmission. Assuming no knowledge of the traffic statistics, an important challenge is how to determine, in an online fashion at each Transmission Time Interval (TTI), the configuration that maximizes the long-term average number of served IoT devices. Given the complexity of searching for the optimal configuration, we first develop real-time configuration selection based on tabular Q-learning (tabular-Q), Linear Approximation based Q-learning (LA-Q), and Deep Neural Network based Q-learning (DQN) in the single-parameter single-group scenario. Our results show that the proposed reinforcement-learning-based approaches considerably outperform the conventional heuristic approach based on load estimation (LE-URC) in terms of the number of served IoT devices. This result also indicates that LA-Q and DQN can be good alternatives to tabular-Q, achieving almost the same performance with much less training time. We further advance LA-Q and DQN via Actions Aggregation (AA-LA-Q and AA-DQN) and via Cooperative Multi-Agent learning (CMA-DQN) for the multi-parameter multi-group scenario, thereby solving the problem that Q-learning agents do not converge in high-dimensional configuration spaces. In this scenario, the superiority of the proposed Q-learning approaches over the conventional LE-URC approach grows significantly with the number of configuration dimensions, and the CMA-DQN approach outperforms the other approaches in both throughput and training efficiency.
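The tabular-Q baseline for online configuration selection can be sketched as follows: the state is a coarse congestion level, the action is a resource configuration, and the reward is the number of served devices. The two-level toy environment and reward below are illustrative assumptions, not the paper's NB-IoT simulator.

```python
import random

STATES, ACTIONS = 2, 3          # congestion level x resource configuration
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1
rng = random.Random(3)
Q = [[0.0] * ACTIONS for _ in range(STATES)]

def served(state, action):
    # Toy reward: low congestion is best served by config 0, high by config 2.
    best = 0 if state == 0 else 2
    return 10 - 4 * abs(action - best) + rng.uniform(-1, 1)

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection at each TTI.
    a = rng.randrange(ACTIONS) if rng.random() < EPS \
        else max(range(ACTIONS), key=Q[state].__getitem__)
    r = served(state, a)
    nxt = rng.randrange(STATES)             # congestion evolves randomly
    # Standard Q-learning update with bootstrapped next-state value.
    Q[state][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][a])
    state = nxt

policy = [max(range(ACTIONS), key=Q[s].__getitem__) for s in range(STATES)]
print(policy)  # typically [0, 2] under this toy reward
```

The scaling problem the paper then tackles is visible here: with G groups and two parameters each, the joint action space grows as ACTIONS^(2G), which is what motivates action aggregation and the cooperative multi-agent decomposition.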