68 research outputs found

    Design and analysis of LTE and Wi-Fi schemes for communications of massive machine devices

    Existing communication technologies are designed with specific use cases in mind; however, extending these use cases usually raises interesting challenges. For example, extending existing cellular networks to emerging applications such as Internet of Things (IoT) devices raises the challenge of handling a massive number of devices. In this thesis, we are motivated to investigate the existing schemes used in LTE and Wi-Fi for supporting massive machine devices and to improve on observed performance gaps by designing new schemes that outperform them. This thesis investigates the existing random access protocol in LTE and proposes three schemes to combat the massive device access challenge. The first is a root index reuse and allocation scheme which uses link budget calculations to extract a safe distance for preamble reuse under variable cell size, together with an index allocation algorithm. The second is a dynamic subframe optimization scheme that tackles the challenge from an optimisation perspective. The third is the use of small cells for random access. Simulation and numerical analysis show performance improvements over existing schemes in terms of throughput, access delay and probability of collision; in some cases, over 20% improvement was observed. The proposed schemes provide quicker and more dependable opportunities for machine devices to communicate. Also, in Wi-Fi networks, adapting the transmission rate to dynamic channel conditions is a major challenge. Two algorithms are proposed to address this. The first makes use of contextual information to determine the network state and respond appropriately, whilst the second samples candidate transmission modes and uses the effective throughput to make a decision. The proposed algorithms were compared to several existing rate adaptation algorithms through simulations under various system and channel configurations. They show significant performance improvements in terms of throughput, thus confirming their suitability for dynamic channel conditions.
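
    The root-index reuse idea above hinges on a link-budget calculation that determines how far apart two cells must be before the same preamble root can safely be reused. As a rough illustration only (the path-loss model, transmit power, interference floor and margin below are assumed values, not the thesis' parameters), the following sketch estimates such a reuse distance.

```python
# Hypothetical illustration of a link-budget-based "safe distance" for reusing
# the same PRACH root index in another cell; the path-loss model, powers and
# margin are assumed values, not the thesis' parameters.
import math

def path_loss_db(distance_m, pl0_db=128.1, d0_m=1000.0, exponent=3.76):
    """Log-distance path loss referenced at 1 km (assumed macro-cell values)."""
    return pl0_db + 10.0 * exponent * math.log10(distance_m / d0_m)

def safe_reuse_distance_m(tx_power_dbm=23.0, interference_floor_dbm=-110.0,
                          margin_db=6.0, step_m=100.0, max_m=100_000.0):
    """Smallest distance at which a reused preamble arrives below the assumed
    interference floor by at least the protection margin."""
    d = step_m
    while d < max_m:
        if tx_power_dbm - path_loss_db(d) < interference_floor_dbm - margin_db:
            return d
        d += step_m
    return max_m

print(f"safe root-index reuse distance ~ {safe_reuse_distance_m() / 1000:.1f} km")
```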

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms of the emerging cellular IoT standards, namely LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Among several ML techniques, we focus on the application of a low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
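
    As a concrete, hedged illustration of the kind of low-complexity Q-learning referred to above, the sketch below lets each MTC device independently learn a RACH slot with a stateless (bandit-style) Q-update: a positive reward for a collision-free attempt, a negative reward for a collision. The device count, slot count, rewards and learning parameters are assumptions for illustration, not the survey's exact formulation.

```python
# Illustrative sketch of distributed, low-complexity Q-learning for RACH slot
# selection; devices, slots, rewards and parameters are assumed values.
import random
from collections import Counter

N_DEVICES, N_SLOTS, FRAMES = 20, 20, 500
ALPHA, EPSILON = 0.1, 0.05                     # learning rate, exploration rate

# Tiny random initialisation breaks ties so devices do not all herd to slot 0.
Q = [[random.uniform(0.0, 1e-3) for _ in range(N_SLOTS)] for _ in range(N_DEVICES)]

for _ in range(FRAMES):
    # Each device picks a RACH slot (epsilon-greedy on its own Q-row).
    choices = [random.randrange(N_SLOTS) if random.random() < EPSILON
               else max(range(N_SLOTS), key=lambda s: Q[d][s])
               for d in range(N_DEVICES)]
    load = Counter(choices)
    for d, s in enumerate(choices):
        reward = 1.0 if load[s] == 1 else -1.0  # success only without collision
        Q[d][s] += ALPHA * (reward - Q[d][s])   # stateless (bandit-style) update

collision_free = sum(1 for s in choices if load[s] == 1)
print(f"collision-free devices in the last frame: {collision_free}/{N_DEVICES}")
```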

    5GAuRA. D3.3: RAN Analytics Mechanisms and Performance Benchmarking of Video, Time Critical, and Social Applications

    5GAuRA deliverable D3.3. This is the final deliverable of Work Package 3 (WP3) of the 5GAuRA project, providing a report on the project's developments on the topics of Radio Access Network (RAN) analytics and application performance benchmarking. The focus of this deliverable is to extend and deepen the methods and results provided in the 5GAuRA deliverable D3.2 in the context of specific use scenarios of video, time critical, and social applications. In this respect, four major topics of WP3 of 5GAuRA are put forward, namely: edge-cloud enhanced RAN architecture, a machine learning assisted Random Access Channel (RACH) approach, Multi-access Edge Computing (MEC) content caching, and active queue management. Specifically, this document provides a detailed discussion on the service level agreement between tenant and service provider in the context of network slicing in Fifth Generation (5G) communication networks. Network slicing is considered a key enabler of the 5G communication system. Legacy telecommunication networks have provided various services to all kinds of customers through a single network infrastructure. In contrast, by deploying network slicing, operators are now able to partition one network into individual slices, each with its own configuration and Quality of Service (QoS) requirements. There are many applications across industry that open new business opportunities with new business models. Every application instance requires an independent slice with its own network functions and features, whereby every single slice needs an individual Service Level Agreement (SLA). In D3.3, we propose a comprehensive end-to-end structure for the SLA between the tenant and the service provider of a sliced 5G network, which balances the interests of both sides. The proposed SLA defines the reliability, availability, and performance of delivered telecommunication services in order to ensure that the right information is delivered to the right destination at the right time, safely and securely. We also discuss the metrics of a slice-based network SLA, such as throughput, penalty, cost, revenue, profit, and QoS-related metrics, which are, in the view of 5GAuRA, critical features of the agreement.
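
    To make the slice-based SLA metrics listed above (throughput, penalty, cost, revenue, profit) concrete, the toy calculation below shows one way a provider's monthly profit for a slice could be derived from agreed thresholds and measured QoS. All names, thresholds and monetary values are invented for illustration and are not taken from D3.3.

```python
# Illustrative-only sketch of slice-level SLA accounting; the thresholds,
# rates and penalty rule are assumptions, not values from the deliverable.
from dataclasses import dataclass

@dataclass
class SliceSLA:
    agreed_throughput_mbps: float
    agreed_availability: float      # e.g. 0.999
    revenue_per_month: float        # what the tenant pays the provider
    penalty_per_violation: float    # rebate per violated metric
    operating_cost: float           # provider's cost to run the slice

def monthly_profit(sla: SliceSLA, measured_tput_mbps: float,
                   measured_availability: float) -> float:
    """Profit = revenue - operating cost - penalties for missed SLA targets."""
    violations = 0
    if measured_tput_mbps < sla.agreed_throughput_mbps:
        violations += 1
    if measured_availability < sla.agreed_availability:
        violations += 1
    penalty = violations * sla.penalty_per_violation
    return sla.revenue_per_month - sla.operating_cost - penalty

sla = SliceSLA(agreed_throughput_mbps=50.0, agreed_availability=0.999,
               revenue_per_month=10_000.0, penalty_per_violation=1_500.0,
               operating_cost=6_000.0)
print(monthly_profit(sla, measured_tput_mbps=42.0, measured_availability=0.9995))
```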

    Priority-based initial access for URLLC traffic in massive IoT networks: Schemes and performance analysis

    At a density of one million devices per square kilometer, the tens of billions of devices, objects, and machines that form a massive Internet of Things (mIoT) require ubiquitous connectivity. Among this massive number of IoT devices, a portion require ultra-reliable low latency communication (URLLC) provided via fifth generation (5G) networks, bringing many new challenges due to the stringent service requirements. Despite a surge of research efforts on URLLC and mIoT, access mechanisms which include both URLLC and massive machine type communications (mMTC) have not yet been investigated in depth. In this paper, we propose three novel schemes to facilitate priority-based initial access for mIoT/mMTC devices that require URLLC services while also considering the requirements of other mIoT/mMTC devices. Based on a Long Term Evolution-Advanced (LTE-A) or 5G New Radio frame structure, the proposed schemes enable device grouping based on device vicinity and/or URLLC requirements and allocate dedicated preambles to grouped devices, supported by flexible slot allocation for random access. These schemes not only increase the reliability and minimize the delay of URLLC devices but also improve the performance of all involved mIoT devices. Furthermore, we evaluate the performance of the proposed schemes through mathematical analysis as well as simulations and compare the results with the performance of both the legacy LTE-A based initial access scheme and a grant-free transmission scheme.
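
    A simple back-of-envelope calculation illustrates why dedicating preambles to a small, prioritized URLLC group improves reliability: with M devices independently picking one of K preambles, a tagged device succeeds only if no other device picks the same preamble, i.e. with probability (1 - 1/K)^(M-1). The sketch below compares an assumed heavily loaded shared pool against an assumed small dedicated pool; the numbers are illustrative and are not the paper's analytical model.

```python
# Back-of-envelope check (not the paper's analysis) of why a dedicated
# preamble pool for a small URLLC group helps: with M contending devices and
# K preambles, a tagged device succeeds only if nobody else picks its preamble.
def success_probability(m_devices: int, k_preambles: int) -> float:
    """P(no other device picks the tagged device's preamble) = (1 - 1/K)^(M-1)."""
    return (1.0 - 1.0 / k_preambles) ** (m_devices - 1)

# Assumed numbers: 54 contention-based preambles shared by a large population
# versus 10 preambles reserved for a group of 10 URLLC devices.
print(f"shared pool   : {success_probability(m_devices=200, k_preambles=54):.3f}")
print(f"dedicated pool: {success_probability(m_devices=10, k_preambles=10):.3f}")
```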

    Contributions to IEEE 802.11-based long range communications

    The most essential part of the Internet of Things (IoT) infrastructure is the wireless communication system that acts as a bridge for the delivery of data and control messages between the connected things and the Internet. Since the conception of the IoT, a large number of promising applications and technologies have been developed, which will change different aspects of our daily life. However, the existing wireless technologies lack the ability to support a huge amount of data exchange from many battery-driven devices spread over a wide area. IEEE 802.11ah is an IoT-enabling technology designed to support this paradigm, in which the efficient management of thousands of devices is a key function. It is one of the most promising and appealing standards, aiming to bridge the gap between traditional mobile networks and the demands of the IoT. To this aim, IEEE 802.11ah provides the Restricted Access Window (RAW) mechanism, which reduces contention by enabling transmissions for small groups of stations. Optimal grouping of RAW stations requires an evaluation of many possible configurations. In this thesis, we first discuss the main PHY and MAC layer amendments proposed for IEEE 802.11ah. Furthermore, we investigate the operability of IEEE 802.11ah as a backhaul link to connect devices over possibly long distances. Additionally, we compare the aforementioned standard with previous notable IEEE 802.11 amendments (i.e., IEEE 802.11n and IEEE 802.11ac) in terms of throughput (with and without frame aggregation) by utilizing the most robust modulation schemes. The results show improved performance of IEEE 802.11ah (in terms of power received at long range while experiencing different packet error rates) compared to previous IEEE 802.11 standards. Additionally, we expose the capabilities of future IEEE 802.11ah in supporting different IoT applications, and we provide a brief overview of the technology contenders that are competing to cover the IoT communications framework. Numerical results are presented showing how the future IEEE 802.11ah specification offers the features required by IoT communications, thus putting forward IEEE 802.11ah as a technology to cater to the needs of the Internet of Things paradigm. Finally, we propose an analytical model (named the e-model) that provides an evaluation of RAW configuration performance, allowing fast adaptation of RAW grouping policies in accordance with varying channel conditions. We base the e-model on known saturation models, which we adapted to include IEEE 802.11ah's PHY and MAC layer modifications and to support different bit rates and packet sizes. As a proof of concept, we use the proposed model to compare the performance of different grouping strategies, showing that the e-model is a useful analysis tool in RAW-enabled scenarios. We validate the model with an existing IEEE 802.11ah implementation for ns-3.
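
    To give a feel for why the RAW grouping described above reduces contention (this is not the e-model proposed in the thesis), the toy estimate below treats each RAW group as a slotted contention system with an assumed fixed transmit probability and reports how the per-slot success probability changes as the same station population is split into more groups.

```python
# Toy slotted-contention estimate (not the thesis' e-model) showing why RAW
# grouping helps: the per-station transmit probability tau is an assumed
# constant rather than the result of an 802.11ah backoff analysis.
def slot_success_prob(n_stations: int, tau: float = 0.05) -> float:
    """P(exactly one of n stations transmits in a given slot)."""
    return n_stations * tau * (1.0 - tau) ** (n_stations - 1)

n_total = 200
for groups in (1, 4, 16):
    per_group = n_total // groups
    print(f"{groups:2d} RAW group(s), {per_group:3d} stations each: "
          f"slot success probability = {slot_success_prob(per_group):.3f}")
```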

    Towards reliable communication in LTE-A connected heterogeneous machine to machine network

    Machine to machine (M2M) communication is an emerging technology that enables heterogeneous devices to communicate with each other without human intervention, thus forming the so-called Internet of Things (IoT). Wireless cellular networks (WCNs) play a significant role in the successful deployment of M2M communication. In particular, the ongoing massive deployment of Long Term Evolution-Advanced (LTE-A) makes it possible to establish machine type communication (MTC) in most urban and remote areas, and by using the LTE-A backhaul network, seamless communication can be established between MTC devices and applications. However, extensive network coverage alone does not ensure a successful implementation of M2M communication in LTE-A, and several challenges remain. Energy-efficient, reliable transmission is perhaps the most compelling demand of various M2M applications. Among the factors affecting the reliability of M2M communication are high end-to-end delay and high bit error rate. The objective of this thesis is to provide reliable M2M communication in the LTE-A network. To this aim, to alleviate signalling congestion on the air interface and to enable efficient data aggregation, we consider a cluster-based architecture where the MTC devices are grouped into a number of clusters and traffic is forwarded through special nodes called cluster heads (CHs) to the base station (BS) using single- or multi-hop transmissions. In many deployment scenarios, some machines are allowed to move and change their location in the deployment area with very low mobility. In practice, the performance of data transmission often degrades as the distance between neighboring CHs increases, and the CH then needs to be re-selected. However, frequent re-selection of CHs adversely affects routing and the reconfiguration of resource allocation associated with CH-dependent protocols. In addition, the link quality between CH-CH and CH-BS is often affected by various dynamic environmental factors such as heat and humidity, obstacles and RF interference. Since a CH aggregates the traffic from all cluster members, failure of the CH means that the whole cluster fails. Many solutions have been proposed to combat the error-prone wireless channel, such as automatic repeat request (ARQ) and multipath routing. Although these techniques improve communication reliability, they compromise communication efficiency: in the former scheme, the transmitter retransmits the whole packet even though part of it has been received correctly, and in the latter, the receiver may receive the same information over multiple paths; thus both techniques are bandwidth and energy inefficient. In addition, with retransmission, the overall end-to-end delay may exceed the maximum allowable delay budget. Based on the aforementioned observations, we identify the CH-to-CH channel as one of the bottlenecks to providing reliable communication in a cluster-based multi-hop M2M network and present a full solution to support fountain-coded cooperative communications. Our solution covers many aspects, from relay selection to cooperative formation, to meet the user's QoS requirements. In the first part of the thesis, we design a rateless-coded-incremental-relay selection (RCIRS) algorithm based on greedy techniques to guarantee the required data rate with minimum cost. After that, we develop fountain-coded cooperative communication protocols to facilitate data transmission between two neighboring CHs.
    In the second part, we propose joint network and fountain coding schemes for reliable communication. By coupling channel coding and network coding in the physical layer, joint network and fountain coding schemes efficiently exploit the redundancy of both codes and effectively combat the detrimental effect of fading in wireless channels. In the proposed scheme, after correctly decoding the information from different sources, a relay node applies network and fountain coding to the received signals and then transmits to the destination in a single transmission. The proposed schemes therefore exploit diversity and coding gain to improve system performance. In the third part, we focus on reliable uplink transmission between CHs and the BS, where CHs transmit to the BS directly or with the help of LTE-A relay nodes (RNs). We investigate both type-I and type-II enhanced LTE-A networks and propose a set of joint network and fountain coding schemes to enhance link robustness. Finally, the proposed solutions are evaluated through extensive numerical simulations, and the results are compared with related works found in the literature.
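
    As background for the fountain-coded cooperation discussed above, the sketch below implements a minimal Luby Transform (LT) encoder and peeling decoder over small integer symbols. It uses the ideal soliton degree distribution purely for brevity (practical systems prefer the robust soliton) and illustrates rateless coding in general, not the joint network and fountain coding schemes proposed in the thesis.

```python
# Minimal Luby Transform (LT) fountain code sketch: an illustration of
# rateless coding in general, not the thesis' proposed scheme.
import random

def ideal_soliton(k):
    # weights[d] is proportional to P(degree = d); index 0 is unused.
    return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(source, rng):
    """Produce one encoded packet: (set of source indices, XOR of their values)."""
    k = len(source)
    degree = rng.choices(range(k + 1), weights=ideal_soliton(k))[0]
    neighbours = set(rng.sample(range(k), degree))
    value = 0
    for i in neighbours:
        value ^= source[i]
    return neighbours, value

def lt_decode(encoded, k):
    """Peeling decoder: repeatedly resolve degree-1 packets and substitute."""
    decoded = [None] * k
    packets = [[set(nb), val] for nb, val in encoded]
    changed = True
    while changed:
        changed = False
        for nb, val in packets:
            if len(nb) != 1:
                continue
            i = next(iter(nb))
            if decoded[i] is None:
                decoded[i] = val
            # Remove the now-known symbol from every packet that contains it.
            for pkt in packets:
                if i in pkt[0]:
                    pkt[0].remove(i)
                    pkt[1] ^= decoded[i]
            changed = True
    return decoded

rng = random.Random(7)
k = 16
source = [rng.randrange(256) for _ in range(k)]
received = [lt_encode(source, rng) for _ in range(3 * k)]   # rateless: keep sending
recovered = lt_decode(received, k)
print(f"recovered {sum(v is not None for v in recovered)} of {k} source symbols")
```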

    D4.3 Final Report on Network-Level Solutions

    Research activities in METIS reported in this document focus on proposing solutions to the network-level challenges of future wireless communication networks. A large variety of scenarios is considered and a set of technical concepts is proposed to serve the needs envisioned for 2020 and beyond. This document provides the final findings on several network-level aspects and groups of solutions that are considered essential for designing future 5G solutions. Specifically, it elaborates on:
    - Interference management and resource allocation schemes
    - Mobility management and robustness enhancements
    - Context-aware approaches
    - D2D and V2X mechanisms
    - Technology components focused on clustering
    - Dynamic reconfiguration enablers
    These novel network-level technology concepts are evaluated against requirements defined by METIS for future 5G systems. Moreover, functional enablers which can support the solutions mentioned above are proposed. We find that the network-level solutions and technology components developed during the course of METIS complement the lower-layer technology components and thereby effectively contribute to meeting 5G requirements and targets.
    Aydin, O.; Valentin, S.; Ren, Z.; Botsov, M.; Lakshmana, TR.; Sui, Y.; Sun, W.... (2015). D4.3 Final Report on Network-Level Solutions. http://hdl.handle.net/10251/7675