
    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in mMTC scenarios, along with recent advances towards enhancing its learning performance and convergence. 
Finally, we discuss some open research challenges and promising future research directions. Comment: 37 pages, 8 figures, 7 tables, submitted for possible future publication in IEEE Communications Surveys and Tutorials
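As an illustration of the low-complexity Q-learning approach discussed in the paper, the sketch below applies the classic ALOHA-Q idea to random-access slot selection: each device independently learns a preferred slot from collision feedback alone, with no coordination. The device/slot counts, rewards (+1 for a collision-free frame, -1 for a collision) and learning rate are illustrative assumptions, not values taken from the paper.

```python
import random

def q_learning_ra(num_devices=20, num_slots=20, frames=4000,
                  alpha=0.1, eps=0.1):
    """Distributed Q-learning (ALOHA-Q style) for RA slot selection.

    Each device keeps one Q-value per slot and, after every frame,
    updates only the slot it used: +1 if it transmitted alone
    (success), -1 if it collided with another device.
    """
    Q = [[0.0] * num_slots for _ in range(num_devices)]
    for _ in range(frames):
        choices = []
        for d in range(num_devices):
            if random.random() < eps:                 # explore
                s = random.randrange(num_slots)
            else:                                     # exploit (random tie-break)
                best = max(Q[d])
                s = random.choice([i for i, q in enumerate(Q[d]) if q == best])
            choices.append(s)
        for d, s in enumerate(choices):
            reward = 1.0 if choices.count(s) == 1 else -1.0
            Q[d][s] += alpha * (reward - Q[d][s])
    # fraction of devices whose learned (greedy) slot is collision-free
    greedy = [Q[d].index(max(Q[d])) for d in range(num_devices)]
    return sum(greedy.count(s) == 1 for s in greedy) / num_devices

random.seed(0)
unique_frac = q_learning_ra()
print("collision-free fraction:", unique_frac)
```

With equal numbers of devices and slots, the devices tend to settle into near-orthogonal slot assignments, which is the intuition behind using such learning to relieve RA congestion.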

    Contributions to IEEE 802.11-based long range communications

    The most essential part of the Internet of Things (IoT) infrastructure is the wireless communication system that acts as a bridge for the delivery of data and control messages between the connected things and the Internet. Since the conception of the IoT, a large number of promising applications and technologies have been developed, which will change different aspects of our daily life. However, existing wireless technologies lack the ability to support a huge amount of data exchange from many battery-driven devices spread over a wide area. To support the IoT paradigm, IEEE 802.11ah has emerged as an IoT-enabling technology for which the efficient management of thousands of devices is a key function. It is one of the most promising and appealing standards, aiming to bridge the gap between traditional mobile networks and the demands of the IoT. To this end, IEEE 802.11ah provides the Restricted Access Window (RAW) mechanism, which reduces contention by enabling transmissions for small groups of stations. Optimal grouping of RAW stations requires an evaluation of many possible configurations. In this thesis, we first discuss the main PHY and MAC layer amendments proposed for IEEE 802.11ah. Furthermore, we investigate the operability of IEEE 802.11ah as a backhaul link to connect devices over possibly long distances. Additionally, we compare the aforementioned standard with previous notable IEEE 802.11 amendments (i.e. IEEE 802.11n and IEEE 802.11ac) in terms of throughput (with and without frame aggregation), utilizing the most robust modulation schemes. The results show improved performance of IEEE 802.11ah (in terms of power received at long range while experiencing different packet error rates) compared to previous IEEE 802.11 standards. Additionally, we examine the capabilities of IEEE 802.11ah in supporting different IoT applications. 
In addition, we provide a brief overview of the technology contenders competing to cover the IoT communications framework. Numerical results are presented showing how the IEEE 802.11ah specification offers the features required by IoT communications, thus putting forward IEEE 802.11ah as a technology to cater to the needs of the Internet of Things paradigm. Finally, we propose an analytical model (named the e-model) that provides an evaluation of RAW configuration performance, allowing fast adaptation of RAW grouping policies in accordance with varying channel conditions. We base the e-model on known saturation models, which we adapted to include IEEE 802.11ah's PHY and MAC layer modifications and to support different bit rates and packet sizes. As a proof of concept, we use the proposed model to compare the performance of different grouping strategies, showing that the e-model is a useful analysis tool in RAW-enabled scenarios. We validate the model against an existing IEEE 802.11ah implementation for ns-3.
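The intuition behind RAW grouping can be shown with a much simpler slotted-access abstraction than the thesis's e-model: if each contending station transmits in a slot with some fixed probability, splitting the stations into RAW groups raises the per-slot success probability inside each group. The station count and transmission probability below are illustrative assumptions, not parameters from the thesis.

```python
def slot_success_prob(n, tau):
    """P(exactly one of n stations transmits in a slot), where each
    station transmits independently with probability tau."""
    return n * tau * (1 - tau) ** (n - 1)

def raw_group_success(total_stations, num_groups, tau=0.05):
    """Per-slot success probability inside one RAW group when
    total_stations are split evenly across num_groups groups."""
    n = total_stations // num_groups
    return slot_success_prob(n, tau)

# More groups -> fewer contenders per RAW slot -> fewer collisions,
# up to the point where groups are under-populated.
for g in (1, 2, 4, 8):
    print(f"{g} groups: per-slot success {raw_group_success(64, g):.3f}")
```

This also hints at why optimal grouping needs evaluation of many configurations: the success probability is not monotone in the number of groups, so a model such as the e-model is needed to pick a good operating point quickly.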

    Statistical priority-based uplink scheduling for M2M communications

    Currently, the worldwide network is witnessing major efforts to transform it from being the Internet of humans only to becoming the Internet of Things (IoT). It is expected that Machine Type Communication Devices (MTCDs) will overwhelm cellular networks with huge traffic of data collected from their environments and sent to other remote MTCDs for processing, thus forming what is known as Machine-to-Machine (M2M) communications. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) appear to be the best technologies to support M2M communications due to their native IP support. LTE can provide high capacity, flexible radio resource allocation and scalability, which are the required pillars for supporting the expected large numbers of deployed MTCDs. Supporting M2M communications over LTE faces many challenges, including medium access control and the allocation of radio resources among MTCDs. The problem of radio resource allocation, or scheduling, originates from the nature of M2M traffic. This traffic consists of a large number of small data packets, with specific deadlines, generated by a potentially massive number of MTCDs. M2M traffic is therefore mostly in the uplink direction, i.e. from MTCDs to the base station (known as the eNB in LTE terminology). These characteristics impose design requirements on M2M scheduling techniques, such as the need to deliver a huge amount of traffic within certain deadlines using scarce radio resources. This is the main motivation behind this thesis work. In this thesis, we introduce a novel M2M scheduling scheme that utilizes what we term the "statistical priority" in determining the importance of the information carried by data packets. Statistical priority is calculated based on statistical features of the data such as value similarity, trend similarity and auto-correlation. 
These statistics are computed by the MTCDs and reported to the serving eNBs along with other reports such as channel state. Statistical priority is then used to assign priorities to data packets so that the scarce radio resources are allocated to the MTCDs that are sending statistically important information. This helps avoid spending limited radio resources on redundant or repetitive data, which is a common situation in M2M communications. To validate our technique, we perform a simulation-based comparison among the main scheduling techniques and our proposed statistical priority-based scheduling technique. The comparison was conducted in a network that includes different types of MTCDs, such as environmental monitoring sensors, surveillance cameras and alarms. The results show that our proposed statistical priority-based scheduler outperforms the other schedulers: it has the lowest loss of alarm data packets and the highest rate of delivering critical data packets that carry non-redundant information, for both environmental monitoring and video traffic. This indicates that the proposed technique is the most efficient in utilizing limited radio resources as compared to the other techniques.
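One way such a redundancy metric could look is sketched below: a window of sensor readings is scored by how little its consecutive samples change, so that near-constant (redundant) data gets low priority and erratic, alarm-like data gets high priority. This is a toy sketch only; the thesis's exact definitions of value similarity, trend similarity and auto-correlation, and how they are combined, may differ, and the `tol` threshold and sample values are invented for illustration.

```python
def value_similarity(xs, tol=0.5):
    """Fraction of consecutive samples that differ by less than tol."""
    pairs = list(zip(xs, xs[1:]))
    return sum(abs(b - a) < tol for a, b in pairs) / len(pairs)

def statistical_priority(window, tol=0.5):
    """Highly self-similar (redundant) windows get low priority;
    windows carrying new information get high priority."""
    return 1.0 - value_similarity(window, tol)

flat  = [21.0, 21.0, 21.1, 21.0, 21.0, 21.1]   # near-constant temperature
burst = [21.0, 35.5, 19.2, 40.1, 18.7, 36.9]   # erratic, alarm-like readings
print(statistical_priority(flat), statistical_priority(burst))  # 0.0 1.0
```

An eNB receiving these scores would then favor the bursty device when allocating uplink resources, which matches the scheduler behavior described above.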

    Congestion Control for Massive Machine-Type Communications: Distributed and Learning-Based Approaches

    The Internet of things (IoT) is going to shape the future of wireless communications by allowing seamless connections among a wide range of everyday objects. Machine-to-machine (M2M) communication is known to be the enabling technology for the development of the IoT. With M2M, devices are allowed to interact and exchange data with little or no human intervention. Recently, M2M communication, also referred to as machine-type communication (MTC), has received increased attention due to its potential to support diverse applications including eHealth, industrial automation, intelligent transportation systems, and smart grids. M2M communication is known to have specific features and requirements that differ from those of traditional human-to-human (H2H) communication. As specified by the Third Generation Partnership Project (3GPP), MTC devices are inexpensive, low-power, and mostly low-mobility devices. Furthermore, MTC devices are usually characterized by infrequent transmissions, small amounts of data, and mainly uplink traffic. Most importantly, the number of MTC devices is expected to highly surpass that of H2H devices. Smart cities are an example of such mass-scale deployment. These features impose various challenges related to efficient energy management, enhanced coverage and diverse quality of service (QoS) provisioning, among others. The diverse applications of M2M are going to lead to exponential growth in M2M traffic. Alongside M2M deployment, a massive number of devices are expected to access the wireless network concurrently, so network congestion is likely to occur. Cellular networks have been recognized as excellent candidates for M2M support. Indeed, cellular networks are mature, well-established networks with ubiquitous coverage and reliability, which allows cost-effective deployment of M2M communications. However, cellular networks were originally designed for human-centric services with high-cost devices and ever-increasing rate requirements. 
Additionally, the conventional random access (RA) mechanism used in Long Term Evolution-Advanced (LTE-A) networks lacks the capability of handling the enormous number of access attempts expected from massive MTC. In particular, this RA technique acts as a performance bottleneck due to frequent collisions that lead to excessive delay and resource wastage. Also, the lengthy handshaking process of the conventional RA technique results in highly expensive signaling, specifically for M2M devices with small payloads. Therefore, designing efficient medium access schemes is critical for the survival of M2M networks. In this thesis, we study the uplink access of M2M devices with a focus on overload control and congestion handling. In this regard, we provide two different access techniques, keeping in mind the distinct features and requirements of MTC, including massive connectivity, latency reduction, and energy management. In fact, full information gathering is known to be impractical for such massive networks with a tremendous number of devices. Hence, we preserve low complexity and limited information exchange among different network entities by introducing distributed techniques. Furthermore, machine learning is employed to enhance performance with no or limited information exchange at the decision maker. The proposed techniques are assessed via extensive simulations as well as rigorous analytical frameworks. First, we propose an efficient distributed overload control algorithm for M2M with massive access, referred to as M2M-OSA. The proposed algorithm can efficiently allocate the available network resources to a massive number of devices within a relatively small and bounded contention time and with reduced overhead. By resolving collisions, the proposed algorithm is capable of achieving full resource utilization along with reduced average access delay and energy saving. 
For Beta-distributed traffic, we provide an analytical evaluation of the performance of the proposed algorithm in terms of access delay, total service time, energy consumption, and blocking probability. This performance assessment accounts for various scenarios, including slightly and seriously congested cases, in addition to finite and infinite retransmission limits for the devices. Moreover, we discuss the non-ideal situations that could be encountered in real-life deployment of the proposed algorithm, supported by possible solutions. For further energy saving, we introduce a modified version of M2M-OSA with a traffic regulation mechanism. In the second part of the thesis, we adopt a promising alternative to the conventional random access mechanism, namely the fast uplink grant. Fast uplink grant was first proposed by the 3GPP for latency reduction; it allows the base station (BS) to directly schedule the MTC devices (MTDs) without receiving any scheduling requests. In our work, to handle the major challenges associated with fast uplink grant, namely active set prediction and optimal scheduling, both non-orthogonal multiple access (NOMA) and learning techniques are utilized. In particular, we propose a two-stage NOMA-based fast uplink grant scheme that first employs multi-armed bandit (MAB) learning to schedule the fast-grant devices with no prior information about their QoS requirements or channel conditions at the BS. Afterwards, NOMA facilitates grant sharing, where pairing is done in a distributed manner to reduce signaling overhead. In the proposed scheme, NOMA plays a major role in decoupling the two major challenges of fast-grant schemes by permitting pairing with only active MTDs. Consequently, the wastage of resources due to traffic prediction errors can be significantly reduced. We devise an abstraction model for the source traffic predictor needed for the fast grant such that the prediction error can be evaluated. 
Accordingly, the performance of the proposed scheme is analyzed in terms of average resource wastage and outage probability. The simulation results show the effectiveness of the proposed method in saving scarce resources while verifying the accuracy of the analysis. In addition, the ability of the proposed scheme to pick high-quality MTDs under strict latency constraints is demonstrated.
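The multi-armed bandit component of such a fast-grant scheme can be sketched with the standard UCB1 algorithm (a simplified stand-in; the thesis's learning scheme and reward definition may differ): the BS treats each MTD as an arm, grants one device per round without any scheduling request, and observes reward 1 only if that device actually had data to send. The device activity probabilities below are purely illustrative.

```python
import math
import random

def ucb_fast_grant(activity_probs, rounds=5000):
    """UCB1 bandit for fast uplink grant: each round the BS grants
    one device and learns which devices are most often active."""
    n = len(activity_probs)
    counts = [0] * n      # times each device has been granted
    values = [0.0] * n    # empirical activity estimate per device
    hits = 0
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1   # grant each device once to initialise
        else:
            # exploit high estimates, but keep exploring rarely-granted devices
            arm = max(range(n), key=lambda i: values[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1 if random.random() < activity_probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        hits += reward
    return counts, hits / rounds

random.seed(1)
counts, hit_rate = ucb_fast_grant([0.1, 0.2, 0.9, 0.3])
print(counts, round(hit_rate, 2))
```

Over time the grants concentrate on the most frequently active device, which is the sense in which learning reduces the resource wastage caused by granting inactive MTDs.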

    LTE Network Enhancement for Vehicular Safety Communication


    Design and analysis of LTE and wi-fi schemes for communications of massive machine devices

    Existing communication technologies are designed with specific use cases in mind; however, extending these use cases usually throws up interesting challenges. For example, extending the use of existing cellular networks to emerging applications such as Internet of Things (IoT) devices throws up the challenge of handling a massive number of devices. In this thesis, we are motivated to investigate existing schemes used in LTE and Wi-Fi for supporting massive machine devices and improve on observed performance gaps by designing new schemes that outperform the former. This thesis investigates the existing random access protocol in LTE and proposes three schemes to combat the massive device access challenge. The first is a root index reuse and allocation scheme which uses link budget calculations to extract a safe distance for preamble reuse under variable cell size, and also proposes an index allocation algorithm. The second is a dynamic subframe optimization scheme that combats the challenge from an optimization perspective. The third is the use of small cells for random access. Simulation and numerical analysis show performance improvements over existing schemes in terms of throughput, access delay and probability of collision. In some cases, over 20% increase in performance was observed. The proposed schemes provide quicker and more guaranteed opportunities for machine devices to communicate. Also, in Wi-Fi networks, adaptation of the transmission rates to dynamic channel conditions is a major challenge. Two algorithms were proposed to combat this. The first makes use of contextual information to determine the network state and respond appropriately, whilst the second samples candidate transmission modes and uses the effective throughput to make a decision. The proposed algorithms were compared to several existing rate adaptation algorithms by simulations under various system and channel configurations. They show significant performance improvements in terms of throughput, thus confirming their suitability for dynamic channel conditions.
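The link-budget reasoning behind a safe preamble (root index) reuse distance can be sketched with a log-distance path-loss model: the same root index can be reused by cells beyond the distance at which a device's transmission arrives below a chosen interference threshold. All numbers here (23 dBm transmit power, -120 dBm threshold, 1800 MHz carrier, path-loss exponent 3.5) are illustrative assumptions, not the thesis's parameters.

```python
import math

def path_loss_db(d_m, f_mhz=1800.0, n=3.5):
    """Log-distance path loss: free-space loss at a 1 m reference,
    then distance decay with exponent n (3.5: suburban assumption)."""
    fspl_1m = 20 * math.log10(f_mhz) - 27.55   # free-space loss at 1 m (dB)
    return fspl_1m + 10 * n * math.log10(d_m)

def safe_reuse_distance(tx_power_dbm=23.0, threshold_dbm=-120.0,
                        f_mhz=1800.0, n=3.5):
    """Distance beyond which a device's preamble arrives below the
    interference threshold, so the same root index can be reused."""
    max_loss = tx_power_dbm - threshold_dbm    # tolerable path loss (dB)
    fspl_1m = 20 * math.log10(f_mhz) - 27.55
    return 10 ** ((max_loss - fspl_1m) / (10 * n))  # invert the model

print(round(safe_reuse_distance()), "m")
```

Inverting the path-loss model like this turns a power budget directly into a geometric reuse rule, which is the kind of calculation an index allocation algorithm can build on.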

    D4.3 Final Report on Network-Level Solutions

    Research activities in METIS reported in this document focus on proposing solutions to the network-level challenges of future wireless communication networks. Thereby, a large variety of scenarios is considered and a set of technical concepts is proposed to serve the needs envisioned for 2020 and beyond. This document provides the final findings on several network-level aspects and groups of solutions that are considered essential for designing future 5G solutions. Specifically, it elaborates on:
    - Interference management and resource allocation schemes
    - Mobility management and robustness enhancements
    - Context-aware approaches
    - D2D and V2X mechanisms
    - Technology components focused on clustering
    - Dynamic reconfiguration enablers
    These novel network-level technology concepts are evaluated against requirements defined by METIS for future 5G systems. Moreover, functional enablers which can support the solutions mentioned above are proposed. We find that the network-level solutions and technology components developed during the course of METIS complement the lower layer technology components and thereby effectively contribute to meeting 5G requirements and targets. Aydin, O.; Valentin, S.; Ren, Z.; Botsov, M.; Lakshmana, TR.; Sui, Y.; Sun, W.... (2015). D4.3 Final Report on Network-Level Solutions. http://hdl.handle.net/10251/7675
