87 research outputs found

    On Device Grouping for Efficient Multicast Communications in Narrowband-IoT

    Towards efficient support for massive Internet of Things over cellular networks

    The usage of Internet of Things (IoT) devices over cellular networks has seen tremendous growth in recent years, and that growth is only expected to increase in the near future. While existing 4G and 5G cellular networks offer several desirable features for this type of application, their design has historically focused on accommodating traditional mobile devices (e.g. smartphones). As IoT devices have very different characteristics and use cases, they pose a range of problems for current networks, which often struggle to accommodate them at scale. Although newer cellular network technologies, such as Narrowband-IoT (NB-IoT), were designed with IoT characteristics in mind, they were extensively based on 4G and 5G networks to preserve interoperability and decrease deployment cost. As such, several inefficiencies of 4G/5G were also carried over to the newer technologies. This thesis focuses on identifying the core issues that hinder the large-scale deployment of IoT over cellular networks, and proposes novel protocols to largely alleviate them. We find that the most significant challenges arise in three distinct areas: connection establishment, network resource utilisation and device energy efficiency. Specifically, we make the following contributions. First, we focus on the connection establishment process and argue that the current procedures, when used by IoT devices, result in an increased number of collisions, network outages and a signalling overhead that is disproportionate to the size of the data transmitted and the connection duration of IoT devices. We therefore propose two mechanisms to alleviate these inefficiencies. Our first mechanism, named ASPIS, addresses both the number of collisions and the signalling overhead simultaneously, and provides enhancements that increase the number of successful IoT connections without disrupting existing background traffic. Our second mechanism focuses specifically on collisions during connection establishment, and uses a novel Reinforcement Learning approach to decrease their number and allow a larger number of IoT devices to access the network with fewer attempts. Second, we propose a new multicasting mechanism that reduces network resource utilisation in NB-IoT networks by delivering common content (e.g. firmware updates) to multiple similar devices simultaneously. Notably, our mechanism is not only more efficient during multicast data transmission, but also frees up resources that would otherwise be perpetually reserved for multicast signalling under the existing scheme. Finally, we focus on energy efficiency and propose novel protocols designed around the unique usage characteristics of NB-IoT devices, in order to reduce device power consumption. To this end, we perform a detailed energy consumption analysis, which we use as a basis to develop an energy consumption model for realistic energy consumption assessment. We then take the insights from our analysis, propose optimisations that significantly reduce the energy consumption of IoT devices, and assess their performance.
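
    As a rough illustration of the kind of state-based energy consumption model the thesis describes, the sketch below sums per-state power-time products over one NB-IoT reporting cycle; every state, power draw and duration here is an illustrative assumption, not a value measured in the thesis.

```python
# A minimal state-based energy model sketch: energy = sum of P * t per state.
# All states, power draws and durations are illustrative assumptions.

# (state, power draw in watts, duration in seconds) for one reporting cycle
CYCLE = [
    ("sync",           0.080,    1.0),     # NPSS/NSSS acquisition, system info
    ("random_access",  0.200,    0.5),     # RA preamble and connection setup
    ("tx",             0.500,    0.3),     # NPUSCH uplink report
    ("rx",             0.090,    0.2),     # NPDCCH monitoring, acknowledgement
    ("idle",           0.003,   60.0),     # connected/idle with eDRX
    ("psm",            0.000015, 3540.0),  # Power Saving Mode until next report
]

def cycle_energy_joules(cycle):
    """Energy of one reporting cycle: sum of power * time over all states."""
    return sum(p * t for _, p, t in cycle)

def battery_lifetime_years(cycle, battery_joules=5.0 * 3600):
    """Lifetime on a 5 Wh battery, a common NB-IoT design assumption."""
    period = sum(t for _, _, t in cycle)            # seconds per cycle
    n_cycles = battery_joules / cycle_energy_joules(cycle)
    return n_cycles * period / (365 * 24 * 3600)

print(f"{cycle_energy_joules(CYCLE):.3f} J/cycle, "
      f"{battery_lifetime_years(CYCLE):.1f} years")
```

    Under such a model, the transmission burst and the long idle tail typically dominate the budget, which is where protocol optimisations of the kind proposed in the thesis would act.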

    Energy efficiency in short and wide-area IoT technologies—A survey

    In recent years, the Internet of Things (IoT) has emerged as a key application context in the design and evolution of technologies in the transition toward a 5G ecosystem. More and more IoT technologies have entered the market and represent important enablers in the deployment of networks of interconnected devices. As network and spatial device densities grow, energy efficiency and consumption are becoming important aspects in analyzing the performance and suitability of different technologies. In this framework, this survey presents an extensive review of IoT technologies, including both Low-Power Short-Area Networks (LPSANs) and Low-Power Wide-Area Networks (LPWANs), from the perspective of energy efficiency and power consumption. Existing consumption models and energy efficiency mechanisms are categorized, analyzed and discussed, in order to highlight the main trends proposed in the literature and standards toward achieving energy-efficient IoT networks. Current limitations and open challenges are also discussed, aiming to highlight possible new research directions.

    Software Defined Radio for NB-IoT

    The next generation of mobile radio systems is expected to provide wireless connectivity for a wide range of new applications and services involving not only people but also machines and objects. Within a few years, billions of low-cost and low-complexity devices and sensors will be connected to the Internet, forming a converged ecosystem called the Internet of Things (IoT). As a result, in 2016 3GPP standardized NB-IoT, a new narrowband radio technology developed for the IoT market. Massive connectivity, reduced UE complexity, coverage extension and deployment flexibility are the targets for this new radio interface, which also ensures harmonious coexistence with current GSM, GPRS and LTE systems. In parallel, the rise of open-source software combined with Software Defined Radio (SDR) solutions has completely changed radio systems engineering in recent years. This thesis focuses on developing the NB-IoT protocol stack on EURECOM's open-source software platform OpenAirInterface (OAI). The first part of this work implements NB-IoT's Radio Resource Control (RRC) functionalities on OAI. After an introduction to the platform architecture, a new RRC layer code structure and related interfaces are defined, along with a new approach for Signalling Radio Bearers management. A deep analysis of System Information scheduling is conducted, and a subframe-based transmission scheme is then proposed. The last part of this thesis addresses the implementation of a multi-vendor platform interface based on the Small Cell Forum's Functional Application Platform Interface (FAPI) standard. A configurable and dynamically loadable Interface Module (IF-Module) is designed between OAI's MAC and PHY layers. Primitives and related code structures are presented, as well as the corresponding Data and Configuration procedures. Finally, the convergence of NB-IoT and FAPI requirements leads to a redesign of PHY layer mechanisms, for which a downlink transmission scheme is proposed.
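
    To make the FAPI-style MAC/PHY split discussed above concrete, the following sketch shows a message-passing interface module in the spirit of the IF-Module; the primitive names echo the general LTE FAPI convention, but all structures here are simplified, hypothetical placeholders rather than the OAI or Small Cell Forum definitions.

```python
# A simplified, hypothetical sketch of a loadable MAC<->PHY interface module.
from dataclasses import dataclass, field

@dataclass
class FapiMessage:
    name: str                  # e.g. "CONFIG.request", "DL_CONFIG.request"
    sfn_sf: int = 0            # system frame number / subframe index
    body: dict = field(default_factory=dict)

class IfModule:
    """Dispatches primitives between MAC and PHY via injected callbacks,
    so either side can be swapped out (multi-vendor, dynamically loadable)."""
    def __init__(self, to_phy, to_mac):
        self.to_phy = to_phy   # callback into the PHY
        self.to_mac = to_mac   # callback into the MAC

    def mac_send(self, msg: FapiMessage):
        # configuration and per-subframe scheduling requests flow MAC -> PHY
        self.to_phy(msg)

    def subframe_indication(self, sfn_sf: int):
        # the PHY ticks the MAC every subframe; the MAC replies with
        # scheduling requests for that subframe
        self.to_mac(FapiMessage("SUBFRAME.indication", sfn_sf))

# Example: the MAC schedules a downlink transmission for SFN 4, subframe 0
if_mod = IfModule(to_phy=print, to_mac=print)
if_mod.mac_send(FapiMessage("DL_CONFIG.request", sfn_sf=(4 << 4) | 0,
                            body={"pdu": "NPDSCH"}))
```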

    Low-Complexity Multicarrier Waveform Processing Schemes for Future Wireless Communications

    Wireless communication systems deliver an enormous variety of services and applications. Nowadays, wireless communications play a key role in many fields, such as industry, social life, education, and home automation. The growing demand for wireless services and applications has motivated the development of the next-generation cellular radio access technology called fifth-generation new radio (5G-NR). Future networks are required to raise the delivered user data rates to gigabits per second, reduce the communication latency below 1 ms, and enable communications for a massive number of simple devices. These main features come with new demands on wireless communication systems, such as enhancing the efficiency of radio spectrum use in the below-6 GHz frequency bands while supporting various services with quite different requirements for the key waveform parameters. Current wireless systems lack the capabilities to handle those requirements. For example, long-term evolution (LTE) employs the cyclic-prefix orthogonal frequency-division multiplexing (CP-OFDM) waveform, which has critical drawbacks in the 5G-NR context. The basic drawback of the CP-OFDM waveform is its lack of spectral localization. Therefore, spectrally enhanced variants of CP-OFDM or other multicarrier waveforms with well-localized spectra should be considered. This thesis investigates spectrally enhanced CP-OFDM (E-OFDM) schemes to suppress the out-of-band (OOB) emissions normally produced by CP-OFDM. Commonly, the weighted overlap-and-add (WOLA) scheme applies a smooth time-domain window to the CP-OFDM waveform, providing spectrally enhanced subcarriers and reducing the OOB emissions with very low additional computational complexity. Nevertheless, the suppression performance of WOLA-OFDM is not sufficient near the active subband. Another technique is based on filtering the CP-OFDM waveform, referred to as F-OFDM. F-OFDM is able to provide a well-localized spectrum, however with a significant increase in computational complexity in the basic scheme with time-domain filters. Filter-bank multicarrier (FBMC) waveforms are also included in this study. FBMC has been widely studied as a potential post-OFDM scheme with nearly ideal subcarrier spectrum localization. However, this scheme has quite high computational complexity while being limited to uniformly distributed subbands. In any case, filter-bank-based waveform processing is one of the main topics of this work. Instead of traditional polyphase network (PPN) based uniform filter banks, the focus is on fast-convolution filter banks (FC-FBs), which utilize fast Fourier transform (FFT) domain processing to efficiently realize filter banks with high flexibility in terms of subcarrier bandwidths and center frequencies. FC-FBs are applied to both FBMC and F-OFDM waveform generation and processing, with greatly increased flexibility and significantly reduced computational complexity. This study proposes novel structures for FC-FB processing based on decomposition of the FC-FB structure consisting of forward and inverse discrete Fourier transforms (DFT and IDFT). The decomposition of multirate FC provides a means of reducing the computational complexity in some important specific scenarios. A generic FC decomposition model is proposed and analyzed. This scheme is mathematically equivalent to the corresponding direct FC implementation, with exactly the same performance. The benefits of the optimized decomposition structure appear mainly in communication scenarios with a relatively narrow active transmission band, resulting in significantly reduced computational complexity compared to the direct FC structure. Such narrowband scenarios arise in the recent 3GPP specification of the cellular low-power wide-area (LPWA) access technology called Narrowband Internet-of-Things (NB-IoT). NB-IoT aims at introducing the IoT to LTE and GSM frequency bands in coexistence with those technologies. NB-IoT uses CP-OFDM based waveforms with parameters compatible with LTE. However, additional means are also needed for NB-IoT transmitters to improve spectrum localization. For NB-IoT user devices, it is important to consider ultra-low-complexity solutions, and a look-up table (LUT) based approach is proposed to implement NB-IoT uplink transmitters with filtered waveforms. This approach provides completely multiplication-free digital baseband implementations, with addition rates similar to or smaller than those of basic NB-IoT waveform generation without the elements needed for spectrum enhancement. The basic idea is to store full or partial waveforms for all possible data symbol combinations; the transmitted waveform is then composed through summation of the needed stored partial waveforms and trivial phase rotations. The LUT-based scheme is developed in different variants tackling practical implementation issues of NB-IoT device transmitters, also considering the effects of a nonlinear power amplifier. Moreover, a completely multiplication- and addition-free LUT variant is proposed and found to be feasible for very narrowband transmission, with up to 3 subcarriers. The finite-wordlength performance of the LUT variants is evaluated through simulations.
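
    The LUT idea above lends itself to a compact illustration: precompute filtered per-symbol waveform segments offline, then overlap-add the stored segments at run time so that no run-time multiplications are needed. The sketch below assumes a generic QPSK alphabet and a placeholder window as the pulse shape, not the exact NB-IoT uplink numerology or spectrum-shaping filter.

```python
# Sketch of LUT-based waveform generation: the table stores one filtered
# segment per symbol value; transmission is pure overlap-add (additions only).
import numpy as np

OSR = 8      # samples per symbol (oversampling ratio), an example value
SPAN = 4     # pulse length in symbols, an example value
pulse = np.hanning(SPAN * OSR)   # placeholder pulse; a real design would use
                                 # the chosen spectrum-shaping filter

# Offline: one stored segment per QPSK symbol value (the look-up table)
QPSK = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
LUT = {k: s * pulse for k, s in enumerate(QPSK)}

def lut_transmit(symbol_indices):
    """Overlap-add the stored segments; run time needs only additions."""
    n = len(symbol_indices)
    out = np.zeros((n - 1) * OSR + SPAN * OSR, dtype=complex)
    for i, k in enumerate(symbol_indices):
        out[i * OSR : i * OSR + SPAN * OSR] += LUT[k]
    return out

waveform = lut_transmit([0, 3, 1, 2, 2, 0])
```

    Storing partial segments rather than full waveforms keeps the table small, and the trivial phase rotations mentioned in the abstract (multiples of 90 degrees, i.e. sign and real/imaginary swaps) can shrink it further without introducing multiplications.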

    Study of the 5G NB-IoT protocol with low density LEO Constellations of nanosatellites

    The NB-IoT protocol, specified by 3GPP, is one of the most popular and widely used technologies for low-power wide-area (LPWA) networks. To further strengthen its potential, 3GPP is currently developing an extension of the NB-IoT protocol for non-terrestrial networks (NTN), so that terrestrial coverage can be extended through satellite-based network deployments to reach global coverage. The first part of this Master's Thesis focuses on the development of a MATLAB simulation software for the characterization of an NB-IoT NTN deployment scenario in terms of the satellite coverage footprint (e.g. SNR distributions) and the dynamics of the satellite link during a satellite pass (e.g. time evolution of the SNR and Doppler). The simulator inputs include the satellite altitude, the spherical geometry of the Earth, parameters associated with the satellite, such as orbit and speed, as well as the transmission power, frequency and path loss. The simulator allows selecting the different inputs, such as NTN parameters, link budget parameters or antenna type. These inputs, which are completely configurable, are used to obtain a set of outputs that characterize the NB-IoT NTN scenario: the satellite coverage footprint, the antenna pointing and the satellite pass. For each characterization, the different parameters and results obtained, such as SNR heatmaps, Doppler frequency or propagation delay, are studied in detail. The second part of the study evaluates the performance of the NB-IoT NTN protocol over a satellite link. For this purpose, different numerical simulations have been performed to estimate the minimum SNR and achievable spectral efficiency of the protocol for different channel models (e.g. AWGN and TDL channels, frequency offsets) and different protocol configurations (e.g. number of repetitions, modulation and coding schemes), as well as considering different channel estimators. The analysis has been conducted for both downlink and uplink data channels (i.e. NPDSCH and NPUSCH). Block Error Rate (BLER) simulations of the NPDSCH and NPUSCH are performed using the MATLAB LTE Toolbox, modified and adapted to non-terrestrial communications with LEO satellites.
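
    As a hedged illustration of the satellite-pass characterization described above, the sketch below computes the slant range, free-space path loss and Doppler shift for a circular LEO orbit passing directly over the ground terminal; the 600 km altitude and 2 GHz carrier are arbitrary example values, not the thesis's configuration.

```python
# Slant range, free-space path loss and Doppler over a zenith LEO pass.
import numpy as np

RE, MU, C = 6371e3, 3.986004418e14, 2.99792458e8   # Earth radius, GM, c
h, fc = 600e3, 2e9                   # example altitude [m] and carrier [Hz]
r = RE + h
v = np.sqrt(MU / r)                  # orbital speed for a circular orbit
w = v / r                            # angular rate [rad/s]

t = np.linspace(-300, 300, 1201)     # +/- 5 minutes around zenith
theta = w * t                        # Earth-central angle satellite-station
d = np.sqrt(RE**2 + r**2 - 2 * RE * r * np.cos(theta))  # law of cosines
fspl_db = 20 * np.log10(4 * np.pi * d * fc / C)         # free-space path loss
doppler_hz = -np.gradient(d, t) * fc / C                # range rate -> Doppler

print(f"max Doppler {doppler_hz.max() / 1e3:.1f} kHz, "
      f"FSPL {fspl_db.min():.1f} to {fspl_db.max():.1f} dB")
```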

    Long Range Low Power Wireless Communication Technologies for the IoT

    The Internet of Things addresses a huge set of possible application domains, requiring both short- and long-range communication technologies. Where long distances are involved, a number of proprietary and standards-based solutions for Low Power Wide Area Networks are already available. Among them, LoRaWAN and NB-IoT are candidate technologies supported by many network operators. LoRaWAN is one of the first technologies defined to operate in unlicensed bands. Its simple access protocol is designed to avoid complexity and cost while maximising the transmission range. The proprietary modulation used is very robust against the interferers present in the shared bands. NB-IoT is a new radio access technology standardised by 3GPP, targeting a large set of use cases for massive machine-type communications. NB-IoT has been enhanced in terms of coverage and power-saving capabilities while at the same time reducing complexity. In this thesis, first, many typical applications that may benefit from these technologies are presented, with a focus on the performance metrics and the definition of the scenario and traffic pattern. Secondly, the LoRaWAN technology is assessed both experimentally and through simulations, to characterise it from the link-level and system-level viewpoints, with the target of estimating the capacity of a LoRaWAN gateway and of a multi-gateway network serving a large area. Then, this thesis provides an overview of NB-IoT, together with a mathematical model of the network able to predict the maximum performance in a given scenario under a specific configuration of some design parameters. This model is used to study how these parameters affect the overall performance and how the optimal configuration may be chosen according to arbitrary criteria. Finally, some projects and practical activities are presented to prove the need for these standards and to share the know-how developed during these studies.
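
    For a flavour of the analytical capacity modelling mentioned above, the sketch below applies the classic pure-ALOHA abstraction often used to estimate the capacity of a LoRaWAN gateway; the airtimes and traffic figures are illustrative assumptions, not the parameterization of the thesis's model.

```python
# Pure-ALOHA success probability for a shared LoRaWAN-like uplink channel.
import math

def aloha_success_prob(n_devices, pkts_per_hour, airtime_s):
    """P(success) = exp(-2G), with G the offered load in packets per airtime."""
    rate = n_devices * pkts_per_hour / 3600.0   # aggregate packets per second
    G = rate * airtime_s                        # normalized offered load
    return math.exp(-2 * G)

# Example: 1000 devices at 1 packet/hour, short vs long spreading factors
for sf, airtime in [("SF7", 0.06), ("SF12", 1.5)]:
    p = aloha_success_prob(1000, 1, airtime)
    print(f"{sf}: airtime {airtime:.2f} s -> success probability {p:.3f}")
```

    The exponential sensitivity to airtime in this abstraction shows why slow, long-range spreading factors tend to dominate the capacity budget of a gateway.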

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms of the emerging cellular IoT standards, namely LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of a low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
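
    As an illustration of the low-complexity Q-learning approach discussed above, the sketch below implements the stateless (single-state) variant often studied for random access, where each device keeps one Q-value per RA slot, is rewarded on a collision-free transmission and penalized on a collision; the device count, slot count, rates and reward values are illustrative assumptions rather than a scheme from the paper.

```python
# Stateless Q-learning for random-access slot selection (illustrative).
import random

N_DEVICES, N_SLOTS, EPS, ALPHA = 40, 50, 0.1, 0.1
Q = [[0.0] * N_SLOTS for _ in range(N_DEVICES)]   # one Q-value per device/slot

def choose_slot(q):
    # epsilon-greedy: mostly exploit the best-known slot, sometimes explore
    if random.random() < EPS:
        return random.randrange(N_SLOTS)
    return max(range(N_SLOTS), key=q.__getitem__)

for _ in range(500):                                    # RA frames
    picks = [choose_slot(Q[d]) for d in range(N_DEVICES)]
    for d, s in enumerate(picks):
        reward = 1.0 if picks.count(s) == 1 else -1.0   # collision check
        Q[d][s] += ALPHA * (reward - Q[d][s])           # single-state update

collisions = sum(picks.count(s) > 1 for s in set(picks))
print(f"slots still colliding in the final frame: {collisions}")
```

    With enough slots per device, the devices gradually learn near-orthogonal slot choices, which is the congestion-reduction effect such schemes target.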