
    Massive Non-Orthogonal Multiple Access for Cellular IoT: Potentials and Limitations

    The Internet of Things (IoT) promises ubiquitous connectivity of everything everywhere, which represents the biggest technology trend in the years to come. It is expected that by 2020 over 25 billion devices will be connected to cellular networks, far beyond the number of devices in current wireless networks. Machine-to-Machine (M2M) communications aims at providing the communication infrastructure for enabling IoT by facilitating the billions of multi-role devices to communicate with each other and with the underlying data transport infrastructure with little or no human intervention. Providing this infrastructure will require a dramatic shift from the current protocols, which are mostly designed for human-to-human (H2H) applications. This article reviews recent 3GPP solutions for enabling massive cellular IoT and investigates random access strategies for M2M communications, showing that cellular networks must evolve to handle the new ways in which devices will connect and communicate with the system. A massive non-orthogonal multiple access (NOMA) technique is then presented as a promising solution to support a massive number of IoT devices in cellular networks, and its practical challenges and future research directions are identified. Comment: To appear in IEEE Communications Magazine
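    The key mechanism here, power-domain superposition with successive interference cancellation (SIC), can be illustrated with a toy two-device sketch; the power split, channel gains and noise level are assumed values for illustration, not figures from the article.

```python
import math

def noma_sic_rates(p_total, a_near, g_near, g_far, noise=1.0):
    """Achievable spectral efficiencies (bits/s/Hz) for two devices
    superposed on one channel, decoded with SIC at the receiver."""
    p_near = a_near * p_total          # power for the strong (near) device
    p_far = (1 - a_near) * p_total     # power for the weak (far) device
    # The far device is decoded first, treating the near signal as noise.
    sinr_far = (p_far * g_far) / (p_near * g_far + noise)
    # After cancelling the far signal, the near device sees only noise.
    sinr_near = (p_near * g_near) / noise
    return math.log2(1 + sinr_near), math.log2(1 + sinr_far)

# Hypothetical setup: 10 units of transmit power, 20% to the near device.
r_near, r_far = noma_sic_rates(p_total=10.0, a_near=0.2, g_near=1.0, g_far=0.1)
```

    Giving the far device the larger power share is what makes the two signals separable at the receiver; scaling this idea to a massive number of superposed devices is exactly the challenge the article identifies.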

    Probabilistic Rateless Multiple Access for Machine-to-Machine Communication

    Future machine to machine (M2M) communications need to support a massive number of devices communicating with each other with little or no human intervention. Random access techniques were originally proposed to enable M2M multiple access, but suffer from severe congestion and access delay in an M2M system with a large number of devices. In this paper, we propose a novel multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement for each device. This is achieved by allowing M2M devices to transmit at the same time on the same channel in an optimal probabilistic manner based on their individual delay requirements. Simulation results show that the proposed scheme achieves near-optimal rate performance and at the same time guarantees the delay requirements of the devices. We further propose a simple random access strategy and characterize the required overhead. Simulation results show the proposed approach significantly outperforms the existing random access schemes currently used in the Long Term Evolution Advanced (LTE-A) standard in terms of access delay. Comment: Accepted for publication in IEEE Transactions on Wireless Communications
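    The core idea, letting each device's delay budget set how aggressively it accesses the shared channel, can be sketched as follows; the mapping from budget to probability and the flat loss model are assumptions for illustration, not the paper's analog fountain code construction.

```python
import random

def access_probability(delay_budget_slots, target_attempts=2.0):
    """Per-slot transmission probability chosen so that a device makes
    about `target_attempts` attempts within its delay budget."""
    return min(1.0, target_attempts / delay_budget_slots)

def first_success_slot(delay_budget_slots, p_loss=0.3, seed=0):
    """Simulate one device: return the slot of its first successful
    transmission, or None if the deadline passes. Collisions with other
    devices are abstracted into a flat loss probability `p_loss`."""
    rng = random.Random(seed)
    p_tx = access_probability(delay_budget_slots)
    for slot in range(delay_budget_slots):
        if rng.random() < p_tx and rng.random() >= p_loss:
            return slot
    return None
```

    A device with a tight deadline transmits in nearly every slot, while one with a loose deadline backs off, which is the probabilistic trade-off the abstract describes.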

    Protocol for Extreme Low Latency M2M Communication Networks

    As technology evolves, Machine to Machine (M2M) deployments and mission critical services are expected to grow massively, generating new and diverse forms of data traffic and posing unprecedented challenges in requirements such as delay, reliability, energy consumption and scalability. This new paradigm demands a set of stringent requirements that current mobile networks do not support. A new generation of mobile networks is needed to support these innovative services and requirements: the fifth generation of mobile networks (5G). Specifically, achieving ultra-reliable low latency communication for machine to machine networks represents a major challenge that requires a new approach to the design of the Physical (PHY) and Medium Access Control (MAC) layers, to provide these novel services and handle the new heterogeneous environment in 5G. The orthogonality and synchronization requirements of the current LTE Advanced (LTE-A) radio access network are obstacles for this new 5G architecture, since devices in M2M generate bursty and sporadic traffic and therefore should not be obliged to follow the synchronization of the LTE-A PHY layer. A non-orthogonal access scheme is required, one that enables asynchronous access without degrading the spectrum. This dissertation addresses the requirements of URLLC M2M traffic at the MAC layer. It proposes an extension of the M2M H-NDMA protocol for a multi base station scenario and a power control scheme to adapt the protocol to the requirements of URLLC. The performance of the system and power control schemes, and the effect of introducing more base stations, are analyzed in a system level simulator developed in MATLAB, which implements the MAC protocol and applies the power control algorithm.
Results showed that, as the number of base stations increases, delay can be significantly reduced and the protocol supports more devices without compromising the delay or reliability bounds for Ultra-Reliable and Low Latency Communication (URLLC), while also increasing the throughput. The extension of the protocol will enable the study of different power control algorithms for more complex scenarios, as well as access schemes that combine asynchronous and synchronous access.
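    Why extra base stations tighten the reliability bound is visible in a back-of-the-envelope model: with independent links, the probability that at least one reception succeeds grows quickly with the number of base stations. The single-link success probability below is an assumed figure, not a result from the dissertation.

```python
def multi_bs_success(p_single, n_bs):
    """P(at least one of n_bs independent links delivers the packet)."""
    return 1 - (1 - p_single) ** n_bs

# With an assumed 90% single-link success rate, three base stations
# already push delivery towards three-nines reliability.
p3 = multi_bs_success(0.9, 3)
```

    The independence assumption is optimistic (shadowing can correlate links), but it captures the macro-diversity effect the results describe.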

    A COMPREHENSIVE REVIEW OF INTERNET OF THINGS WAVEFORMS FOR A DOD LOW EARTH ORBIT CUBESAT MESH NETWORK

    The Department of Defense (DOD) requires the military to provide command and control during missions in locations where terrestrial communications infrastructure is unreliable or unavailable, which results in a high reliance on satellite communications (SATCOM). This is problematic because these units use and consume ever more digital data in the operational environment. The DOD has several forms of data capable of meeting Internet of Things (IoT) transmission parameters that could be diversified onto an IoT network. This research assesses the potential for an IoT satellite constellation in Low Earth Orbit to provide an alternative, space-based communication platform to military units while offering increased overall SATCOM capacity and resiliency. This research explores alternative IoT waveforms and compatible transceivers in place of LoRaWAN for the NPS CENETIX Orbital-1 CubeSat. The study uses a descriptive comparative research approach to simultaneously assess several variables. Five alternative waveforms—Sigfox, NB-IoT, LTE-M, Wi-SUN, and Ingenu—are evaluated. NB-IoT, LTE-M, and Ingenu meet the threshold to be feasible alternatives to replace the LoRaWAN waveform in the Orbital-1 CubeSat. Six potential IoT transceivers are assessed as replacements. Two transceivers for the NB-IoT and LTE-M waveforms and one transceiver from U-blox for the Ingenu waveform are assessed as compliant. Lieutenant, United States Navy. Approved for public release. Distribution is unlimited.

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with the QoS support along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing RAN congestion problem, and then identify potential advantages, challenges and use cases for the applications of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of low-complexity Q-learning approach in the mMTC scenarios. 
Finally, we discuss some open research challenges and promising future research directions. Comment: 37 pages, 8 figures, 7 tables, submitted for possible publication in IEEE Communications Surveys and Tutorials
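    Low-complexity Q-learning for mMTC random access is usually cast as each device learning one Q-value per RA slot so that devices settle on non-colliding slots. The reward values, learning rate and exploration policy below are assumptions for illustration, not the survey's exact formulation.

```python
import random

class QDevice:
    """One MTC device learning which random-access slot to use."""
    def __init__(self, n_slots, alpha=0.1, rng=None):
        self.q = [0.0] * n_slots
        self.alpha = alpha
        self.rng = rng or random.Random(0)

    def choose_slot(self, epsilon=0.1):
        # Epsilon-greedy: mostly exploit the best slot, sometimes explore.
        if self.rng.random() < epsilon:
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda s: self.q[s])

    def update(self, slot, success):
        reward = 1.0 if success else -1.0
        self.q[slot] += self.alpha * (reward - self.q[slot])

def ra_round(devices, epsilon=0.1):
    """One RA opportunity: a slot succeeds only if exactly one device
    picked it. Returns the number of successful devices this round."""
    choices = [d.choose_slot(epsilon) for d in devices]
    for d, slot in zip(devices, choices):
        d.update(slot, choices.count(slot) == 1)
    return sum(1 for s in choices if choices.count(s) == 1)
```

    Each device needs only a table of the size of the RA slot pool, which is what makes this approach attractive for resource-constrained MTC devices.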

    Drone Base Station Trajectory Management for Optimal Scheduling in LTE-Based Sparse Delay-Sensitive M2M Networks

    Providing connectivity in areas out of reach of the cellular infrastructure is a very active area of research. This connectivity is particularly needed when machine type communication devices (MTCDs) are deployed for critical purposes such as homeland security. In such applications, MTCDs are deployed in areas that are hard to reach using regular communications infrastructure, while the collected data is time-critical. Drone-supported communications constitute a new trend in complementing the reach of the terrestrial communication infrastructure. In this study, drones are used as base stations to provide real-time communication services to gather critical data from a group of MTCDs that are sparsely deployed in a marine environment. In the first phase of this research, we studied different communication technologies, such as LTE, WiFi, LPWAN and Free-Space Optical communication (FSOC), incorporated with drone communications, to identify the best candidate for addressing this need. We determined cellular technology, and particularly LTE, to be the most suitable candidate to support such applications. In this case, an LTE base station is mounted on the drone, which communicates with the different MTCDs to transmit their data to the network backhaul. We then formulate the problem model mathematically and devise a trajectory planning and scheduling algorithm that decides the drone path and the resulting schedule. Based on this formulation, we compare an Ant Colony Optimization (ACO)-based technique that optimizes the drone movement among the sparsely deployed MTCDs with a Genetic Algorithm (GA)-based solution that achieves the same purpose. The optimization minimizes the energy cost of the drone movement while minimizing missed data transmission deadlines. We present the results of several simulation experiments that validate the different performance aspects of the technique.
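    Stripped of the scheduling constraints, the trajectory problem above is a travelling-salesman-style search over MTCD positions. The sketch below brute-forces a toy four-node deployment; the positions are hypothetical, and metaheuristics such as ACO or GA replace the exhaustive search as the deployment grows.

```python
import itertools
import math

def tour_length(points, order):
    """Total length of a closed tour that starts and ends at points[0]."""
    path = [0] + list(order) + [0]
    return sum(math.dist(points[a], points[b]) for a, b in zip(path, path[1:]))

def best_tour(points):
    """Exhaustive search over visit orders; feasible only for a handful
    of MTCDs, which is exactly where ACO- and GA-based methods come in."""
    rest = range(1, len(points))
    return min(itertools.permutations(rest), key=lambda o: tour_length(points, o))

mtcds = [(0, 0), (4, 0), (4, 3), (0, 3)]  # hypothetical MTCD positions (km)
order = best_tour(mtcds)
```

    Adding per-MTCD transmission deadlines turns this into the constrained problem the paper formulates, where tour cost alone no longer decides feasibility.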

    LTE network slicing and resource trading schemes for machine-to-machine communications

    The Internet of Things (IoT) is envisioned as the future of human-free communications. IoT relies on Machine-to-Machine (M2M) communications rather than conventional Human-to-Human (H2H) communications. It is expected that billions of Machine Type Communication Devices (MTCDs) will be connected to the Internet in the near future. Consequently, mobile data traffic is poised to increase dramatically. Long Term Evolution (LTE) and its subsequent technology LTE-Advanced (LTE-A) are the candidate carriers of M2M communications for IoT purposes. Despite the significant increase of traffic due to IoT, Mobile Network Operator (MNO) revenues are not increasing at the same pace. Hence, many MNOs have resorted to sharing their radio resources and parts of their infrastructure, in what is known as Network Virtualization (NV). In this thesis, we focus on slicing, in which an operator known as a Mobile Virtual Network Operator (MVNO) does not own a spectrum license or mobile infrastructure and relies on a larger MNO to serve its users. The large licensed MNO divides its spectrum pool into slices, and each MVNO reserves one or more slices. There are two forms of slice scheduling: resource-based, in which each slice is assigned a portion of the radio resources, and data-rate-based, in which each slice is assigned a certain bandwidth. In the first part of this thesis we present different approaches for adapting resource-based and data-rate-based NV to Machine Type Communication (MTC), such that resources are allocated to each slice depending on the delay budget of the MTCDs deployed in the slice and their payloads. The adapted NV schemes are then simulated and compared to Static Reservation (SR) of radio resources, and all show improved performance over SR from a deadline-missing perspective.
In the second part of the thesis, we introduce a novel resource trading scheme that allows sharing operators to trade their radio resources based on their clients' time-varying needs. A Genetic Algorithm (GA) is used to optimize the resource trading among the virtual operators. The proposed trading scheme is simulated and compared to the adapted schemes from the first part of the thesis, and is shown to achieve significantly better performance.
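    The two slice-scheduling forms described above can be contrasted with a minimal allocator sketch; the PRB counts, rate targets and per-PRB capacity are assumed numbers, not values from the thesis.

```python
def resource_based(total_prbs, shares):
    """Resource-based slicing: each slice gets a fixed fraction of the
    physical resource block (PRB) pool."""
    return [int(total_prbs * s) for s in shares]

def rate_based(total_prbs, rate_targets_kbps, kbps_per_prb):
    """Data-rate-based slicing: each slice gets enough PRBs (while they
    last) to meet its bandwidth target."""
    alloc, remaining = [], total_prbs
    for target in rate_targets_kbps:
        need = -(-target // kbps_per_prb)  # ceiling division
        grant = min(need, remaining)
        alloc.append(grant)
        remaining -= grant
    return alloc
```

    Under resource-based slicing an MVNO's share is fixed regardless of load, whereas the data-rate-based form tracks demand, which is why adapting the allocation to MTCD delay budgets pays off.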

    Design and Implementation of a Narrow-Band Intersatellite Network with Limited Onboard Resources for IoT

    Satellite networks are inevitable for the ubiquitous connectivity of M2M (machine to machine) and IoT (internet of things) devices. Advances in the miniaturization of satellite technology make networks in LEO (Low Earth Orbit) predestined to serve as a backhaul for narrow-band M2M communication. To reduce latency and increase network responsivity, intersatellite link capability among nodes is a key component in satellite design. The miniaturization of nodes, enabling the economical deployment of large networks, is also crucial. Thus, this article addresses these key issues and presents a design methodology and implementation of an adaptive network architecture under highly limited resources, as is the case in a nanosatellite (≈10 kg) network. Potentially applicable multiple access techniques are evaluated. The results show that a time division duplex scheme with session-oriented P2P (point to point) protocols in the data link layer is more suitable for limited resources. Furthermore, an applicable layer model is defined and a protocol implementation is outlined. To demonstrate the technical feasibility of a nanosatellite-based communication network, the S-NET (S band network with nanosatellites) mission has been developed, consisting of four nanosatellites, to demonstrate multi-point crosslinks at 100 kbps data rates over distances up to 400 km with optimized communication protocols, pushing the technological boundaries of nanosatellites. The flight results of S-NET prove the feasibility of these nanosatellites as a space-based M2M backhaul. BMWi, 50YB1225, S-Band Netzwerk für kooperierende Satelliten; BMWi, 50YB1009, SLink - S-Band Transceiver zur Intersatelliten-Kommunikation von Nanosatelliten; DFG, 414044773, Open Access Publizieren 2019 - 2020 / Technische Universität Berlin

    Time- and frequency-asynchronous aloha for ultra narrowband communications

    A low-power wide-area network (LPWAN) is a family of wireless access technologies that consume low power and cover wide areas. They are designed to operate in both licensed and unlicensed frequency bands. Among the different LPWAN technologies, Long Range (LoRa), Sigfox, and Narrowband Internet of Things (NB-IoT) are leading large-scale IoT deployment. Sigfox and LoRa have advantages in terms of battery lifetime, production cost and capacity, whereas lower latency and better quality of service are offered by NB-IoT, which operates in licensed cellular frequency bands. The two main approaches for reaching wide coverage with low transmission power are (i) spread spectrum, used by LoRa, and (ii) ultra-narrowband (UNB), used by Sigfox. This thesis work focuses mainly on random-access schemes for UNB-based IoT networks. Due to issues related to receiver synchronization, a two-dimensional time-frequency random access protocol is a particularly interesting choice for UNB transmission schemes. However, UNB also poses some major constraints regarding connectivity, throughput, noise cancellation and so on. This thesis work investigates UNB-based LPWAN uplink scenarios. The throughput performance of Time-Frequency Asynchronous ALOHA (TFAA) is evaluated using MATLAB simulations. The main parameters include the interference threshold, which depends on the robustness of the modulation and coding scheme, the propagation exponent, the distance range of the IoT devices, and the system load. Normalized throughput and collision probability are evaluated through simulations for different combinations of these parameters. We demonstrate that using repetitions of the data packets results in a higher normalized throughput. The repetition scheme is designed in such a way that another user's packets may collide with only one of the target packet's repetitions. The power levels and distances of all of a user's repetitions are assumed to be the same. By using repetitions, reducing the distance range, and increasing the interference threshold, the normalized throughput can be maximized.
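    The TFAA throughput experiment can be approximated with a small Monte-Carlo sketch; the uniform arrival model and the all-or-nothing collision rule below are simplifications for illustration, not the thesis's MATLAB simulator with its interference threshold and propagation model.

```python
import random

def tfaa_throughput(n_users, n_time, n_freq, rng=None):
    """Each user sends one packet at a random (time, frequency) position;
    a packet survives only if no other packet lands on the same cell.
    Returns throughput normalized by the number of time-frequency cells."""
    rng = rng or random.Random(1)
    cells = [(rng.randrange(n_time), rng.randrange(n_freq))
             for _ in range(n_users)]
    successes = sum(1 for c in cells if cells.count(c) == 1)
    return successes / (n_time * n_freq)
```

    Averaging this over many random draws reproduces the classic ALOHA-style throughput curve: throughput first rises with load, then collapses as collisions dominate, which is the regime the repetition scheme is designed to push back.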