Modeling, Analysis, and Optimization of Grant-Free NOMA in Massive MTC via Stochastic Geometry
Massive machine-type communications (mMTC) is a crucial scenario to support
booming Internet of Things (IoT) applications. In mMTC, although a large
number of devices are registered to an access point (AP), very few of them are
active with uplink short-packet transmission at the same time, which requires
novel designs of protocols and receivers to enable efficient data transmission
and accurate multi-user detection (MUD). To address this problem, the grant-free
non-orthogonal multiple access (GF-NOMA) protocol is proposed. In GF-NOMA,
active devices can directly transmit their preambles and data symbols
altogether within one time frame, without a grant from the AP. Compressive
sensing (CS)-based receivers are adopted for non-orthogonal preambles
(NOP)-based MUD, and successive interference cancellation is exploited to
decode the superimposed data signals. In this paper, we model, analyze, and
optimize the CS-based GF-NOMA mMTC system via stochastic geometry (SG) from the
perspective of network deployment. Based on the SG network model, we first analyze
the success probability as well as the channel estimation error of the CS-based
MUD in the preamble phase and then analyze the average aggregate data rate in
the data phase. As IoT applications demand low energy consumption, low
infrastructure cost, and flexible deployment, we optimize the energy efficiency
and AP coverage efficiency of GF-NOMA via numerical methods. The validity of
our analysis is verified via Monte Carlo simulations. Simulation results also
show that CS-based GF-NOMA with NOP yields better MUD and data rate
performance than contention-based GF-NOMA with orthogonal preambles and
CS-based grant-free orthogonal multiple access.
Comment: This paper is submitted to the IEEE Internet of Things Journal.
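The decoding of superimposed data signals described in this abstract can be illustrated in its simplest form: two active devices whose power-domain NOMA signals are separated by successive interference cancellation (SIC). The sketch below assumes BPSK symbols, fixed distinct receive powers, and mild AWGN; these are illustrative choices, not the paper's actual system model or channel assumptions.

```python
# Minimal two-user power-domain NOMA sketch with SIC at the receiver
# (illustrative assumptions: BPSK, fixed powers, mild AWGN).
import numpy as np

rng = np.random.default_rng(0)

def sic_decode(y, p_strong):
    """Decode two superimposed BPSK streams: detect the stronger user
    first (treating the weaker one as noise), reconstruct and subtract
    its signal, then detect the weaker user from the residual."""
    bits_strong = (y > 0).astype(int)
    s_strong = np.sqrt(p_strong) * (2 * bits_strong - 1)
    residual = y - s_strong                      # interference cancellation
    bits_weak = (residual > 0).astype(int)
    return bits_strong, bits_weak

# Two active devices superimpose their symbols in the same resource block.
p1, p2 = 4.0, 1.0                                # distinct receive powers
b1 = rng.integers(0, 2, 1000)
b2 = rng.integers(0, 2, 1000)
y = (np.sqrt(p1) * (2 * b1 - 1)
     + np.sqrt(p2) * (2 * b2 - 1)
     + 0.05 * rng.standard_normal(1000))         # mild AWGN

d1, d2 = sic_decode(y, p1)
print((d1 == b1).mean(), (d2 == b2).mean())      # both recovery rates near 1.0
```

With a sufficient power gap between the two users, both streams are recovered; as the powers approach each other, the first detection stage starts to fail, which is why power-domain NOMA relies on distinct receive power levels.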
Time and Energy Constrained Large-Scale IoT Networks: The Feedback Dilemma
Closed-loop rate adaptation and error control depend on the availability of
feedback, which is necessary to maintain efficient and reliable wireless links.
In the 6G era, many Internet of Things (IoT) devices may not be able to support
feedback transmissions due to stringent energy constraints. This calls for new
transmission techniques and design paradigms to maintain reliability in
feedback-free IoT networks. In this context, this paper proposes a novel
open-loop rate adaptation (OLRA) scheme for reliable feedback-free IoT
networks. In particular, large packets are fragmented to operate at a reliable
transmission rate. Furthermore, transmission of each fragment is repeated
several times to improve the probability of successful delivery. Using tools
from stochastic geometry and queueing theory, we develop a novel spatiotemporal
framework to determine the number of fragments and repetitions needed to
optimize the network performance in terms of transmission reliability and
latency. To this end, the proposed OLRA is benchmarked against conventional
closed-loop rate adaptation (CLRA) to highlight the impact of feedback in
large-scale IoT networks. The obtained results concretely quantify the energy
savings of the proposed feedback-free OLRA scheme, which come at the cost of
reduced transmission reliability and increased latency.
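The fragmentation-plus-repetition trade-off described in this abstract can be sketched with a simple closed form: if each transmission succeeds independently with probability p, a fragment repeated R times is delivered when at least one copy succeeds, and a K-fragment packet is delivered only when all fragments are. The independence assumption and the value of p below are illustrative inputs, not the paper's stochastic-geometry analysis.

```python
# Hedged sketch of the open-loop (feedback-free) reliability/latency
# trade-off: K fragments, each blindly repeated R times.
# Assumes i.i.d. per-transmission success probability p (illustrative).

def packet_success(p, K, R):
    frag_ok = 1 - (1 - p) ** R      # at least one of R copies gets through
    return frag_ok ** K             # every fragment must be delivered

def latency_slots(K, R):
    return K * R                    # open loop: all repetitions are always sent

p = 0.7
for R in (1, 2, 3, 4):
    print(R, round(packet_success(p, K=4, R=R), 4), latency_slots(4, R))
```

Reliability improves quickly with R while latency (and energy) grow linearly, which is exactly the dilemma the abstract optimizes over: the absence of feedback forces every repetition to be sent whether or not it was needed.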
URLLC for 5G and Beyond: Requirements, Enabling Incumbent Technologies and Network Intelligence
The tactile internet (TI) is believed to be the prospective advancement of the internet of things (IoT), comprising human-to-machine and machine-to-machine communication. TI focuses on enabling real-time interactive techniques with a portfolio of engineering, social, and commercial use cases. For this purpose, the prospective 5th generation (5G) technology focuses on achieving ultra-reliable low latency communication (URLLC) services. TI applications require an extraordinary degree of reliability and low latency. The 3rd Generation Partnership Project (3GPP) defines that URLLC is expected to provide 99.999% reliability for a single transmission of a 32-byte packet with a latency of less than one millisecond. 3GPP proposes to include an adjustable orthogonal frequency division multiplexing (OFDM) technique, called 5G new radio (5G NR), as a new radio access technology (RAT). With the emergence of a novel physical-layer RAT, the need arises to design prospective next-generation technologies, especially with a focus on network intelligence. In such situations, machine learning (ML) techniques are expected to be essential in designing intelligent network resource allocation protocols that meet 5G NR URLLC requirements. Therefore, in this survey, we present the possibility of using federated reinforcement learning (FRL), one such ML technique, for 5G NR URLLC requirements and summarize the corresponding achievements for URLLC. We provide a comprehensive discussion of MAC-layer channel access mechanisms that enable URLLC in 5G NR for TI. In addition, we identify seven critical future use cases of FRL as potential enablers for URLLC in 5G NR.
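The short-packet regime behind the URLLC target quoted above (a 32-byte packet delivered with very high reliability) can be sanity-checked with the finite-blocklength normal approximation of Polyanskiy, Poor, and Verdu, R ≈ C − sqrt(V/n)·Q⁻¹(ε). The AWGN channel, the 0 dB SNR, and the blocklength values below are illustrative assumptions, not 3GPP evaluation settings.

```python
# Finite-blocklength normal approximation on an AWGN channel
# (illustrative sketch; SNR and blocklengths are assumed values).
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def error_prob(snr, n, k):
    """Approximate block error probability for k info bits in n channel
    uses at the given linear SNR."""
    c = math.log2(1 + snr)                                    # capacity
    v = (1 - 1 / (1 + snr) ** 2) * (math.log2(math.e)) ** 2   # dispersion
    return q_func((n * c - k) / math.sqrt(n * v))

k = 32 * 8          # the 32-byte URLLC packet from the abstract
snr = 1.0           # 0 dB, an assumed operating point
for n in (300, 350, 400):
    print(n, error_prob(snr, n, k))
```

The error probability falls steeply with blocklength, which illustrates why URLLC design must balance the sub-millisecond latency budget (short blocks) against the extreme reliability target (long blocks).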
Channel Access Management for Massive Cellular IoT Applications
As part of the steps taken towards improving the quality of life, many everyday activities as well as technological advancements rely more and more on smart devices. In the future, it is expected that every electric device will be a smart device that can be connected to the internet. This gives rise to the new network paradigm known as the massive cellular IoT, where a large number of simple, battery-powered, heterogeneous devices collectively work for the betterment of humanity in all aspects. However, different from traditional cellular communication networks, IoT applications produce uplink-heavy data traffic composed of a large number of small data packets with different quality of service (QoS) requirements. These unique characteristics pose a challenge to the current cellular channel access process and, hence, new and revolutionary access mechanisms are much needed. These access mechanisms need to be cost-effective, scalable, practical, energy and radio-resource efficient, and able to support a massive number of devices. Furthermore, due to the low computational capabilities of the devices, they cannot handle heavy networking intelligence and, thus, the designed channel access scheme should be simple and lightweight. Accordingly, in this research, we evaluate the suitability of the current channel access mechanism for massive applications and propose an energy-efficient and resource-preserving clustering and data aggregation solution. The proposed solution is tailored to the needs of future IoT applications.
First, we recognize that for many anticipated cellular IoT applications, providing energy-efficient and delay-aware access is crucial. However, in cellular networks, before devices transmit their data, they use a contention-based association protocol, known as the random access channel (RACH) procedure, which introduces extensive access delays and energy wastage as the number of contending devices increases. Modeling the performance of the RACH procedure is a challenging task due to the complexity of uplink transmission, which exhibits a wide range of interference components; nonetheless, it is an essential process that helps determine the applicability of the cellular IoT communication paradigm and sheds light on the main challenges. Consequently, we develop a novel mathematical framework based on stochastic geometry to evaluate the RACH procedure and identify its limitations in the context of cellular IoT applications with a massive number of devices. To do so, we study the traditional cellular association process and establish a mathematical model for its association success probability. The model accounts for device density, spatial characteristics of the network, the power control employed, and mutual interference among the devices. Our analysis and results highlight the shortcomings of the RACH procedure and give insights into the potential gains of employing power control techniques.
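The congestion trend analyzed above can be illustrated with a toy Monte Carlo of the RACH contention step: each of N contending devices picks one of M orthogonal preambles at random, and an attempt succeeds only if no other device chose the same preamble. This deliberately omits the interference, power control, and spatial (point-process) modeling of the full stochastic-geometry framework; the preamble count below is an assumed value.

```python
# Toy Monte Carlo of RACH preamble contention (collisions only;
# no interference or spatial modelling, unlike the full framework).
import numpy as np

rng = np.random.default_rng(1)

def rach_success_prob(n_devices, n_preambles, trials=20000):
    """Estimate the probability that a tagged device picks a preamble
    chosen by no other contending device in the same RACH slot."""
    picks = rng.integers(0, n_preambles, size=(trials, n_devices))
    clash = (picks[:, 1:] == picks[:, :1]).any(axis=1)
    return 1 - clash.mean()

M = 54                                  # assumed contention preambles per slot
for N in (10, 50, 200):
    est = rach_success_prob(N, M)
    exact = (1 - 1 / M) ** (N - 1)      # analytic collision-free probability
    print(N, round(est, 3), round(exact, 3))
```

Even this collision-only model shows the success probability collapsing as the number of contending devices grows, which is the motivation the thesis gives for clustering and data aggregation.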
Second, based on the analysis of the RACH procedure, we determine that, as the number of devices increases, the contention over the limited network radio resources increases, leading to network congestion. Accordingly, to avoid network congestion while supporting a large number of devices, we propose to use node clustering and data aggregation.
As the number of supported devices increases and their QoS requirements become more diverse, optimizing the node clustering and data aggregation processes becomes critical to handling the many trade-offs that arise among different network performance metrics. Furthermore, for cost effectiveness, we propose that the data aggregator nodes be cellular devices themselves; it is thus desirable to keep the number of aggregators to a minimum, so as to avoid congesting the RACH, while maximizing the number of successfully supported devices. Consequently, to tackle these issues, we explore the possibility of combining data aggregation and non-orthogonal multiple access (NOMA), and propose a novel two-hop NOMA-enabled network architecture. Concepts from queueing theory and stochastic geometry are jointly exploited to derive mathematical expressions for different network performance metrics, such as the coverage probability, the two-hop access delay, and the number of served devices per transmission frame. The established models characterize the relations among various network metrics and hence facilitate the design of the two-stage transmission architecture. Numerical results demonstrate that the proposed solution improves the overall access delay and energy efficiency compared to traditional OMA-based clustered networks.
Last, we recognize that under the proposed two-hop network architecture, devices are subject to access point association decisions, i.e., which access point a device associates with plays a major role in determining the overall network performance and the service perceived by the devices. Accordingly, in the third part of this work, we consider the optimization of the two-hop network from the point of view of user association, such that the number of QoS-satisfied devices is maximized while the overall device energy consumption is minimized. We formulate the problem as a joint access point association, resource utilization, and energy-efficient communication optimization problem that takes into account various networking factors, such as the number of devices, the number of data aggregators, the number of available resource units, interference, the transmission power limitations of the devices, aggregator transmission performance, and channel conditions. The objective is to show the usefulness of data aggregation and shed light on the importance of network design when the number of devices is massive. We propose a coalition-game-theory-based algorithm, PAUSE, to transform the optimization problem into a simpler form that can be solved in polynomial time. Different network scenarios are simulated to showcase the effectiveness of PAUSE and to draw observations on cost-effective, data-aggregation-enabled two-hop network design.
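The shape of the association problem described above can be sketched with a greatly simplified greedy heuristic: devices try their nearest aggregator first (lowest transmit power), and an association counts as QoS-satisfied only if the aggregator is in range and still has a free resource unit. This is a hypothetical illustration of the problem's constraints only; it is not the PAUSE coalition-game algorithm, and the geometry, capacity, and range values are invented for the example.

```python
# Hedged toy sketch of capacity-constrained device-to-aggregator
# association (NOT the PAUSE algorithm; all values are illustrative).
import math

def associate(devices, aggregators, capacity, max_range):
    """Greedy: each device tries feasible aggregators nearest-first."""
    load = {a: 0 for a in aggregators}
    assignment = {}
    for d in devices:
        for a in sorted(aggregators, key=lambda a: math.dist(d, a)):
            if math.dist(d, a) <= max_range and load[a] < capacity:
                assignment[d] = a       # in range and a resource unit is free
                load[a] += 1
                break                   # out-of-range / full: device unserved
    return assignment

devices = [(0, 0), (1, 0), (2, 0), (9, 0)]   # hypothetical positions
aggregators = [(0.5, 0), (8.5, 0)]
assoc = associate(devices, aggregators, capacity=2, max_range=2.0)
print(len(assoc))                            # QoS-satisfied devices
```

Here the device at (2, 0) is in range of a full aggregator and goes unserved, illustrating why the joint optimization over association, resource units, and power matters once the device count is massive.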