Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions
The ever-increasing number of resource-constrained Machine-Type Communication
(MTC) devices is leading to the critical challenge of fulfilling diverse
communication requirements in dynamic and ultra-dense wireless environments.
Among the different application scenarios that the upcoming 5G and beyond cellular
networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive
Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications
(URLLC), mMTC brings the unique technical challenge of supporting a huge number of
MTC devices, which is the main focus of this paper. The related challenges include
Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic,
huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this
paper aims to
identify and analyze the involved technical issues, to review recent advances,
to highlight potential solutions and to propose new research directions. First,
starting with an overview of mMTC features and QoS provisioning issues, we
present the key enablers for mMTC in cellular networks. Along with the
highlights on the inefficiency of the legacy Random Access (RA) procedure in
the mMTC scenario, we then present the key features and channel access
mechanisms in the emerging cellular IoT standards, namely LTE-M and Narrowband IoT (NB-IoT).
Subsequently, we present a framework for the performance analysis of
transmission scheduling with the QoS support along with the issues involved in
short data packet transmission. Next, we provide a detailed overview of the
existing and emerging solutions for addressing the RAN congestion problem, and
then identify potential advantages, challenges and use cases for the
applications of emerging Machine Learning (ML) techniques in ultra-dense
cellular networks. Out of several ML techniques, we focus on the application of
the low-complexity Q-learning approach in mMTC scenarios, along with recent
advances towards enhancing its learning performance and convergence. Finally, we
discuss some open research challenges and promising future research directions.
Comment: 37 pages, 8 figures, 7 tables; submitted for possible future publication in IEEE Communications Surveys and Tutorials
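The low-complexity Q-learning approach highlighted in the abstract is commonly studied for random-access resource selection, where each MTC device independently learns which RA slot or preamble to favour. The following is a minimal sketch of one common formulation, not the paper's exact scheme; class names, the reward values, and the parameters are illustrative:

```python
import random

class QLearningRAAgent:
    """Stateless Q-learning agent that learns which of K random-access
    slots/preambles to favour. Hypothetical sketch: names, rewards and
    parameters are illustrative, not taken from the surveyed paper."""

    def __init__(self, num_actions: int, alpha: float = 0.1,
                 epsilon: float = 0.1):
        self.q = [0.0] * num_actions   # one Q-value per RA resource
        self.alpha = alpha             # learning rate
        self.epsilon = epsilon         # exploration probability

    def choose(self) -> int:
        if random.random() < self.epsilon:            # explore
            return random.randrange(len(self.q))
        best = max(self.q)                            # exploit, random ties
        return random.choice([i for i, v in enumerate(self.q) if v == best])

    def update(self, action: int, success: bool) -> None:
        reward = 1.0 if success else -1.0             # -1 on collision
        self.q[action] += self.alpha * (reward - self.q[action])

def simulate(num_devices: int = 20, num_slots: int = 20,
             rounds: int = 500) -> int:
    """Contending devices pick slots each round; returns the number of
    devices still colliding in the final round."""
    agents = [QLearningRAAgent(num_slots) for _ in range(num_devices)]
    collided = 0
    for _ in range(rounds):
        picks = [agent.choose() for agent in agents]
        load = {slot: picks.count(slot) for slot in picks}
        collided = sum(1 for slot in picks if load[slot] > 1)
        for agent, slot in zip(agents, picks):
            agent.update(slot, load[slot] == 1)
    return collided
```

With epsilon-greedy exploration and a negative collision reward, contending devices tend to settle on distinct resources over time, which is the intuition behind collision reduction in such learning-assisted schemes.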
The Four-C Framework for High Capacity Ultra-Low Latency in 5G Networks: A Review
Network latency will be a critical performance metric for the Fifth Generation (5G) networks
expected to be fully rolled out in 2020 through the IMT-2020 project. The multi-user multiple-input
multiple-output (MU-MIMO) technology is a key enabler for the 5G massive connectivity criterion,
especially from the massive densification perspective. It appears that 5G MU-MIMO will
face a daunting task in achieving an end-to-end 1 ms ultra-low latency budget if traditional
network set-up criteria are strictly adhered to. Moreover, 5G latency will have the added
dimensions of scalability and flexibility compared to previously deployed technologies. The
scalability dimension caters for meeting rapid demand as new applications evolve, while
flexibility complements it by investigating novel non-stacked protocol architectures. The
goal of this review paper is to present an ultra-low latency reduction framework for 5G
communications that considers flexibility and scalability. The Four-C framework, consisting
of cost, complexity, cross-layer and computing, is analyzed and discussed, covering several
emerging technologies: software-defined networking (SDN), network function virtualization
(NFV) and fog networking. This review will contribute significantly towards the future
implementation of flexible, high-capacity, ultra-low latency 5G communications.
Congestion Control for Massive Machine-Type Communications: Distributed and Learning-Based Approaches
The Internet of Things (IoT) is going to shape the future of wireless communications by allowing seamless connections among a wide range of everyday objects. Machine-to-machine (M2M) communication is known to be the enabling technology for the development of the IoT. With M2M, devices are allowed to interact and exchange data with little or no human intervention. Recently, M2M communication, also referred to as machine-type communication (MTC), has received increased attention due to its potential to support diverse applications including eHealth, industrial automation, intelligent transportation systems, and smart grids.
M2M communication is known to have specific features and requirements that differ from those of traditional human-to-human (H2H) communication. As specified by the Third Generation Partnership Project (3GPP), MTC devices are inexpensive, low-power, and mostly low-mobility devices. Furthermore, MTC devices are usually characterized by infrequent transmissions, small amounts of data, and mainly uplink traffic. Most importantly, the number of MTC devices is expected to vastly surpass that of H2H devices; smart cities are an example of such mass-scale deployment. These features impose various challenges related to efficient energy management, enhanced coverage, and diverse quality of service (QoS) provisioning, among others.
The diverse applications of M2M are going to lead to exponential growth in M2M traffic. With massive M2M deployment, an enormous number of devices are expected to access the wireless network concurrently; hence, network congestion is likely to occur. Cellular networks have been recognized as excellent candidates for M2M support. Indeed, cellular networks are mature, well-established networks whose ubiquitous coverage and reliability allow cost-effective deployment of M2M communications. However, cellular networks were originally designed for human-centric services with high-cost devices and ever-increasing rate requirements. Additionally, the conventional random access (RA) mechanism used in Long Term Evolution-Advanced (LTE-A) networks lacks the capability to handle the enormous number of access attempts expected from massive MTC. In particular, this RA technique acts as a performance bottleneck due to frequent collisions that lead to excessive delay and resource wastage. Also, the lengthy handshaking process of the conventional RA technique results in highly expensive signaling, especially for M2M devices with small payloads. Therefore, designing efficient medium access schemes is critical for the survival of M2M networks.
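The collision bottleneck described above can be made concrete with a textbook model: if each of N simultaneously contending devices picks one of M preambles uniformly at random, a given device succeeds only when no other device chooses the same preamble. A small sketch of this idealized single-shot calculation (it ignores retransmissions, back-off, and power ramping):

```python
def ra_success_probability(num_devices: int, num_preambles: int) -> float:
    """Probability that a given device's random-access attempt is
    collision-free, assuming every contending device picks one of
    `num_preambles` preambles uniformly at random (idealized model)."""
    return (1.0 - 1.0 / num_preambles) ** (num_devices - 1)

# With the 54 contention-based preambles commonly configured in LTE,
# success degrades sharply as the number of contending devices grows:
print(ra_success_probability(10, 54))   # roughly 0.85
print(ra_success_probability(200, 54))  # roughly 0.02
```

Even this optimistic model shows why the legacy RA procedure collapses under massive MTC loads: success probability decays exponentially in the number of contenders while the preamble pool stays fixed.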
In this thesis, we study the uplink access of M2M devices with a focus on overload control and congestion handling. In this regard, we provide two different access techniques, keeping in mind the distinct features and requirements of MTC, including massive connectivity, latency reduction, and energy management. In fact, full information gathering is known to be impractical for such massive networks with a tremendous number of devices. Hence, we preserve low complexity and limit the information exchange among different network entities by introducing distributed techniques. Furthermore, machine learning is also employed to enhance performance with little or no information exchange at the decision maker. The proposed techniques are assessed via extensive simulations as well as rigorous analytical frameworks.
First, we propose an efficient distributed overload control algorithm for M2M with massive access, referred to as M2M-OSA. The proposed algorithm can efficiently allocate the available network resources to a massive number of devices within a relatively small and bounded contention time and with reduced overhead. By resolving collisions, the proposed algorithm is capable of achieving full resource utilization along with reduced average access delay and energy savings. For Beta-distributed traffic, we provide an analytical evaluation of the performance of the proposed algorithm in terms of access delay, total service time, energy consumption, and blocking probability. This performance assessment accounts for various scenarios, including slightly and seriously congested cases, in addition to finite and infinite retransmission limits for the devices. Moreover, we discuss the non-ideal situations that could be encountered in real-life deployment of the proposed algorithm, supported by possible solutions. For further energy savings, we introduce a modified version of M2M-OSA with a traffic regulation mechanism.
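The Beta-distributed traffic mentioned above typically refers to the bursty-arrival model of 3GPP TR 37.868, in which a device population activates over a window of about 10 s with a Beta(3, 4) time profile. A short sketch of computing the expected number of activations in a given interval (the function names and the numeric integration are illustrative, not the thesis's analysis):

```python
import math

def beta_arrival_intensity(t: float, total_time: float = 10.0,
                           a: float = 3.0, b: float = 4.0) -> float:
    """Activation intensity p(t) of the Beta(3, 4) bursty-traffic model
    over [0, total_time], as in 3GPP TR 37.868."""
    x = t / total_time
    beta_fn = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1) * (1.0 - x) ** (b - 1) / (beta_fn * total_time)

def expected_arrivals(num_devices: int, start: float, end: float,
                      steps: int = 1000) -> float:
    """Expected number of device activations in [start, end],
    integrating the intensity with the trapezoid rule."""
    h = (end - start) / steps
    total = 0.5 * (beta_arrival_intensity(start) + beta_arrival_intensity(end))
    for i in range(1, steps):
        total += beta_arrival_intensity(start + i * h)
    return num_devices * total * h
```

Over the whole window the integral recovers the full population, while the intensity peaks around t = 4 s; this concentration of arrivals is what makes the traffic bursty rather than uniform and drives the congested cases analyzed above.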
In the second part of the thesis, we adopt a promising alternative to the conventional random access mechanism, namely the fast uplink grant. The fast uplink grant was first proposed by the 3GPP for latency reduction; it allows the base station (BS) to directly schedule MTC devices (MTDs) without receiving any scheduling requests. In our work, to handle the major challenges associated with the fast uplink grant, namely active set prediction and optimal scheduling, both non-orthogonal multiple access (NOMA) and learning techniques are utilized. In particular, we propose a two-stage NOMA-based fast uplink grant scheme that first employs multi-armed bandit (MAB) learning to schedule the fast-grant devices with no prior information about their QoS requirements or channel conditions at the BS. Afterwards, NOMA facilitates grant sharing, where pairing is done in a distributed manner to reduce signaling overhead. In the proposed scheme, NOMA plays a major role in decoupling the two major challenges of fast-grant schemes by permitting pairing with only active MTDs. Consequently, the wastage of resources due to traffic prediction errors can be significantly reduced. We devise an abstraction model for the source traffic predictor needed for the fast grant such that the prediction error can be evaluated. Accordingly, the performance of the proposed scheme is analyzed in terms of average resource wastage and outage probability. The simulation results show the effectiveness of the proposed method in saving scarce resources while verifying the accuracy of the analysis. In addition, the ability of the proposed scheme to pick quality MTDs under strict latency constraints is demonstrated.
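The MAB learning stage can be illustrated with a generic UCB1 scheduler that treats each MTD as an arm and collects a reward of 1 whenever the granted device turns out to be active (its grant was not wasted). This is a simplified stand-in under assumed names, not the thesis's exact two-stage NOMA-based formulation:

```python
import math
import random

class UCB1Scheduler:
    """Generic UCB1 bandit treating each MTD as an arm; reward is 1 when
    a granted device was actually active (grant not wasted), else 0.
    Illustrative sketch only, not the thesis's exact formulation."""

    def __init__(self, num_devices: int):
        self.counts = [0] * num_devices    # grants issued per device
        self.values = [0.0] * num_devices  # empirical mean reward per device

    def select(self, t: int) -> int:
        # Grant every device once before applying the UCB rule.
        for device, count in enumerate(self.counts):
            if count == 0:
                return device
        scores = [value + math.sqrt(2.0 * math.log(t) / count)
                  for value, count in zip(self.values, self.counts)]
        return scores.index(max(scores))

    def update(self, device: int, reward: float) -> None:
        # Incremental update of the running mean reward.
        self.counts[device] += 1
        self.values[device] += (reward - self.values[device]) / self.counts[device]
```

Run over many scheduling rounds against devices with different activity probabilities, such a scheduler concentrates grants on devices whose grants are rarely wasted, which mirrors the goal of predicting the active set without explicit scheduling requests.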
A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions
The Internet has made several giant leaps over the years, from a fixed to a
mobile Internet, then to the Internet of Things, and now to a Tactile Internet.
The Tactile Internet goes far beyond data, audio and video delivery over fixed
and mobile networks, and even beyond allowing communication and collaboration
among things. It is expected to enable haptic communication and allow skill set
delivery over networks. Some examples of potential applications are
tele-surgery, vehicle fleets, augmented reality and industrial process
automation. Several papers already cover many of the Tactile Internet-related
concepts and technologies, such as haptic codecs, applications, and supporting
technologies. However, none of them offers a comprehensive survey of the
Tactile Internet, including its architectures and algorithms. Furthermore, none
of them provides a systematic and critical review of the existing solutions. To
address these lacunae, we provide a comprehensive survey of the architectures
and algorithms proposed to date for the Tactile Internet. In addition, we
critically review them using a well-defined set of requirements and discuss
some of the lessons learned as well as the most promising research directions.