Traffic classification and prediction, and fast uplink grant allocation for machine type communications via support vector machines and long short-term memory
Abstract. Current random access (RA) allocation techniques suffer from congestion and high signaling overhead when serving machine-type communication (MTC) applications. 3GPP has therefore introduced the need for fast uplink grant (FUG) allocation. This thesis proposes a novel FUG allocation scheme based on a support vector machine (SVM) and long short-term memory (LSTM). First, MTC devices are prioritized using an SVM classifier. Second, an LSTM architecture is used to predict the activation time of each device. Both results are combined to build a resource scheduler that is efficient in terms of average latency and total throughput. Furthermore, a set of correction techniques is introduced to overcome classification and prediction errors. The Coupled Markov Modulated Poisson Process (CMMPP) traffic model is applied to compare the proposed FUG allocation against existing allocation techniques, and an extended CMMPP-based traffic model is used to evaluate the proposed algorithm in a denser network. Our simulation results show that the proposed model outperforms the existing RA allocation schemes, achieving the highest throughput and the lowest access delay when serving the target massive and critical MTC applications.
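To make the two-stage pipeline concrete, here is a minimal Python sketch of the idea described above: an SVM classifier assigns priority classes to MTC devices, and a small LSTM regresses each device's next activation time. The features, labels, and network sizes are illustrative placeholders, not the thesis's actual design.

# Minimal sketch of the two-stage FUG idea: SVM priority classification,
# then LSTM activation-time prediction. All data here is synthetic.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# --- Stage 1: SVM priority classification ---------------------------------
# Each device is described by toy features, e.g. (traffic rate, delay
# budget, packet size); the label "delay-critical vs. tolerant" is invented.
X = rng.normal(size=(200, 3))
y = (X[:, 1] < 0).astype(int)           # toy label: 1 = delay-critical
svm = SVC(kernel="rbf").fit(X, y)
priority = svm.predict(X)               # critical devices scheduled first

# --- Stage 2: LSTM activation-time prediction ------------------------------
class ActivationLSTM(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # predict next activation time

model = ActivationLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
seq = torch.cumsum(torch.rand(32, 10, 1), dim=1)   # toy activation traces
target = seq[:, -1] + torch.rand(32, 1)            # toy "next activation"
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(seq), target)
    loss.backward()
    opt.step()

# A scheduler would then pre-allocate uplink grants to high-priority
# devices around their predicted activation times.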
Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions
The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the applications of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.

Comment: 37 pages, 8 figures, 7 tables, submitted for possible future publication in IEEE Communications Surveys and Tutorials
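As a concrete illustration of the low-complexity Q-learning approach the survey highlights, the following sketch applies stateless (bandit-style) Q-learning to Random Access slot selection, where devices independently learn to avoid collisions. The environment and reward are toy assumptions, not taken from the paper.

# Toy Q-learning for RA slot selection: each device keeps one Q-row over
# slots; reward is +1 for a collision-free transmission, -1 on collision.
import numpy as np

rng = np.random.default_rng(1)
n_devices, n_slots, episodes = 10, 10, 2000
alpha, eps = 0.1, 0.1                   # learning rate, exploration rate
Q = np.zeros((n_devices, n_slots))

for _ in range(episodes):
    # epsilon-greedy slot choice per device
    greedy = Q.argmax(axis=1)
    explore = rng.integers(n_slots, size=n_devices)
    mask = rng.random(n_devices) < eps
    slots = np.where(mask, explore, greedy)

    # +1 if a device transmits alone in its slot, -1 on collision
    counts = np.bincount(slots, minlength=n_slots)
    reward = np.where(counts[slots] == 1, 1.0, -1.0)

    idx = np.arange(n_devices)
    Q[idx, slots] += alpha * (reward - Q[idx, slots])

# With enough slots, devices typically converge to distinct slots.
print("distinct slots claimed:", len(set(Q.argmax(axis=1).tolist())), "of", n_devices)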
Packet Scheduling Algorithms in LTE/LTE-A Cellular Networks: Multi-agent Q-learning Approach
Spectrum utilization is vital for mobile operators: it ensures efficient use of spectrum bands, especially when obtaining their licenses is highly expensive. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) spectrum band licenses were auctioned by the Federal Communications Commission (FCC) to mobile operators for hundreds of millions of dollars. In the first part of this dissertation, we study, analyze, and compare the QoS performance of QoS-aware/channel-aware packet scheduling algorithms that use Carrier Aggregation (CA) over LTE and LTE-A heterogeneous cellular networks. This includes a detailed study of the LTE/LTE-A cellular network and its features, and the modification of an open-source LTE simulator to perform these QoS performance tests. In the second part of this dissertation, we aim to address spectrum underutilization by proposing, implementing, and testing two novel multi-agent Q-learning-based packet scheduling algorithms for LTE cellular networks: the Collaborative-Competitive scheduling algorithm and the Competitive-Competitive scheduling algorithm. These algorithms schedule licensed users over the available radio resources and unlicensed users over spectrum holes. In conclusion, our results show that the spectrum band can be utilized by deploying efficient packet scheduling algorithms for licensed users, and can be further utilized by allowing unlicensed users to be scheduled on spectrum holes whenever they occur.
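The following is a hedged sketch of the competitive multi-agent idea: licensed-user agents learn to spread over resource blocks, while unlicensed-user agents learn to occupy whatever blocks remain idle (spectrum holes). The rewards and update rule are illustrative assumptions, not the dissertation's exact formulation.

# Two agent populations over shared resource blocks (RBs): licensed agents
# avoid colliding with each other; unlicensed agents get rewarded only for
# grabbing an RB that no licensed user holds.
import numpy as np

rng = np.random.default_rng(2)
n_rb, n_lic, n_unlic = 12, 8, 3
alpha, eps = 0.1, 0.1
Q_lic = np.zeros((n_lic, n_rb))
Q_unl = np.zeros((n_unlic, n_rb))

def eps_greedy(Q):
    greedy = Q.argmax(axis=1)
    explore = rng.integers(n_rb, size=Q.shape[0])
    return np.where(rng.random(Q.shape[0]) < eps, explore, greedy)

for _ in range(3000):
    lic = eps_greedy(Q_lic)
    occupied = np.bincount(lic, minlength=n_rb)
    r_lic = np.where(occupied[lic] == 1, 1.0, -1.0)   # collision -> penalty
    Q_lic[np.arange(n_lic), lic] += alpha * (r_lic - Q_lic[np.arange(n_lic), lic])

    unl = eps_greedy(Q_unl)
    taken = occupied + np.bincount(unl, minlength=n_rb)
    # unlicensed reward: +1 only when alone on an RB no licensed user holds
    r_unl = np.where((occupied[unl] == 0) & (taken[unl] == 1), 1.0, -1.0)
    Q_unl[np.arange(n_unlic), unl] += alpha * (r_unl - Q_unl[np.arange(n_unlic), unl])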
Narrowband IoT: from the end device to the cloud. An experimental end-to-end study
This thesis presents a novel study and experimental evaluation of a cloud IoT application communicating over an Italian NB-IoT network. To date, no studies have been presented on the interactions between the NB-IoT network and the cloud. This thesis not only fills this gap but also shows the use of Cognitive Services to interact, through the human voice, with the IoT application. Compared with other types of mobile networks, NB-IoT proved the best choice for this application.
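For flavour, here is a minimal device-to-cloud sketch of the kind of telemetry path such a study exercises, using MQTT from Python. The broker host, topic layout, and payload schema are hypothetical placeholders; the thesis's actual cloud stack and its Cognitive Services voice front end are not reproduced here.

# Hedged sketch: an NB-IoT end device (or a gateway on its behalf)
# publishing telemetry to a cloud MQTT broker. Endpoint and schema are
# assumptions, not the thesis's setup.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"        # hypothetical cloud endpoint
TOPIC = "nbiot/device42/telemetry"   # hypothetical topic layout

client = mqtt.Client(client_id="device42")        # paho-mqtt 1.x constructor
client.connect(BROKER, port=1883, keepalive=600)  # long keepalive suits NB-IoT power saving
client.loop_start()

payload = json.dumps({"ts": time.time(), "temp_c": 21.7, "rssi_dbm": -97})
info = client.publish(TOPIC, payload, qos=1)      # QoS 1: at-least-once over lossy links
info.wait_for_publish()
client.loop_stop()
client.disconnect()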
Neural network design for intelligent mobile network optimisation
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Mobile network users' demands for data services are increasing exponentially, due to two main factors: the evolution of smartphones and their applications, and the emerging technologies of the Internet of Things, smart cities, and so on, which keep pumping more data into the network, even though most of the data routed in current mobile networks is non-live data. This growth in demand forces mobile network operators to keep improving their networks, by adding hardware, increasing resources, or a combination of both. Radio resources are strictly limited by spectrum licensing and availability, so efficient spectrum utilization is a major goal for both network operators and developers. Simultaneous and multiple channel access, and adding more cells to the network, are common ways to increase the data exchanged between network nodes. The current 4G mobile system is based on Orthogonal Frequency Division Multiple Access (OFDMA) for medium access, and intercell interference degrades the link quality at the cell edge; with the introduction of the heterogeneity concept to LTE in Release 10 of the 3GPP, the handover process became even more complex. To mitigate intercell interference at the cell edge, coordinated multipoint and carrier aggregation techniques are utilized for dual connectivity.

This work focuses on designing and proposing enhancements to improve network performance and sustainability. These enhancements comprise distributing small cells for data-only transmission, evaluating handover schemes at the cell edge with dual connectivity, and applying Artificial Intelligence (AI) for load balancing and prediction. In the proposed design, the data and control of the Small eNodeB (SeNodeB) are processed at the network edge using a Mobile Edge Computing (MEC) server, and the SeNodeBs are used to boost the services provided to users. The concept of caching data has also been investigated, with caching units implemented at different network levels. The proposed system and resource management are simulated using the OPNET modeller and evaluated through multiple scenarios with and without full load. The UE is reconfigured to support dual connectivity, with two separate connections for uplink and downlink: the uplink maintains the connection to the macro cell, while the downlink is dedicated to the small cells when content is requested from the cache. The results clearly show that the proposed system decreases latency, while the total throughput delivered by the network improves substantially when SeNodeBs are deployed; rising throughput increases overall capacity, which allows better services for existing users or lets more users join and benefit from the network.

Handover improvement is also considered in this work: with the help of two AI entities, better handover performance is achieved. Balancing the load over the SeNodeBs results in less frequent handovers. The proposed load balancer is based on an artificial neural network clustering model with a self-organizing map as a hidden layer; it is trained to forecast the network condition and learns to reduce the number of handovers, especially for UEs at the cell edge, by performing only the necessary ones and avoiding handovers to the macro cell in the downlink direction. The examined handovers concern downlinks routing non-live video stored in the small cells' caches, and a reduction in frequent handovers was achieved when running the balancer. Staying with handovers, another way to preserve and utilize network resources is to predict handovers before they occur and allocate the required data in the target SeNodeB. The predictor entity in the proposed system architecture combines the features of a Radial Basis Function neural network and a neural network time-series tool to create and update a prediction list from the system's collected data, learning to predict the next SeNodeB to associate with. The prediction entity is simulated using MATLAB, and the results show that the system was able to deliver up to 92% correct handover predictions, which led to an overall throughput improvement of 75%.
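A minimal sketch of the self-organizing-map clustering idea behind the proposed load balancer, assuming the third-party MiniSom library: per-cell load vectors are clustered on a small SOM, and edge UEs are steered toward the least-loaded cluster. The features and steering rule are illustrative assumptions, not the thesis design.

# Cluster toy per-cell load vectors with a 4x4 SOM, then pick the
# least-utilised cluster as the handover target.
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(3)
# toy features per small cell: (active UEs, PRB utilisation, edge-UE RSRP)
loads = rng.random((40, 3))

som = MiniSom(4, 4, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=3)
som.train_random(loads, num_iteration=500)

# map each cell to its winning SOM node; nodes act as load clusters
clusters = np.array([som.winner(v) for v in loads])
util = {tuple(c): loads[(clusters == c).all(axis=1), 1].mean()
        for c in np.unique(clusters, axis=0)}
target = min(util, key=util.get)   # least-utilised cluster
candidates = np.where((clusters == target).all(axis=1))[0]
print("cells in least-loaded cluster (handover targets):", candidates)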