Cognitive networking for next generation of cellular communication systems
This thesis presents a comprehensive study of cognitive networking for cellular networks, with contributions that enable them to be more dynamic, agile, and efficient. To achieve this, machine learning (ML) algorithms, a subset of artificial intelligence, are employed to bring such cognition to cellular networks. More specifically, three major branches of ML, namely supervised, unsupervised, and reinforcement learning (RL), are utilised for various purposes: unsupervised learning is used for data clustering, while supervised learning is employed to predict future behaviours of networks and users. RL, on the other hand, is utilised for optimisation purposes, owing to its inherent adaptability and its minimal need for knowledge of the environment.
Energy optimisation, capacity enhancement, and spectrum access are identified as primary design challenges for cellular networks, given the crucial roles they are envisioned to play in 5G and beyond, driven by growing demand in both the number of connected devices and data rates. Each design challenge and its corresponding proposed solution are discussed thoroughly in separate chapters.
Regarding energy optimisation, user-side energy consumption is investigated in the context of Internet of things (IoT) networks. An RL-based intelligent model, which jointly optimises the wireless connection type and the data processing entity, is proposed. In particular, a Q-learning algorithm is developed, through which the energy consumption of an IoT device is minimised while the requirements of the applications--in terms of response time and security--remain satisfied. The proposed methodology achieves a normalised joint cost--where all the considered metrics are combined--of 0%, while the benchmarks average 54.84%. Next, the energy consumption of radio access networks (RANs) is targeted, and a traffic-aware cell switching algorithm is designed to reduce the energy consumption of a RAN without compromising user quality-of-service (QoS). The proposed technique employs a SARSA algorithm with value function approximation, since conventional RL methods struggle with huge state spaces. The results reveal that the proposed technique achieves a gain of up to 52% in total energy consumption, with the gain observed to shrink as the scenario becomes more realistic.
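The Q-learning idea behind the IoT energy chapter can be illustrated with a minimal sketch. Everything below is invented for illustration (the thesis's actual states, actions, and joint-cost weights are not given here): the agent repeatedly picks a (connection type, processing entity) pair, observes an illustrative joint cost, and learns the pair that minimises it.

```python
import random

def q_learning_sketch():
    """Toy single-state Q-learning over (connection, processor) actions.
    Action names and cost values are hypothetical, not from the thesis."""
    actions = [("wifi", "local"), ("wifi", "edge"), ("lte", "local"), ("lte", "edge")]
    # Illustrative joint costs (energy + response-time/security penalty); lower is better.
    cost = {("wifi", "local"): 0.5, ("wifi", "edge"): 0.1,
            ("lte", "local"): 0.9, ("lte", "edge"): 0.4}
    q = {a: 0.0 for a in actions}          # Q-value estimate per action
    alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate
    rng = random.Random(0)
    for _ in range(2000):
        if rng.random() < epsilon:         # explore a random action
            a = rng.choice(actions)
        else:                              # exploit the current best estimate
            a = min(q, key=q.get)
        r = cost[a]                        # observed joint cost for this choice
        q[a] += alpha * (r - q[a])         # single-state update: Q <- Q + alpha*(r - Q)
    return min(q, key=q.get)               # learned best (connection, processor)
```

With the epsilon-greedy policy above, every action keeps being sampled, so each Q-value converges to its true joint cost and the learned optimum is the cheapest pair.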
On the other hand, capacity enhancement is studied from two different perspectives, namely mobility management and unmanned aerial vehicle (UAV) assistance.
Towards that end, a predictive handover (HO) mechanism is designed for mobility management in cellular networks by identifying two major issues of Markov chain based HO prediction. First, revisits--situations whereby a user visits the same cell more than once within the same day--are diagnosed as producing similar transition probabilities, which in turn increases the likelihood of incorrect predictions. This problem is addressed with a structural change: rather than storing a 2-D transition matrix, a 3-D one that also includes HO orders is stored. The obtained results show that the 3-D transition matrix reduces the HO signalling cost by up to 25.37%, a gain observed to drop as the randomness level of the data set increases. Second, making HO predictions with insufficient evidence is identified as another issue of conventional Markov chain based predictors. Thus, a prediction confidence level is derived to serve as a lower bound for performing HO predictions, which are not always advantageous owing to the HO signalling cost incurred by incorrect predictions. The simulation outcomes confirm that the derived confidence level mechanism improves the prediction accuracy by up to 8.23%.
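Both ideas, conditioning transitions on HO order and gating predictions by a confidence lower bound, can be sketched together. This is a toy reconstruction under my own interface names, not the thesis implementation: the "3-D" structure is approximated by keying transition counts on the previous *two* cells rather than the current cell alone.

```python
from collections import defaultdict

class HandoverPredictor:
    """Toy order-2 Markov HO predictor with a confidence lower bound.
    Class and method names are assumptions made for this sketch."""

    def __init__(self, confidence=0.6):
        # (prev_cell, current_cell) -> {next_cell: count}
        self.counts = defaultdict(lambda: defaultdict(int))
        self.confidence = confidence

    def observe(self, trajectory):
        """Record every (prev, current, next) triple from a daily cell trajectory."""
        for prev, cur, nxt in zip(trajectory, trajectory[1:], trajectory[2:]):
            self.counts[(prev, cur)][nxt] += 1

    def predict(self, prev, cur):
        """Return the most likely next cell, or None if the empirical
        confidence does not clear the lower bound (wrong predictions
        incur extra HO signalling cost, so abstaining can be cheaper)."""
        nxts = self.counts.get((prev, cur))
        if not nxts:
            return None
        total = sum(nxts.values())
        best = max(nxts, key=nxts.get)
        if nxts[best] / total < self.confidence:
            return None
        return best
```

On a trajectory with a revisit, e.g. A-B-C-B-A-B-C, a first-order predictor sees ambiguous B-to-C and B-to-A transitions, whereas the (prev, current) context separates them cleanly.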
Furthermore, still considering capacity enhancement, UAV-assisted cellular networking is considered, and an unsupervised learning-based UAV positioning algorithm is presented. A comprehensive analysis is conducted on the impact of the overlapping footprints of multiple UAVs, which are controlled through their altitudes. The developed k-means clustering based UAV positioning approach is shown to reduce the number of users in outage by up to 80.47% compared to the benchmark symmetric deployment.
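The k-means positioning idea can be sketched in a few lines: place each UAV at the centroid of a cluster of user coordinates, so that UAVs hover over dense user groups rather than on a fixed symmetric grid. This is a plain Lloyd's-algorithm sketch, not the thesis's code, and it ignores altitude/footprint control.

```python
import random

def position_uavs(users, k, iters=20, seed=0):
    """Return k UAV positions as k-means centroids of 2-D user coordinates.
    `users` is a list of (x, y) tuples; parameters are illustrative."""
    rng = random.Random(seed)
    centroids = rng.sample(users, k)               # initialise at random users
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for ux, uy in users:                        # assign users to nearest UAV
            i = min(range(k),
                    key=lambda j: (ux - centroids[j][0]) ** 2
                                + (uy - centroids[j][1]) ** 2)
            clusters[i].append((ux, uy))
        for j, pts in enumerate(clusters):          # move each UAV to its centroid
            if pts:                                 # keep old position if cluster emptied
                centroids[j] = (sum(p[0] for p in pts) / len(pts),
                                sum(p[1] for p in pts) / len(pts))
    return centroids
```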
Lastly, a QoS-aware dynamic spectrum access approach is developed to tackle challenges related to spectrum access, and it employs all the aforementioned types of ML methods. More specifically, by leveraging future traffic load predictions of radio access technologies (RATs) and a Q-learning algorithm, a novel proactive spectrum sensing technique is introduced. Two different sensing strategies are developed: the first focuses solely on reducing sensing latency, while the second jointly optimises sensing latency and user requirements. In particular, the proposed Q-learning algorithm takes the future load predictions of the RATs and the requirements of secondary users--in terms of mobility and bandwidth--as inputs and directs the users to the spectrum of the optimum RAT to perform sensing. The strategy to employ can be selected based on the needs of the application: if latency is the only concern, the first strategy should be chosen, because the second is computationally more demanding; the second strategy, however, reduces sensing latency while also satisfying the other user requirements. The simulation results demonstrate that, compared to random sensing, the first strategy reduces the sensing latency by 85.25%, while the second strategy improves the full-satisfaction rate--where both the mobility and bandwidth requirements of the user are simultaneously satisfied--by 95.7%.
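The core decision the two strategies make can be sketched as a greedy selection over predicted loads; this stands in for the learned Q-learning policy, and the RAT names, profile fields, and thresholds are all assumptions for illustration. Strategy 1 simply picks the RAT with the lowest predicted load (more idle channels, hence shorter expected sensing latency); strategy 2 first filters out RATs that cannot meet the secondary user's bandwidth and mobility requirements.

```python
def select_rat(predicted_load, rat_profiles=None, user_req=None):
    """Sketch of the two sensing strategies (all names/fields assumed).
    predicted_load: {rat: predicted load in [0, 1]}
    rat_profiles:   {rat: {"bandwidth": ..., "mobility": ...}} (strategy 2 only)
    user_req:       {"bandwidth": ..., "mobility": ...}        (strategy 2 only)
    """
    candidates = set(predicted_load)
    if user_req is not None and rat_profiles is not None:
        # Strategy 2: keep only RATs that satisfy the user's requirements.
        candidates = {r for r in candidates
                      if rat_profiles[r]["bandwidth"] >= user_req["bandwidth"]
                      and rat_profiles[r]["mobility"] >= user_req["mobility"]}
        if not candidates:
            return None  # no RAT can fully satisfy the user
    # Lower predicted load -> shorter expected sensing latency.
    return min(candidates, key=lambda r: predicted_load[r])
```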
In summary, three key design challenges of the next generation of cellular networks are identified and addressed via the concept of cognitive networking, providing a utilitarian tool for mobile network operators to plug into their systems. The proposed solutions can be generalised to various network scenarios owing to the sophisticated ML implementations, which renders them both practical and sustainable.
A Survey of Anticipatory Mobile Networking: Context-Based Classification, Prediction Methodologies, and Optimization Techniques
A growing trend in information technology is to not just react to changes but to anticipate them as much as possible. This paradigm has made modern solutions, such as recommendation systems, a ubiquitous presence in today's digital transactions. Anticipatory networking extends the idea to communication technologies by studying patterns and periodicity in human behavior and network dynamics to optimize network performance. This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance. In particular, we identify the main prediction and optimization tools adopted in this body of work and link them with the objectives and constraints of the typical applications and scenarios. Finally, we consider open challenges and research directions to make anticipatory networking part of next generation networks.
A survey of machine learning techniques applied to self organizing cellular networks
In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self organizing cellular networks is performed. For future networks to overcome the limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Network (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also provides future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
Prediction Quality of Service in 5G Networks
On the eve of 5G-enabled Connected and Automated Mobility, challenging Vehicle-to-Everything services have emerged towards safer and automated driving. The requirements
that stem from those services pose very strict challenges to the network primarily with regard to the end-to-end delay and service reliability. At the same time, the in-network Artificial Intelligence that is emerging, reveals a plethora of novel capabilities of the network to
act in a proactive manner towards satisfying the aforementioned challenging requirements.
This work presents PreQoS, a predictive Quality of Service mechanism that focuses on
Vehicle-to-Everything services. PreQoS is able to timely predict specific Quality of Service
metrics, such as uplink and downlink data rate and end-to-end delay, in order to offer the
required time window to the network to allocate more efficiently its resources. On top of
that, the proactive management of those resources enables the respective Vehicle-to-Everything services and applications to perform any potential Quality of Service-related required adaptations in advance. The evaluation of the proposed mechanism based on a realistic, simulated, Connected and Automated Mobility environment proves the viability and
validity of such an approach.
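The PreQoS abstract does not specify its models, so as an illustrative stand-in only, QoS prediction can be pictured as fitting a supervised regressor from a context feature to a QoS metric; here, a least-squares line mapping a hypothetical measured feature (say, SNR in dB) to downlink data rate.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; a minimal stand-in for the
    supervised QoS predictors (features and targets are assumptions)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx  # slope, intercept

def predict(model, x):
    """Predict the QoS metric for a new context sample."""
    a, b = model
    return a * x + b
```

A real predictor would use richer features (location, load, speed) and a stronger model, but the train-then-predict-ahead shape is the same.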
Dynamic spectrum allocation following machine learning-based traffic predictions in 5G
© 2021 IEEE. The popularity of mobile broadband connectivity continues to grow and thus, future wireless networks are expected to serve a very large number of users demanding a huge capacity. Employing larger spectral bandwidth and installing more access points to enhance the capacity is not enough to tackle the stated challenge, owing to the costs and interference issues involved. Frequency resources are thus becoming one of the most valuable assets, requiring proper utilization and fair distribution. Traditional frequency resource management strategies are often based on static approaches and are agnostic to the instantaneous demand of the network. These static approaches tend to cause congestion in a few cells while, at the same time, wasting those precious resources on others. Therefore, such static approaches are not efficient enough to deal with the capacity challenge of future networks. Thus, in this paper we present a dynamic access-aware bandwidth allocation approach, which follows the dynamic traffic requirements of each cell and allocates the required bandwidth accordingly from a common spectrum pool that gathers the entire system bandwidth. We evaluate our proposal by means of real network traffic traces. The evaluation results depict the performance gain of the proposed dynamic access-aware approach compared to two different traditional approaches in terms of utilization and served traffic.
Moreover, to acquire knowledge about access network requirements, we present a machine learning-based approach, which predicts the state of the network and is utilized to manage the available spectrum accordingly. Our comparative results show that, in terms of spectrum allocation accuracy and utilization efficiency, a well-designed machine learning-based bandwidth allocation mechanism not only outperforms common static approaches, but even achieves the performance (with a relative error close to 0.04) of an ideal dynamic system with perfect knowledge of future traffic requirements.
This work was supported in part by the EU Horizon 2020 Research and Innovation Program (5GAuRA) under Grant 675806, and in part by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement from the Generalitat de Catalunya under Grant 2017 SGR 376.
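The demand-following allocation from a common pool can be sketched simply; this is an illustrative scheme, not the paper's exact algorithm: when the pool covers the total predicted demand, every cell gets what it asks for; under congestion, the pool is shared proportionally to demand.

```python
def allocate_bandwidth(pool_mhz, demand_mhz):
    """Allocate a common spectrum pool across cells by predicted demand.
    pool_mhz: total system bandwidth in the pool.
    demand_mhz: {cell_id: predicted demand in MHz} (names illustrative)."""
    total = sum(demand_mhz.values())
    if total <= pool_mhz:
        return dict(demand_mhz)              # no congestion: serve every demand
    scale = pool_mhz / total                 # congestion: proportional fair share
    return {cell: d * scale for cell, d in demand_mhz.items()}
```

The contrast with a static scheme is that here an idle cell's unused share automatically flows to congested cells instead of being wasted.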
Contextual Beamforming: Exploiting Location and AI for Enhanced Wireless Telecommunication Performance
The pervasive nature of wireless telecommunication has made it the foundation
for mainstream technologies like automation, smart vehicles, virtual reality,
and unmanned aerial vehicles. As these technologies experience widespread
adoption in our daily lives, ensuring the reliable performance of cellular
networks in mobile scenarios has become a paramount challenge. Beamforming, an
integral component of modern mobile networks, enables spatial selectivity and
improves network quality. However, many beamforming techniques are iterative,
introducing unwanted latency to the system. In recent times, there has been a
growing interest in leveraging mobile users' location information to expedite
beamforming processes. This paper explores the concept of contextual
beamforming, discussing its advantages, disadvantages and implications.
Notably, the study presents an impressive 53% improvement in signal-to-noise
ratio (SNR) by implementing the maximum ratio transmission (MRT) adaptive beamforming algorithm compared
to scenarios without beamforming. It further elucidates how MRT contributes to
contextual beamforming. The importance of localization in implementing
contextual beamforming is also examined. Additionally, the paper delves into
the use of artificial intelligence schemes, including machine learning and deep
learning, in implementing contextual beamforming techniques that leverage user
location information. Based on the comprehensive review, the results suggest
that the combination of MRT and Zero forcing (ZF) techniques, alongside deep
neural networks (DNN) employing Bayesian Optimization (BO), represents the most
promising approach for contextual beamforming. Furthermore, the study discusses
the future potential of programmable switches, such as Tofino, in enabling
location-aware beamforming.
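Textbook MRT, which the abstract's SNR comparison builds on, fits in a few lines: steer along the conjugate of the channel vector, normalised to unit transmit power. This is the standard formulation, not the paper's code.

```python
import math

def mrt_weights(h):
    """Maximum ratio transmission weights for a complex channel vector h:
    w = conj(h) / ||h||, i.e. unit-power matched filtering at the transmitter."""
    norm = math.sqrt(sum(abs(c) ** 2 for c in h))
    return [c.conjugate() / norm for c in h]

def rx_power(h, w):
    """Received signal power |h^T w|^2 for transmit weights w."""
    return abs(sum(hi * wi for hi, wi in zip(h, w))) ** 2
```

By the Cauchy-Schwarz inequality, MRT attains the maximum possible |h^T w|^2 = ||h||^2 over all unit-power weight vectors, which is why it serves as the adaptive-beamforming baseline in SNR comparisons.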
Neural network design for intelligent mobile network optimisation
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Mobile network users' demands for data services are increasing exponentially, driven by two main factors: the evolution of smartphones and their applications, and emerging technologies such as the Internet of Things and smart cities, which keep pumping more data into the network, even though most of the data routed in the current mobile network is non-live data. This growth in demand obliges mobile network operators to keep improving their networks, by adding hardware, increasing resources, or a combination of both. Radio resources are strictly limited owing to spectrum licensing and availability, so efficient spectrum utilisation is a major goal for both network operators and developers. Simultaneous and multiple channel access, and adding more cells to the network, are ways to increase the data exchanged between network nodes. The current 4G mobile system is based on Orthogonal Frequency Division Multiple Access (OFDMA) for accessing the medium, and inter-cell interference degrades the link quality at the cell edge; with the introduction of the heterogeneity concept to LTE in Release 10 of 3GPP, the handover process became even more complex. To mitigate inter-cell interference at the cell edge, coordinated multipoint and carrier aggregation techniques are utilised for dual connectivity. This work focuses on designing and proposing features that enhance network performance and sustainability; these features comprise distributing small cells for data-only transmission, evaluating handover scheme performance at the cell edge with dual connectivity, and applying Artificial Intelligence for load balancing and prediction.
In the proposed model design, the data and control of the Small eNodeB (SeNodeB) are processed at the network edge using a Mobile Edge Computing (MEC) server, and the SeNodeBs are used to boost the services provided to users. The concept of caching data is also investigated, with caching units implemented at different network levels. The proposed system and resource management are simulated using the OPNET modeller and evaluated through multiple scenarios with and without full load. The UE is reconfigured to accommodate dual connectivity with two separate connections for uplink and downlink: it maintains its connection to the macro cell via the uplink, while the downlink is dedicated to small cells when content is requested from the cache. The results clearly show that the proposed system decreases latency, while the total throughput delivered by the network improves markedly when SeNodeBs are deployed; rising throughput increases overall capacity, allowing better services for existing users or more users to join and benefit from the network. Handover improvement is also considered in this work: with the help of two Artificial Intelligence (AI) entities, better handover performance is achieved. Balancing the load over the SeNodeBs results in less frequent handovers. The proposed load balancer is based on an artificial neural network clustering model with a self-organising map as a hidden layer; it is trained to forecast the network condition and learns to reduce the number of handovers, especially for UEs at the cell edge, by performing only the necessary ones and avoiding handovers to the macro cell in the downlink direction. The examined handovers concern downlinks routing non-live video stored in the small cells' caches, and a reduction in frequent handovers was achieved when running the balancer.
Staying with handover, another way to preserve and utilise network resources is to predict handovers before they occur and allocate the required data at the target SeNodeB. The predictor entity in the proposed system architecture combines the features of a Radial Basis Function neural network and a neural network time-series tool to create and update a prediction list from the system's collected data and learn to predict the next SeNodeB to associate with. The prediction entity is simulated using MATLAB, and the results show that the system delivered up to 92% correct handover predictions, leading to an overall throughput improvement of 75%.
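The flavour of an RBF-based next-cell predictor can be conveyed with a toy kernel vote; this is illustrative only (the thesis uses a trained RBF network plus a time-series tool, not this scheme): each historical sample of context features votes for its observed next SeNodeB, weighted by a Gaussian kernel on the current features.

```python
import math

def rbf_predict(history, query, gamma=1.0):
    """Toy RBF-style next-SeNodeB predictor (names and features assumed).
    history: list of (feature_tuple, next_cell) samples, e.g. (x, y) positions.
    query:   current feature tuple.
    Each sample votes with weight exp(-gamma * ||features - query||^2)."""
    scores = {}
    for feats, nxt in history:
        d2 = sum((f - q) ** 2 for f, q in zip(feats, query))
        scores[nxt] = scores.get(nxt, 0.0) + math.exp(-gamma * d2)
    return max(scores, key=scores.get)  # highest-scoring candidate cell
```

With the predicted target known in advance, the required content can be pre-allocated at that SeNodeB's cache before the handover actually happens.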