
    AI-Driven, Predictive QoS for V2X Communications in 5G and beyond.

    On the eve of 5G-enabled Connected and Automated Mobility, challenging Vehicle-to-Everything (V2X) services have emerged towards safer and automated driving. The requirements that stem from these services pose very strict challenges to the network, primarily with regard to end-to-end delay and service reliability. At the same time, emerging in-network Artificial Intelligence reveals a plethora of novel capabilities for the network to act in a proactive manner towards satisfying these challenging requirements.
    This thesis presents PreQoS, a predictive Quality of Service (QoS) mechanism that focuses on Vehicle-to-Everything services. PreQoS is able to predict specific QoS metrics, such as uplink and downlink data rate and end-to-end delay, in a timely manner, offering the network the time window required to allocate its resources more efficiently. On top of that, the proactive management of those resources enables the respective Vehicle-to-Everything services and applications to perform any required QoS-related adaptations in advance. The evaluation of the proposed mechanism in a realistic, simulated, Connected and Automated Mobility environment proves the viability and validity of such an approach.
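The abstract describes forecasting QoS metrics ahead of time so the network gains a window to react. As an illustration only (the actual PreQoS model is not specified in this abstract), a minimal sliding-window trend forecast over a series of downlink-rate samples might look like:

```python
def predict_next(samples, window=3):
    """Naive linear-trend forecast of the next QoS sample (e.g. downlink
    rate in Mbps) from the last `window` measurements."""
    recent = samples[-window:]
    # Average per-step change across the window, extrapolated one step ahead.
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    return recent[-1] + slope

# Hypothetical downlink-rate samples (Mbps) showing a degrading trend.
rates = [40.0, 38.0, 35.0, 31.0]
print(predict_next(rates))  # 27.5: the trend suggests the rate keeps dropping
```

A real predictive-QoS mechanism would use a learned model over many features (cell load, position, history); the point here is only that an early prediction gives the network and the V2X application time to adapt before the degradation occurs.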

    Scheduling algorithms for next generation cellular networks

    Next generation wireless and mobile communication systems are rapidly evolving to satisfy the demands of users. Due to spectrum scarcity and the time-varying nature of wireless networks, supporting user demand and achieving high performance necessitate the design of efficient scheduling and resource allocation algorithms. Opportunistic scheduling is a key mechanism for such a design, exploiting the time-varying nature of the wireless environment to improve the performance of wireless systems. In this thesis, our aim is to investigate various categories of practical scheduling problems and to design efficient policies with provably optimal or near-optimal performance. An advantage of opportunistic scheduling is that it can effectively be incorporated with new communication technologies to further increase network performance. We investigate two key technologies in this context. First, motivated by the current under-utilization of wireless spectrum, we characterize optimal scheduling policies for wireless cognitive radio networks by assuming that users always have data to transmit. We consider cooperative schemes in which secondary users share the time slot with primary users in return for cooperation, and our aim is to improve the primary system's performance over the non-cooperative case. By employing the Lyapunov optimization technique, we develop optimal scheduling algorithms which maximize the total expected utility and satisfy the minimum data rate requirements of the primary users. Next, we study the scheduling problem with multi-packet transmission. The motivation behind multi-packet transmission comes from the fact that the base station can send more than one packet simultaneously to more than one user. By considering unsaturated queueing systems, we aim to stabilize user queues.
    To this end, we develop a dynamic control algorithm which is able to schedule more than one user in a time slot by employing hierarchical modulation, which enables multi-packet transmission. Through the Lyapunov optimization technique, we show that our algorithm is throughput-optimal. We also study the resulting rate region of the developed policy and show that it is larger than that of single-user scheduling. Despite the advantages of opportunistic scheduling, this mechanism requires that the base station be aware of network conditions such as the channel state and queue length information of users. In the second part of this thesis, we turn our attention to the design of scheduling algorithms when complete network information is not available at the scheduler. In this regard, we study three sets of problems where the common objective is to stabilize user queues. Specifically, we first study a cellular downlink network by assuming that channels are identically distributed across time slots and that acquiring the channel state information of a user consumes a certain fraction of the resource which is otherwise used for data transmission. We develop a joint scheduling and channel probing algorithm which collects channel state information from only those users with sufficiently good channel quality. We also quantify the minimum number of users that must exist to achieve a larger rate region than the Max-Weight algorithm with complete channel state information. Next, we consider more practical channel models where channels can be time-correlated (possibly non-stationary) and only a fixed number of channels can be probed. We develop a learning-based scheduling algorithm which tracks and predicts the instantaneous transmission rates of users and makes joint scheduling and probing decisions based on the predicted rates rather than their exact values. We also characterize the achievable rate region of these policies as compared to the Max-Weight policy with exact channel state information.
    Finally, we study a cellular uplink system and develop a fully distributed scheduling algorithm which can operate over general fading channels and does not require explicit control-message passing among the users. When continuous backoff time is allowed, we show that the proposed distributed algorithm can achieve the same performance as the centralized Max-Weight algorithm in terms of both throughput and delay. When backoff time can take only discrete values, we show that our algorithm performs well at the expense of a small number of mini-slots for collision resolution.
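Several of the policies above are benchmarked against the classical Max-Weight rule, which in each slot serves the user maximizing the product of queue backlog and achievable transmission rate. A minimal single-slot sketch, with illustrative values not taken from the thesis:

```python
def max_weight_schedule(queues, rates):
    """Return the index of the user with the largest queue * rate weight."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

# Hypothetical 3-user downlink: backlogs (packets) and achievable rates
# (packets per slot) in the current time slot.
queues = [5, 2, 8]
rates = [1.0, 3.0, 0.5]
print(max_weight_schedule(queues, rates))  # 1: weights are 5.0, 6.0, 4.0
```

Note that user 1 wins despite not having the longest queue, because its channel is currently good; this coupling of backlog and channel state is what makes the policy throughput-optimal, and it is also why the rule needs the full channel and queue information whose absence the second part of the thesis addresses.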

    Intelligence in 5G networks

    Over the past decade, Artificial Intelligence (AI) has become an important part of our daily lives; however, its application to communication networks has been partial and unsystematic, with uncoordinated efforts that often conflict with each other. Providing a framework to integrate the existing studies and to actually build an intelligent network is a top research priority. In fact, one of the objectives of 5G is to manage all communications under a single overarching paradigm, and the staggering complexity of this task is beyond the scope of human-designed algorithms and control systems. This thesis presents an overview of all the components necessary to integrate intelligence into this complex environment, from a user-centric perspective: network optimization should always have the end goal of improving the experience of the user. Each step is described with the aid of one or more case studies, involving various network functions and elements. Starting from perception and prediction of the surrounding environment, the first core requirements of an intelligent system, this work gradually builds its way up to examples of fully autonomous network agents which learn from experience without any human intervention or pre-defined behavior, discussing the possible application of each aspect of intelligence in future networks.

    Predicting thunderstorm evolution using ground-based lightning detection networks

    Lightning measurements, acquired principally by a ground-based network of magnetic direction finders, are used to diagnose and predict the existence, temporal evolution, and decay of thunderstorms over a wide range of space and time scales extending over four orders of magnitude. The non-linear growth and decay of thunderstorms and their accompanying cloud-to-ground lightning activity is described by the three-parameter logistic growth model. The growth rate is shown to be a function of the storm size and duration, and the limiting value of the total lightning activity is related to the available energy in the environment. A new technique is described for removing systematic bearing errors from direction finder data, where radar echoes are used to constrain site error correction and optimization (best point estimate) algorithms. A nearest-neighbor pattern recognition algorithm is employed to cluster the discrete lightning discharges into storm cells, and the advantages and limitations of different clustering strategies for storm identification and tracking are examined.
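The three-parameter logistic growth model mentioned above can be written as N(t) = K / (1 + exp(-r (t - t0))), where K is the limiting cumulative flash count, r the growth rate, and t0 the inflection time (peak flash rate). A small sketch with hypothetical parameter values, not fitted to any real storm:

```python
import math

def logistic(t, K, r, t0):
    """Three-parameter logistic growth curve: cumulative flash count N(t)."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Hypothetical storm: limiting total of 500 flashes, inflection at t = 30 min.
K, r, t0 = 500.0, 0.2, 30.0
print(logistic(30.0, K, r, t0))  # 250.0: half the limit at the inflection
print(logistic(60.0, K, r, t0))  # close to the limit K late in the storm
```

In practice K, r, and t0 would be estimated by fitting the curve to the observed cumulative flash counts, which is what lets the model predict the remaining lightning activity while the storm is still growing.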

    SPARC 2016 Salford postgraduate annual research conference book of abstracts


    Deep learning-based hybrid short-term solar forecast using sky images and meteorological data

    The global growth of solar power generation is rapid, yet the complex nature of cloud movement introduces significant uncertainty into short-term solar irradiance, posing challenges for intelligent power systems. Accurate short-term solar irradiance and photovoltaic power generation predictions under cloudy skies are critical for sub-hourly electricity markets. Ground-based sky image (GSI) analysis using convolutional neural network (CNN) algorithms has emerged as a promising method due to advancements in machine vision models based on deep learning networks. In this work, a novel deep network, "ViT-E", based on an attention-mechanism Transformer architecture for short-term solar irradiance forecasting is proposed. This model enables cross-modality data parsing by establishing mapping relationships within GSI and between GSI, meteorological data, historical irradiation, clear-sky irradiation, and solar angles. The feasibility of the ViT-E network was assessed using the Folsom dataset from California, USA. Quantitative analysis showed that the ViT-E network achieved RMSE values of 81.45 W/m², 98.68 W/m², and 104.91 W/m² for 2, 6, and 10-minute forecasts, respectively, outperforming the persistence model by 4.87%, 16.06%, and 19.09% and displaying performance comparable to CNN-based models. Qualitative analysis revealed that the ViT-E network successfully predicted 20.21%, 33.26%, and 36.87% of solar slope events 2, 6, and 10 minutes in advance, respectively, surpassing the persistence model and the currently prevalent CNN-based model by 9.43%, 3.91%, and -0.55% for the 2, 6, and 10-minute forecasts, respectively. Transfer learning experiments were conducted to test the ViT-E model's generalisation under different climatic conditions and its performance on smaller datasets. We discovered that the weights learned from the three-year Folsom dataset in the United States could be transferred to a half-year local dataset in Nottingham, UK.
    Training with a dataset one-fifth the size of the original achieved baseline accuracy standards and reduced training time by 80.2%. Additionally, using a dataset equivalent to only 4.5% of the original size yielded a model with accuracy less than 2% below the baseline. These findings validated the generalisation and robustness of the model's trained weights. Finally, the ViT-E model architecture and hyperparameters were searched and optimised. Our investigation revealed that directly applying migrated deep vision models leads to redundancy in solar forecasting. We identified the best hyperparameters for ViT-E through manual exploration of the hyperparameter space. As a result, the model's computational efficiency improved by 60%, and prediction performance increased by 2.7%.
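The percentage improvements over the persistence model quoted above are forecast-skill figures: the relative RMSE reduction versus a "last value persists" baseline. The metric itself is simple to compute; the numbers below are made up for illustration and are not the Folsom results:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between forecasts and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def forecast_skill(model_rmse, baseline_rmse):
    """Percentage RMSE improvement over a baseline (e.g. persistence)."""
    return 100.0 * (1.0 - model_rmse / baseline_rmse)

obs = [500.0, 520.0, 480.0]          # observed irradiance, W/m^2
persistence = [490.0, 500.0, 470.0]  # "last value persists" baseline forecast
model = [498.0, 515.0, 482.0]        # hypothetical model forecasts
skill = forecast_skill(rmse(model, obs), rmse(persistence, obs))
print(round(skill, 1))  # 76.5: the model cuts persistence RMSE by ~77%
```

A skill of 0% means no better than persistence and negative values mean worse, which is why the -0.55% figure above indicates the CNN baseline edged out ViT-E at the 10-minute horizon on that event metric.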