29 research outputs found

    A Survey on Cellular-connected UAVs: Design Challenges, Enabling 5G/B5G Innovations, and Experimental Advancements

    As an emerging field of aerial robotics, Unmanned Aerial Vehicles (UAVs) have gained significant research interest within the wireless networking research community. As soon as national legislation allows UAVs to fly autonomously, we will see swarms of UAVs populating the skies of our smart cities to accomplish different missions: parcel delivery, infrastructure monitoring, event filming, surveillance, tracking, etc. The UAV ecosystem can benefit from existing 5G/B5G cellular networks, which can be exploited in different ways to enhance UAV communications. Because of the inherent characteristics of UAVs, namely flexible mobility in 3D space, autonomous operation, and intelligent placement, these smart devices cater to a wide range of wireless applications and use cases. This work presents an in-depth exploration of the integration synergies between 5G/B5G cellular systems and UAV technology, where the UAV is integrated as a new aerial User Equipment (UE) into existing cellular networks. In this integration, UAVs perform the role of flying users within cellular coverage, and they are thus termed cellular-connected UAVs (a.k.a. UAV-UE, drone-UE, 5G-connected drone, or aerial user). The main focus of this work is an extensive study of the integration challenges, the key 5G/B5G technological innovations, and the ongoing efforts in design prototyping and field trials corroborating cellular-connected UAVs. This study highlights recent progress with respect to 3GPP standardization and emphasizes the socio-economic concerns that must be accounted for before this promising technology can be successfully adopted. Various open problems paving the path to future research opportunities are also discussed. Comment: 30 pages, 18 figures, 9 tables, 102 references, journal submission.

    Cache-Aware Adaptive Video Streaming in 5G Networks

    Dynamic Adaptive Streaming over HTTP (DASH) has prevailed as the dominant way of transmitting video over the Internet. The technology is based on downloading small sequential video segments from a server. However, one challenge that has not been adequately examined is retrieving video segments from more than one server in a way that serves both the needs of the network and the improvement of the user's Quality of Experience (QoE). This thesis investigates this problem by simulating a network with multiple video servers and a video client. It implements both peer-to-many communication in the context of adaptive video streaming and a video server selection algorithm, based on proposed criteria intended to improve the state of the network and/or the user's experience. All of this is explored in Mininet, a network emulator, in order to simulate the DASH technology with the help of the emulator's network nodes. Initially, the video was split into small segments using the ffmpeg tool, and experiments were then conducted in which a client requested the video from a cache server. If a segment was not present on the cache server, a request was forwarded from the cache server to a server holding all segments of the video (the main server). These experiments also examined the added network traffic, leading to the conclusion that the Mininet environment imposes unavoidable limitations in this respect: we observed that the main server's channel remained inactive throughout the cache server's requests, resulting in unrealistic network conditions. For this reason, we implemented a new approach that eliminates the Mininet environment, working on new techniques for injecting network traffic and modifying how the servers communicate with each other.
    In this way, we were able to show the limitations of the previous approach more clearly and to conclude that cache servers are a useful tool for increasing the user's Quality of Experience under certain conditions. The general tendency observed was that, as the available cache size increased, video playback quality improved to some extent. At the same time, however, this degree of improvement is tightly coupled to the segment selection algorithm in use. For even better results, it is therefore necessary to find the right balance between cache capacity and the segment selection algorithm. The thesis is organized as follows: Chapter 1 reviews the historical background of networking. Chapter 2 analyzes Dynamic Adaptive Streaming over HTTP. Chapter 3 analyzes the different caching techniques. Chapter 4 presents the concept of Quality of Experience and its correlation with many other factors. Chapter 5 describes in detail the process of setting up the environment and the various tools necessary for our implementation. Chapter 6 covers the Mininet experiments, the topology and the full set-up, as well as the reasons that led us to a different approach. Chapter 7 proposes this different approach and presents the methodology and the metrics, along with an analysis of the diagrams extracted from the metrics. Finally, Chapter 8 summarizes the conclusions and discusses future research directions for improving the Quality of Experience even further.
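The cache-miss flow described in the abstract (client asks the cache server; on a miss, the cache server pulls the segment from the main server) can be sketched as follows. This is an illustrative stand-in, not the thesis's implementation: the class names, the fixed capacity, and the naive FIFO eviction are all assumptions.

```python
# Hypothetical sketch of the described cache-miss flow. CacheServer, MainServer
# and the FIFO eviction policy are illustrative assumptions, not from the thesis.

class MainServer:
    """Holds every segment of the video."""
    def __init__(self, segments):
        self._segments = dict(segments)

    def get(self, seg_id):
        return self._segments[seg_id]

class CacheServer:
    """Serves segments from a bounded cache; on a miss, pulls from the main server."""
    def __init__(self, origin, capacity):
        self.origin = origin
        self.capacity = capacity
        self._cache = {}          # seg_id -> payload
        self.hits = 0
        self.misses = 0

    def get(self, seg_id):
        if seg_id in self._cache:
            self.hits += 1
            return self._cache[seg_id]
        self.misses += 1
        payload = self.origin.get(seg_id)        # extra hop: adds latency and traffic
        if len(self._cache) >= self.capacity:
            self._cache.pop(next(iter(self._cache)))  # naive FIFO eviction
        self._cache[seg_id] = payload
        return payload

main = MainServer({i: f"seg-{i}" for i in range(10)})
cache = CacheServer(main, capacity=4)
for i in [0, 1, 0, 2, 1, 3]:                     # a client's segment requests
    cache.get(i)
print(cache.hits, cache.misses)                  # 2 4
```

The hit/miss counters make the abstract's trade-off concrete: a larger `capacity` raises the hit ratio (better QoE), while the eviction/selection policy decides how well that capacity is used.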

    A survey of multi-access edge computing in 5G and beyond: fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Multi-access edge computing (MEC), a key technology in the emerging fifth generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large volumes of data before sending them to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. MEC therefore enables a wide variety of applications in which real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. We then outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works and discuss challenges and potential future directions for MEC research.
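The core reason MEC helps the resource-constrained devices mentioned above is a simple latency comparison: offloading wins when the time to ship the input over the RAN plus the edge execution time beats local execution. A minimal sketch, with all parameter values assumed for illustration:

```python
# Illustrative latency trade-off behind MEC offloading (not from the survey).
# All rates and cycle counts below are assumed example values.

def local_latency(cycles, local_cps):
    """Time to run the task on the device's own CPU (cycles per second)."""
    return cycles / local_cps

def offload_latency(input_bits, uplink_bps, cycles, edge_cps):
    """Time to ship the input over the RAN plus time to run it at the edge."""
    return input_bits / uplink_bps + cycles / edge_cps

def should_offload(input_bits, uplink_bps, cycles, local_cps, edge_cps):
    return offload_latency(input_bits, uplink_bps, cycles, edge_cps) \
        < local_latency(cycles, local_cps)

# A heavy task (1e9 CPU cycles, 2 Mbit of input) on a 1 GHz handset versus a
# 20 GHz edge server reached over a 100 Mbit/s uplink:
print(should_offload(2e6, 100e6, 1e9, 1e9, 20e9))   # True: 0.02s + 0.05s < 1.0s
```

The same comparison flips for tasks with large inputs and little computation, which is why offloading decisions are made per task rather than globally.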

    Bottleneck Identification in Cloudified Mobile Networks Based on Distributed Telemetry

    Cloudified mobile networks are expected to deliver a multitude of services with reduced capital and operating expenses. A characteristic example is 5G networks serving several slices in parallel. Such mobile networks therefore need to ensure that the SLAs of customised end-to-end sliced services are met. This requires monitoring the resource usage and characteristics of data flows at the virtualised network core, as well as tracking the performance of the radio interfaces and UEs. A centralised monitoring architecture cannot scale to support millions of UEs, though. This paper proposes a 2-stage distributed telemetry framework in which UEs act as early warning sensors. After a UE flags an anomaly, an ML model is activated at the network controller to attribute the cause of the anomaly. The framework achieves an 85% F1-score in detecting anomalies caused by different bottlenecks, and an overall 89% F1-score in attributing these bottlenecks. This accuracy of our distributed framework is similar to that of a centralised monitoring system, but without the overhead of transmitting UE-based telemetry data to the centralised controller. The study also finds that passive in-band network telemetry has the potential to replace active monitoring and can further reduce the overhead of a network monitoring system.
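The 2-stage split described above (a cheap anomaly check on the UE, cause attribution at the controller) can be sketched as follows. The KPI names, the 30% deviation threshold, and the rule-based attribution are illustrative stand-ins for the paper's actual ML classifier:

```python
# Hypothetical sketch of the 2-stage telemetry idea. Thresholds, KPI names and
# the rule-based "model" are illustrative assumptions, not the paper's method.

def ue_flags_anomaly(kpis, baseline, tolerance=0.3):
    """Stage 1 (on the UE): flag if any KPI deviates >30% from its baseline."""
    return any(abs(kpis[k] - baseline[k]) / abs(baseline[k]) > tolerance
               for k in baseline)

def attribute_cause(kpis):
    """Stage 2 (at the controller): crude rule-based attribution, standing in
    for the trained ML model."""
    if kpis["rtt_ms"] > 100 and kpis["throughput_mbps"] < 5:
        return "core-bottleneck"
    if kpis["rsrp_dbm"] < -110:
        return "radio-coverage"
    return "unknown"

baseline = {"rtt_ms": 30, "throughput_mbps": 50, "rsrp_dbm": -90}
sample = {"rtt_ms": 150, "throughput_mbps": 2, "rsrp_dbm": -92}
if ue_flags_anomaly(sample, baseline):           # only then does stage 2 run
    print(attribute_cause(sample))               # core-bottleneck
```

The overhead saving comes from the guard: telemetry leaves the UE only when stage 1 fires, instead of being streamed continuously to a central collector.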

    Mobile Oriented Future Internet (MOFI)

    This Special Issue consists of seven papers that discuss how to enhance mobility management and its associated performance in the mobile-oriented future Internet (MOFI) environment. The first two papers deal with the architectural design and experimentation of mobility management schemes, in which new schemes are proposed and real-world testbed experimentations are performed. The subsequent three papers focus on the use of software-defined networks (SDN) for effective service provisioning in the MOFI environment, together with real-world practices and testbed experimentations. The remaining two papers discuss network engineering issues in newly emerging mobile networks, such as flying ad-hoc networks (FANETs) and connected vehicular networks.

    Resource Management From Single-domain 5G to End-to-End 6G Network Slicing: A Survey

    Network Slicing (NS) is one of the pillars of the fifth/sixth generation (5G/6G) of mobile networks. It provides the means for Mobile Network Operators (MNOs) to leverage physical infrastructure across different technological domains to support different applications. This survey analyzes the progress made on NS resource management across these domains, with a focus on the interdependence between domains and unique issues that arise in cross-domain and End-to-End (E2E) settings. Based on a generic problem formulation, NS resource management functionalities (e.g., resource allocation and orchestration) are examined across domains, revealing their limits when applied separately per domain. The appropriateness of different problem-solving methodologies is critically analyzed, and practical insights are provided, explaining how resource management should be rethought in cross-domain and E2E contexts. Furthermore, the latest advancements are reported through a detailed analysis of the most relevant research projects and experimental testbeds. Finally, the core issues facing NS resource management are dissected, and the most pertinent research directions are identified, providing practical guidelines for new researchers.
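To make the "resource allocation" functionality mentioned above concrete, here is an illustrative toy (not the survey's generic formulation): capacity on a single link is split across slices by first granting each slice its SLA minimum, then sharing the remainder in proportion to priority weights. All slice names and numbers are assumptions.

```python
# Illustrative toy slice allocation: SLA minimums first, then weighted sharing
# of the spare capacity. Not the survey's formulation; all values are assumed.

def allocate(capacity, slices):
    """slices: {name: (min_rate, weight)} -> {name: allocated_rate}"""
    mins = {s: m for s, (m, _) in slices.items()}
    if sum(mins.values()) > capacity:
        raise ValueError("SLA minimums exceed capacity; admission control failed")
    spare = capacity - sum(mins.values())
    total_w = sum(w for _, w in slices.values())
    return {s: m + spare * w / total_w for s, (m, w) in slices.items()}

# 100 units of capacity across an eMBB, a URLLC and an mMTC slice:
alloc = allocate(100, {"embb": (30, 3), "urllc": (20, 2), "mmtc": (10, 1)})
print(alloc)   # {'embb': 50.0, 'urllc': 33.33..., 'mmtc': 16.66...}
```

Even this toy shows why per-domain allocation is limiting: run independently in the RAN, transport, and core domains, nothing guarantees that the per-domain shares compose into a consistent E2E slice, which is the cross-domain issue the survey dissects.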

    NOVEL USER-CENTRIC ARCHITECTURES FOR FUTURE GENERATION CELLULAR NETWORKS: DESIGN, ANALYSIS AND PERFORMANCE OPTIMIZATION

    Ambitious targets for aggregate throughput, energy efficiency (EE) and ubiquitous user experience are propelling the advent of ultra-dense networks. Inter-cell interference and high energy consumption in an ultra-dense network are the prime hindering factors in pursuit of these goals. To address this challenge, we investigate the idea of transforming network design from being base station-centric to user-centric. To this end, we develop a mathematical framework and analyze multiple variants of user-centric networks, with the help of advanced scientific tools such as stochastic geometry, game theory, optimization theory and deep neural networks. We first present a user-centric radio access network (RAN) design and then propose novel base station association mechanisms by forming virtual dedicated cells around users scheduled for the downlink. The design question that arises is: what should the ideal size of the dedicated regions around scheduled users be? To answer this question, we follow a stochastic-geometry-based approach to quantify the area spectral efficiency (ASE) and energy efficiency (EE) of a user-centric Cloud RAN architecture. Observing that the two efficiency metrics have conflicting optimal user-centric cell sizes, we propose a game-theoretic self-organizing network (GT-SON) framework that can orchestrate the network between ASE- and EE-focused operational modes in real time, in response to changes in network conditions and the operator's revenue model, to achieve a Pareto optimal solution. The designed model is shown to outperform the base-station-centric design in terms of both ASE and EE in dense deployment scenarios. Taking this user-centric approach as a baseline, we improve the ASE and EE performance by introducing flexibility in the dimensions of the user-centric regions as a function of the data requirement of each device.
So instead of optimizing the network-wide ASE or EE, each user device competes for a user-centric region based on its data requirements. This competition is modeled via an evolutionary game and a Vickrey-Clarke-Groves auction. The data-requirement-based flexibility in the user-centric RAN architecture not only improves the ASE and EE, but also reduces the scheduling wait time per user. Offloading dense user hotspots to short-range mmWave cells promises to meet the enhanced mobile broadband requirements of 5G and beyond. To investigate how the three key enablers, i.e., user-centric virtual cell design, ultra-dense deployments and mmWave communication, can be integrated, we propose a multi-tier Stienen-geometry-based user-centric architecture. Taking into account the characteristics of the mmWave propagation channel, such as blockage and fading, we develop a statistical framework for deriving the coverage probability of an arbitrary user equipment scheduled within the proposed architecture. A key advantage observed through this architecture is a significant reduction in scheduling latency compared to the baseline user-centric model. Furthermore, the interplay between certain system design parameters was found to orchestrate the ASE-EE tradeoff within the proposed network design. We extend this work by framing a stochastic optimization problem over the design parameters for a Pareto optimal ASE-EE tradeoff with random placements of mobile users, macro base stations and mmWave cells within the network. To solve this optimization problem, we follow a deep learning approach to estimate optimal design parameters in real-time complexity. Our results show that if the deep learning model is trained with sufficient data and tuned appropriately, it yields near-optimal performance while eliminating the issue of long processing times needed for system-wide optimization.
The contributions of this dissertation have the potential to cause a paradigm shift from the reactive cell-centric network design to an agile user-centric design that enables real-time optimization capabilities, ubiquitous user experience, higher system capacity and improved network-wide energy efficiency.
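The stochastic-geometry workflow the dissertation relies on (random base station placement, nearest-BS association, fading, then a coverage-probability estimate) can be illustrated with a minimal Monte Carlo sketch. This is a generic textbook-style setup with assumed parameters, not the dissertation's multi-tier Stienen model:

```python
# Minimal Monte Carlo illustration of a stochastic-geometry coverage estimate.
# Generic nearest-BS model with assumed parameters; NOT the dissertation's
# multi-tier Stienen-cell architecture.
import math
import random

def coverage_probability(n_bs=20, radius=10.0, alpha=4.0, thr_db=0.0,
                         trials=2000, seed=0):
    """Empirical P(SIR > threshold) for a typical user at the origin, with
    nearest-BS association, path-loss exponent alpha and Rayleigh fading."""
    rng = random.Random(seed)
    thr = 10 ** (thr_db / 10)
    covered = 0
    for _ in range(trials):
        # Drop n_bs base stations uniformly in a disk (a stand-in for one PPP
        # realization); for a uniform point in a disk, distance to the centre
        # is R * sqrt(U) with U uniform on [0, 1).
        dists = sorted(radius * math.sqrt(rng.random()) for _ in range(n_bs))
        # Unit-mean Rayleigh fading gives exponential received-power factors.
        powers = [rng.expovariate(1.0) * d ** (-alpha) for d in dists]
        signal, interference = powers[0], sum(powers[1:])   # nearest BS serves
        if signal > thr * interference:
            covered += 1
    return covered / trials

print(coverage_probability())
```

Sweeping a design parameter (e.g., base station density or the association radius) through such a simulator is exactly the kind of evaluation loop that becomes too slow for real-time use, motivating the deep-learning surrogate described above.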

    Artificial intelligence empowered virtual network function deployment and service function chaining for next-generation networks

    The entire Internet of Things (IoT) ecosystem is moving towards a high volume of diverse applications. From smart healthcare to smart cities, every ubiquitous digital sector provisions automation for an immersive experience. Augmented/virtual reality, remote surgery, and autonomous driving expect high data rates and ultra-low latency. The Network Function Virtualization (NFV) approach of decoupling software services from proprietary devices has become extremely popular in IoT infrastructure, as it cuts back significant deployment and maintenance expenditure in the telecommunication industry. Another highly visible technological trend for delay-sensitive IoT applications is multi-access edge computing (MEC). MEC brings NFV to the network edge (in closer proximity to users) for faster computation. Given the massive pool of IoT services in the NFV context, the need for efficient edge service orchestration is constantly growing. The emerging challenges are the collaborative optimization of resource utilization and the assurance of Quality-of-Service (QoS) through prompt orchestration in dynamic, congested, and resource-hungry IoT networks. Traditional mathematical programming models are NP-hard to solve, and hence inappropriate for time-sensitive IoT environments. In this thesis, we promote the need to go beyond these realms and leverage artificial intelligence (AI) based decision-makers for "smart" service management. We offer different methods of integrating supervised and reinforcement learning techniques to support future-generation wireless network optimization problems. Due to the combinatorial explosion of some service orchestration problems, supervised learning outperforms reinforcement learning on them. Unfortunately, open-access and standardized datasets for this research area are still in their infancy.
    Thus, we utilize the optimal results retrieved by Integer Linear Programming (ILP) to build labeled datasets for training supervised models (e.g., artificial neural networks, convolutional neural networks). Furthermore, we find that ensemble models are better than complex single networks for intelligent service orchestration at the control layer. Contrarily, we employ Deep Q-learning (DQL) for heavily constrained service function chaining optimization. We carefully address key performance indicators (e.g., optimality gap, service time, relocation and communication costs, resource utilization, scalability, intelligence) to evaluate the viability of prospective orchestration schemes. We envision that AI-enabled network management can be regarded as a pioneering trend to scale down massive IoT resource fabrication costs, upgrade profit margins for providers, and sustain QoS mutually.
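The dataset-labeling step described above (solve each problem instance exactly, then use the optimum as the supervised label) can be sketched as follows. Since an ILP solver is out of scope here, exhaustive search stands in for the ILP; the cost model and instance are illustrative assumptions, not the thesis's formulation:

```python
# Hypothetical sketch of labeling one VNF-placement instance with its exact
# optimum. Brute force stands in for the ILP solver; the capacity constraint
# and chaining-cost model are illustrative assumptions.
from itertools import product

def optimal_placement(vnf_demands, node_capacities, link_cost):
    """Assign each VNF of a chain to a node so that node capacities hold and
    the chaining cost is minimal; returns (assignment, cost)."""
    nodes = range(len(node_capacities))
    best, best_cost = None, float("inf")
    for assignment in product(nodes, repeat=len(vnf_demands)):
        load = [0] * len(node_capacities)
        for vnf, node in enumerate(assignment):
            load[node] += vnf_demands[vnf]
        if any(l > c for l, c in zip(load, node_capacities)):
            continue                      # infeasible: a node is overloaded
        # Chaining cost: traffic crosses a link whenever two consecutive VNFs
        # of the chain sit on different nodes.
        cost = sum(link_cost for a, b in zip(assignment, assignment[1:]) if a != b)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

# One labeled training sample: a 3-VNF chain on 2 nodes.
label, cost = optimal_placement([2, 2, 3], [4, 4], link_cost=1)
print(label, cost)   # (0, 0, 1) 1
```

Repeating this over many randomly generated instances yields (features, optimal label) pairs; the combinatorial explosion of `product(nodes, repeat=...)` is precisely why the exact solver is used offline for labeling while a learned model answers at run time.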

    Enabling Technology in Optical Fiber Communications: From Device, System to Networking

    This book explores the enabling technology in optical fiber communications. It focuses on the state-of-the-art advances from fundamental theories, devices, and subsystems to networking applications, as well as future perspectives of optical fiber communications. The topics covered include integrated photonics, fiber optics, fiber and free-space optical communications, and optical networking.