
    Wireless Communication using Unmanned Aerial Vehicles (UAVs): Optimal Transport Theory for Hover Time Optimization

    In this paper, the effective use of flight-time-constrained unmanned aerial vehicles (UAVs) as flying base stations that can provide wireless service to ground users is investigated. In particular, a novel framework for optimizing the performance of such UAV-based wireless systems in terms of the average number of bits (data service) transmitted to users as well as the UAVs' hover duration (i.e., flight time) is proposed. In the considered model, UAVs hover over a given geographical area to serve ground users that are distributed within the area according to an arbitrary spatial distribution function. Two practical scenarios are considered. In the first scenario, given the maximum possible hover times of the UAVs, the average data service delivered to the users under a fair resource allocation scheme is maximized by finding the optimal cell partitions associated with the UAVs. Using the mathematical framework of optimal transport theory, a gradient-based algorithm is proposed for optimally partitioning the geographical area based on the users' distribution, the hover times, and the locations of the UAVs. In the second scenario, given the load requirements of the ground users, the minimum average hover time that the UAVs need to completely service their ground users is derived. To this end, first, an optimal bandwidth allocation scheme for serving the users is proposed. Then, given this optimal bandwidth allocation, the optimal cell partitions associated with the UAVs are derived by exploiting optimal transport theory. Results show that our proposed cell partitioning approach leads to significantly higher fairness among the users compared to the classical weighted Voronoi diagram. In addition, our results reveal an inherent tradeoff between the hover time of the UAVs and bandwidth efficiency while serving the ground users.
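    The abstract describes the algorithm only at a high level; as an illustrative sketch (not the authors' code), a gradient-based, capacity-aware partition in the spirit of semi-discrete optimal transport can be built from a weighted (power) diagram: each UAV carries a dual weight that is adjusted by gradient steps until its cell's user share matches a target proportional to its hover time. All function names and parameters below are assumptions.

```python
import numpy as np

def partition(users, uav_pos, capacity, steps=500, eta=0.5):
    """Capacity-aware cell partitioning via a weighted (power) diagram.

    users:    (n, 2) user locations sampled from the spatial distribution
    uav_pos:  (m, 2) UAV locations
    capacity: (m,) target fraction of users per UAV (e.g., proportional
              to each UAV's hover time), summing to 1
    Returns an (n,) array assigning each user to a UAV cell.
    """
    n, m = len(users), len(uav_pos)
    w = np.zeros(m)  # dual weights, one per UAV cell
    d2 = ((users[:, None, :] - uav_pos[None, :, :]) ** 2).sum(-1)  # (n, m)
    for _ in range(steps):
        assign = np.argmin(d2 - w, axis=1)            # power-diagram rule
        share = np.bincount(assign, minlength=m) / n  # realized cell loads
        w += eta * (capacity - share)                 # gradient step on duals
    return assign

# Toy usage: 3 UAVs with hover times 1:2:3, users uniform on the unit square.
rng = np.random.default_rng(0)
users = rng.random((3000, 2))
uavs = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
hover = np.array([1.0, 2.0, 3.0])
cells = partition(users, uavs, hover / hover.sum())
print(np.bincount(cells) / len(users))  # realized shares approach [1/6, 1/3, 1/2]
```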

    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    The limited resources of edge servers make it difficult to serve a large number of Mobile Devices' (MDs) requests simultaneously. The Mobile Network Operator (MNO) must therefore decide how to delegate MD requests to its Mobile Edge Computing (MEC) server so as to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can improve MNO performance thanks to the UAVs' flexible deployment and high mobility and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, in turn, is a promising way to bridge the gap between the MDs' limited processing resources and the high computing demands of upcoming applications. This study reviews research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models utilized for computation offloading in such networks. In addition, it examines several intelligent pricing techniques for different structures in the UAV-MEC network. Finally, it highlights important open research issues and future research directions for AI in computation offloading and for applying intelligent pricing strategies in the UAV-MEC network.
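    As a toy illustration of the MNO-profit versus MD-cost trade-off discussed above (our sketch, not a model from the survey), a greedy admission rule can delegate the most profitable requests to the MEC server while respecting capacity and latency bounds; all names and numbers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Request:
    cycles: float    # CPU cycles the task needs
    deadline: float  # latency bound in seconds
    price: float     # what the MD offers to pay

def admit(requests, capacity_hz):
    """Greedy admission: favor requests with the highest price per cycle,
    admitting one only if the remaining capacity can still meet its
    deadline. A toy stand-in for the MNO profit-maximization problem."""
    admitted, used = [], 0.0
    for r in sorted(requests, key=lambda r: r.price / r.cycles, reverse=True):
        free = capacity_hz - used
        if free > 0 and r.cycles / free <= r.deadline:
            admitted.append(r)
            used += r.cycles / r.deadline  # cycles/s reserved for the deadline
    return admitted, sum(r.price for r in admitted)

reqs = [Request(2e9, 0.5, 3.0), Request(1e9, 0.1, 2.5), Request(4e9, 1.0, 3.5)]
chosen, profit = admit(reqs, capacity_hz=12e9)
print(len(chosen), profit)
```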

    Five Facets of 6G: Research Challenges and Opportunities

    Whilst the fifth-generation (5G) systems are being rolled out across the globe, researchers have turned their attention to the exploration of radical next-generation solutions. At this early evolutionary stage we survey five main research facets of this field, namely {\em Facet~1: next-generation architectures, spectrum and services, Facet~2: next-generation networking, Facet~3: Internet of Things (IoT), Facet~4: wireless positioning and sensing, as well as Facet~5: applications of deep learning in 6G networks.} In this paper, we have provided a critical appraisal of the literature of promising techniques ranging from the associated architectures, networking, applications as well as designs. We have portrayed a plethora of heterogeneous architectures relying on cooperative hybrid networks supported by diverse access and transmission mechanisms. The vulnerabilities of these techniques are also addressed and carefully considered for highlighting the most of promising future research directions. Additionally, we have listed a rich suite of learning-driven optimization techniques. We conclude by observing the evolutionary paradigm-shift that has taken place from pure single-component bandwidth-efficiency, power-efficiency or delay-optimization towards multi-component designs, as exemplified by the twin-component ultra-reliable low-latency mode of the 5G system. We advocate a further evolutionary step towards multi-component Pareto optimization, which requires the exploration of the entire Pareto front of all optiomal solutions, where none of the components of the objective function may be improved without degrading at least one of the other components

    Beyond 5G Networks: Integration of Communication, Computing, Caching, and Control

    In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends in both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resource integration. Then, we discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we present open challenges and future research directions for beyond-5G networks, such as 6G. (This article has been accepted for a future issue of the China Communications journal in IEEE Xplore.)
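    As a minimal illustration of the caching component of the 4C models (our example, not the survey's), the expected edge-cache hit ratio when the most popular contents are stored can be computed directly under a Zipf popularity law; the exponent and catalog size below are assumed values.

```python
import numpy as np

def zipf_hit_ratio(n_items, cache_size, alpha=0.8):
    """Expected hit ratio of an edge cache that stores the `cache_size`
    most popular of `n_items` contents, with request popularity
    following a Zipf law with exponent `alpha`."""
    ranks = np.arange(1, n_items + 1)
    pop = ranks ** -alpha
    pop /= pop.sum()
    return pop[:cache_size].sum()

# Caching 10% of a 10,000-item catalog already serves ~57% of requests here.
print(round(zipf_hit_ratio(10_000, 1_000, alpha=0.8), 2))
```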

    A survey of multi-access edge computing in 5G and beyond : fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending it to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. MEC therefore enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works, and discuss challenges and potential future directions for MEC research.
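    The local-versus-edge trade-off that motivates MEC can be sketched with a back-of-the-envelope latency comparison: uplink transfer at the Shannon rate plus edge processing, against purely local execution. The cost model and all numbers below are illustrative assumptions, not figures from the survey.

```python
import math

def offload_latency(task_bits, cycles_per_bit, f_local_hz, f_edge_hz,
                    bandwidth_hz, snr):
    """Compare local execution time against offloading time
    (uplink transfer at the Shannon rate + edge processing).
    Result-return time is ignored, as results are typically small."""
    cycles = task_bits * cycles_per_bit
    t_local = cycles / f_local_hz
    rate = bandwidth_hz * math.log2(1 + snr)        # achievable bits/s
    t_edge = task_bits / rate + cycles / f_edge_hz
    return t_local, t_edge

# 4 MB task, 1000 cycles/bit, 1 GHz device vs 20 GHz edge server,
# 20 MHz channel at 20 dB SNR (all numbers are illustrative).
t_local, t_edge = offload_latency(4 * 8e6, 1000, 1e9, 20e9, 20e6, 100)
print(f"local {t_local:.2f}s  vs  offload {t_edge:.2f}s")
```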

    Towards a Decentralized Metaverse: Synchronized Orchestration of Digital Twins and Sub-Metaverses

    Accommodating digital twins (DTs) in the metaverse is essential to achieving digital reality. This need for integrating DTs into the metaverse while operating them at the network edge has increased the demand for a decentralized edge-enabled metaverse. Hence, to consolidate the fusion between real and digital entities, it is necessary to harmonize the interoperability between DTs and the metaverse at the edge. In this paper, a novel decentralized metaverse framework that incorporates DT operations at the wireless edge is presented. In particular, a system of autonomous physical twins (PTs) operating in a massively-sensed zone is replicated as cyber twins (CTs) at the mobile edge computing (MEC) servers. To render the CTs' digital environment, this zone is partitioned and teleported as distributed sub-metaverses to the MEC servers. To guarantee seamless synchronization of the sub-metaverses and their associated CTs with the dynamics of the real world and PTs, respectively, this joint synchronization problem is posed as an optimization problem whose goal is to minimize the average sub-synchronization time between the real and digital worlds, while meeting the DT synchronization intensity requirements. To solve this problem, a novel iterative algorithm for joint sub-metaverse and DT association at the MEC servers is proposed. This algorithm exploits the rigorous framework of optimal transport theory so as to efficiently distribute the sub-metaverses and DTs, while considering the computing and communication resource allocations. Simulation results show that the proposed solution can orchestrate the interplay between DTs and sub-metaverses to achieve a 25.75% reduction in the sub-synchronization time in comparison to the signal-to-noise ratio-based association scheme.
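    The paper's joint association algorithm is not reproduced in the abstract; as a simplified, hedged stand-in, a one-to-one sub-metaverse-to-server association minimizing total synchronization time can be computed with SciPy's Hungarian solver. The cost model below (CT compute load over server capacity plus state-update traffic over link rate) is our assumption, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n = 5  # sub-metaverses and MEC servers (one-to-one for simplicity)

# Assumed cost model: sync time = CT compute load / server capacity
# plus state-update traffic / link rate between zone and server.
load = rng.uniform(1, 5, n)            # Gcycles per sync round, per zone
cap = rng.uniform(5, 20, n)            # server capacity, Gcycles/s
traffic = rng.uniform(10, 50, (n, n))  # Mb per round, zone i -> server j
rate = rng.uniform(50, 200, (n, n))    # Mb/s, zone i -> server j

cost = load[:, None] / cap[None, :] + traffic / rate  # seconds, shape (n, n)
rows, cols = linear_sum_assignment(cost)              # Hungarian algorithm
print(list(zip(rows, cols)), cost[rows, cols].mean())
```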