
    Multi-Layer Cloud-RAN With Cooperative Resource Allocations for Low-Latency Computing and Communication Services

    To improve low-latency computing and communication services, this paper designs a new type of mobile edge computing architecture named multi-layer cloud radio access network (Multi-layer CRAN). In Multi-layer CRAN, a high-level edge cloud is deployed next to the baseband unit pool to handle the computing tasks of user equipment (UE) in a centralized way, while a low-level edge cloud is deployed in each remote radio head (RRH) to handle UEs' computing tasks locally in a distributed way. Building on Multi-layer CRAN, a cooperative communication and computation resource allocation (3C-RA) algorithm is further designed for lower service latency and energy cost and higher network throughput. 3C-RA uses a distributed RRH cell coloring algorithm so that each RRH can work out its resource allocation efficiently and in a distributed way, and it employs a proportional fairness-based approach to allocate communication and computation resources within each RRH cell. A series of simulations on Multi-layer CRAN with 3C-RA was carried out. The simulation results validate that Multi-layer CRAN is more capable of providing low-latency computing and communication services, and that 3C-RA gives Multi-layer CRAN lower service latency and energy cost and higher network throughput.
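    The abstract names a proportional fairness-based allocation inside each RRH cell. As a rough illustration only, the sketch below implements a classic proportional-fair scheduler (instantaneous rate divided by a running average rate); the UE count, channel model, and averaging window are assumptions made for the example and are not taken from the paper.

```python
# Minimal proportional-fairness (PF) scheduling sketch, illustrating the kind of
# per-cell allocation the abstract describes. The channel model, UE count, and
# averaging window are illustrative assumptions, not parameters from the paper.
import random

NUM_UES = 4          # UEs served by one RRH cell (assumed)
NUM_SLOTS = 1000     # scheduling slots to simulate (assumed)
EWMA_WINDOW = 100.0  # time constant of the average-rate filter (assumed)

avg_rate = [1e-3] * NUM_UES  # small positive init avoids division by zero

for _ in range(NUM_SLOTS):
    # Instantaneous achievable rate per UE (random stand-in for channel state).
    inst_rate = [random.uniform(0.1, 1.0) for _ in range(NUM_UES)]

    # PF metric: instantaneous rate divided by long-term average rate.
    pf_metric = [inst_rate[u] / avg_rate[u] for u in range(NUM_UES)]
    scheduled = max(range(NUM_UES), key=lambda u: pf_metric[u])

    # Update exponentially weighted average rates; only the scheduled UE
    # actually receives its instantaneous rate this slot.
    for u in range(NUM_UES):
        served = inst_rate[u] if u == scheduled else 0.0
        avg_rate[u] += (served - avg_rate[u]) / EWMA_WINDOW

print("long-term average rates per UE:", [round(r, 3) for r in avg_rate])
```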

    Mobile Edge Computing for Future Internet-of-Things

    University of Technology Sydney, Faculty of Engineering and Information Technology.
    Integrating sensors, the Internet, and wireless systems, the Internet-of-Things (IoT) provides a new paradigm of ubiquitous connectivity and pervasive intelligence. The key enabling technology underlying IoT is mobile edge computing (MEC), which is anticipated to realize and reap the promising benefits of IoT applications by placing cloud resources, such as computing and storage, closer to smart devices and objects. The challenges of designing efficient and scalable MEC platforms for future IoT arise from the physical limitations of the computing and battery resources of IoT devices, the heterogeneity of computing and wireless communication capabilities across IoT networks, the large volume of data arrivals and massive number of connections, and large-scale data storage and delivery across the edge network. To address these challenges, this thesis proposes four efficient and scalable task offloading and cooperative caching approaches. Firstly, for the multi-user single-cell MEC scenario, the base station (BS) can only have outdated knowledge of IoT device channel conditions due to the time-varying nature of practical wireless channels. To this end, a hybrid learning approach is proposed to optimize the real-time local processing and predictive computation offloading decisions in a distributed manner. Secondly, for the multi-user multi-cell MEC scenario, an energy-efficient resource management approach is developed based on distributed online learning to tackle the heterogeneity of the computing and wireless transmission capabilities of edge servers and IoT devices. The proposed approach optimizes the decisions on task offloading, processing, and result delivery between edge servers and IoT devices to minimize the time-average energy consumption of MEC. Thirdly, for computing resource allocation in large-scale networks, a distributed online collaborative computing approach based on Lyapunov optimization is proposed for data analysis in IoT applications, minimizing the time-average energy consumption of the network. Finally, for storage resource allocation in large-scale networks, a distributed IoT data delivery approach based on online learning is proposed for caching in mobile applications. A new profitable cooperative region is established for every IoT data request admitted at an edge server, to avoid invalid request dispatching.
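    The third contribution relies on Lyapunov optimization to keep time-average energy low while serving a task backlog. The sketch below shows a generic drift-plus-penalty decision rule of that flavor; the action set, energy costs, arrival process, and trade-off weight V are invented for illustration and do not come from the thesis.

```python
# A minimal drift-plus-penalty sketch in the spirit of Lyapunov-based online
# offloading described in the thesis abstract. Action set, energy costs, and
# arrival process are illustrative assumptions, not taken from the thesis.
import random

V = 10.0            # energy/backlog trade-off weight (assumed)
NUM_SLOTS = 500

# Candidate actions per slot: (label, bits served, energy cost) - assumed values.
ACTIONS = [
    ("idle",          0.0, 0.0),
    ("local_compute", 2.0, 1.5),
    ("offload",       4.0, 2.5),
]

queue = 0.0          # backlog of unprocessed task bits
total_energy = 0.0

for _ in range(NUM_SLOTS):
    arrivals = random.uniform(0.0, 4.0)  # new task bits this slot

    # Drift-plus-penalty rule: pick the action minimizing V*energy - Q*service.
    _, served, energy = min(ACTIONS, key=lambda a: V * a[2] - queue * a[1])

    total_energy += energy
    queue = max(queue + arrivals - served, 0.0)

print(f"avg energy/slot = {total_energy / NUM_SLOTS:.3f}, final backlog = {queue:.2f}")
```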

    Resource Management in Mobile Edge Computing for Compute-intensive Application

    With current and future mobile applications (e.g., healthcare, connected vehicles, and smart grids) becoming increasingly compute-intensive for many mission-critical use cases, the energy and computing capacities of embedded mobile devices are proving insufficient to handle all in-device computation. To address the energy and computing shortages of mobile devices, mobile edge computing (MEC) has emerged as a major distributed computing paradigm. Compared to traditional cloud-based computing, MEC integrates network control, distributed computing, and storage to deliver customizable, fast, reliable, and secure edge services that are closer to the user and data sites. However, the diversity of applications and the variety of user-specified requirements (viz., latency, scalability, availability, and reliability) add further complications to the system and application optimization problems in terms of resource management. In this dissertation, we aim to develop the customized and intelligent placement and provisioning strategies needed to handle edge resource management problems for several challenging use cases. i) Firstly, we propose an energy-efficient framework to address the resource allocation problem of generic compute-intensive applications, such as Directed Acyclic Graph (DAG) based applications. We design partial task offloading and server selection strategies with the purpose of minimizing the transmission cost. Our experiment and simulation results indicate that partial task offloading provides considerable energy savings, especially for resource-constrained edge systems. ii) Secondly, to address the dynamism of edge environments, we propose solutions that integrate Dynamic Spectrum Access (DSA) and Cooperative Spectrum Sensing (CSS) with fine-grained task offloading schemes, and we show the high efficiency of the proposed strategy in capturing dynamic channel states and enforcing intelligent channel sensing and task offloading decisions. iii) Finally, application-specific long-term optimization frameworks are proposed for two representative applications: a) multi-view 3D reconstruction and b) Deep Neural Network (DNN) inference. To eliminate redundant and unnecessary reconstruction processing, we introduce key-frame and resolution selection incorporated with task assignment, quality prediction, and pipeline parallelization; the proposed framework provides a flexible balance between reconstruction time and quality satisfaction. For DNN inference, a joint resource allocation and DNN partitioning framework is proposed. The outcomes of this research seek to benefit the future distributed computing, smart applications, and data-intensive science communities in building effective, efficient, and robust MEC environments.
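    To make the partial-offloading idea concrete, here is a toy per-task decision over a small DAG-structured application: each task runs locally or at the edge depending on whether uploading its input costs less energy than computing it on the device. The task graph, energy coefficients, and the greedy rule itself are illustrative assumptions, not the dissertation's actual strategy.

```python
# A toy per-task offloading decision for a DAG-structured application,
# illustrating the partial-offloading idea in the abstract. The task graph,
# energy model, and parameters are illustrative assumptions only.
from collections import deque

# Example DAG: task -> successors (assumed pipeline-style application).
successors = {"sense": ["filter"], "filter": ["detect"], "detect": ["report"], "report": []}
cpu_cycles = {"sense": 1e7, "filter": 8e7, "detect": 3e8, "report": 5e6}   # per-task load
input_bits = {"sense": 3e5, "filter": 2e5, "detect": 1e5, "report": 1e4}   # data to upload

LOCAL_J_PER_CYCLE = 1e-9   # device energy per CPU cycle (assumed)
TX_J_PER_BIT      = 2e-7   # uplink energy per transmitted bit (assumed)

def topo_order(succ):
    """Return tasks in dependency order (Kahn's algorithm)."""
    indeg = {t: 0 for t in succ}
    for t in succ:
        for s in succ[t]:
            indeg[s] += 1
    order, ready = [], deque(t for t in succ if indeg[t] == 0)
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

# Greedy rule: offload a task when uploading its input costs less energy
# than executing it on the device.
placement = {}
for task in topo_order(successors):
    local_energy = cpu_cycles[task] * LOCAL_J_PER_CYCLE
    upload_energy = input_bits[task] * TX_J_PER_BIT
    placement[task] = "edge" if upload_energy < local_energy else "device"

print(placement)
```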

    Joint Task Assignment and Wireless Resource Allocation for Cooperative Mobile-Edge Computing

    This paper studies a multi-user cooperative mobile-edge computing (MEC) system, in which a local mobile user can offload intensive computation tasks to multiple nearby edge devices serving as helpers for remote execution. We focus on the scenario where the local user has a number of independent tasks that can be executed in parallel but cannot be further partitioned. We consider a time division multiple access (TDMA) communication protocol, in which the local user can offload computation tasks to the helpers and download results from them over pre-scheduled time slots. Under this setup, we minimize the local user's computation latency by optimizing the task assignment jointly with the time and power allocations, subject to individual energy constraints at the local user and the helpers. However, the joint task assignment and wireless resource allocation problem is a mixed-integer non-linear program (MINLP) that is hard to solve optimally. To tackle this challenge, we first relax it into a convex problem, and then propose an efficient suboptimal solution based on the optimal solution to the relaxed convex problem. Finally, numerical results show that our proposed joint design significantly reduces the local user's computation latency, as compared against other benchmark schemes that design the task assignment separately from the offloading/downloading resource allocations and local execution.
    Comment: 6 pages, 4 figures, accepted by IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 201
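    The paper's approach relaxes the mixed-integer program and derives a suboptimal assignment from the relaxed solution. The sketch below mirrors that relax-and-round pattern on a stripped-down version of the problem (task assignment only, minimizing the per-helper makespan); the task times, helper count, and the use of scipy.optimize.linprog are assumptions of the example, and the paper's joint time-slot and power optimization is omitted.

```python
# Toy relax-and-round illustration: relax binary task-assignment variables,
# solve the continuous problem, then round. Task times and helper count are
# made up; the paper's TDMA time and power allocation is not modeled here.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N_TASKS, N_HELPERS = 6, 3
t = rng.uniform(1.0, 5.0, size=(N_TASKS, N_HELPERS))  # task i's time on helper j (assumed)

# Variables: x[i, j] flattened row-major, followed by the makespan T.
n_x = N_TASKS * N_HELPERS
c = np.zeros(n_x + 1)
c[-1] = 1.0  # minimize T

# Per-helper load must not exceed T:  sum_i t[i,j] * x[i,j] - T <= 0.
A_ub = np.zeros((N_HELPERS, n_x + 1))
for j in range(N_HELPERS):
    for i in range(N_TASKS):
        A_ub[j, i * N_HELPERS + j] = t[i, j]
    A_ub[j, -1] = -1.0
b_ub = np.zeros(N_HELPERS)

# Each task is assigned exactly once:  sum_j x[i,j] = 1.
A_eq = np.zeros((N_TASKS, n_x + 1))
for i in range(N_TASKS):
    A_eq[i, i * N_HELPERS:(i + 1) * N_HELPERS] = 1.0
b_eq = np.ones(N_TASKS)

bounds = [(0.0, 1.0)] * n_x + [(0.0, None)]
relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")

# Round: give each task to the helper holding its largest fractional share.
x_frac = relaxed.x[:n_x].reshape(N_TASKS, N_HELPERS)
assignment = x_frac.argmax(axis=1)
makespan = max(t[assignment == j, j].sum() for j in range(N_HELPERS))

print("relaxed lower bound:", round(relaxed.x[-1], 3))
print("rounded assignment :", assignment, "-> makespan", round(makespan, 3))
```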

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to varying degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.
    Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal