
    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to varying degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current work within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
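    As a rough, hypothetical illustration of how such a four-perspective taxonomy could be used to tag surveyed works (this sketch is not an artifact of the paper; the enum values are abridged from the perspectives and resource types named in the abstract, and the example work is invented):

```python
from dataclasses import dataclass
from enum import Enum


class ResourceType(Enum):
    COMPUTATION = "computation"
    COMMUNICATION = "communication"
    DATA = "data"
    STORAGE = "storage"
    ENERGY = "energy"


class Objective(Enum):
    ALLOCATION = "allocation"
    ESTIMATION = "estimation"
    DISCOVERY = "discovery"
    SHARING = "sharing"


class Location(Enum):
    END_DEVICE = "end device"
    EDGE_DEVICE = "edge device"
    CLOUD = "cloud"


@dataclass
class SurveyedWork:
    """One reviewed article, tagged along the four taxonomy perspectives."""
    title: str
    resource_types: list[ResourceType]
    objectives: list[Objective]
    locations: list[Location]
    resource_use: str  # free-text note on how the resource is used


# Hypothetical entry; counting coverage per resource type is one way gaps become visible.
works = [
    SurveyedWork(
        title="Example offloading scheme",
        resource_types=[ResourceType.COMPUTATION, ResourceType.COMMUNICATION],
        objectives=[Objective.ALLOCATION],
        locations=[Location.EDGE_DEVICE, Location.CLOUD],
        resource_use="task offloading",
    )
]
coverage = {rt.value: sum(rt in w.resource_types for w in works) for rt in ResourceType}
print(coverage)
```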

    Multi-Layer Latency Aware Workload Assignment of E-Transport IoT Applications in Mobile Sensors Cloudlet Cloud Networks

    These days, with the emerging developments in wireless communication technologies, such as 5G and 6G, and in Internet of Things (IoT) sensors, the usage of E-Transport applications has been increasing progressively. These applications, such as E-Bus, E-Taxi, autonomous cars, E-Train and E-Ambulance, are latency-sensitive workloads executed in a distributed cloud network. However, cloudlet-based cloud networks introduce several delays, such as communication delay, round-trip delay and migration delay during workload execution. Moreover, the distributed execution of workloads at different computing nodes during assignment is a challenging task. This paper proposes a novel Multi-Layer Latency (e.g., communication delay, round-trip delay and migration delay) Aware Workload Assignment Strategy (MLAWAS) to allocate the workloads of E-Transport applications to optimal computing nodes. MLAWAS consists of several components, such as Q-Learning-aware assignment and an iterative method, which distribute workloads in a dynamic environment while keeping runtime changes in overloading and overheating under control. Workload migration and VM migration are also part of MLAWAS. The goal is to minimize the average response time of applications. Simulation results demonstrate that MLAWAS achieves the lowest average response time compared with two other existing strategies.
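    As a minimal sketch of the Q-learning flavour of assignment described above (this is not the authors' MLAWAS implementation; the tier names, delay figures, overload probability and reward shaping are invented for illustration), a tabular agent could learn which tier keeps the multi-layer latency lowest:

```python
import random

# Hypothetical computing tiers; all delay figures are made-up illustrative numbers,
# not values from the paper.
NODES = ["mobile sensor", "cloudlet", "cloud"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

# State: whether the system was overloaded in the previous slot (0 or 1).
q_table = {(state, node): 0.0 for state in (0, 1) for node in NODES}


def response_time(node, overloaded):
    """Toy multi-layer latency model: communication + compute + migration delay (ms)."""
    network = {"mobile sensor": 5, "cloudlet": 20, "cloud": 60}[node]
    compute = {"mobile sensor": 80, "cloudlet": 30, "cloud": 10}[node]
    migration = 40 if overloaded else 0          # migrate the workload/VM when overloaded
    return network + compute + migration + random.uniform(0, 5)


def choose_node(state):
    if random.random() < EPSILON:                # explore
        return random.choice(NODES)
    return max(NODES, key=lambda n: q_table[(state, n)])   # exploit best known node


state = 0
for _ in range(5000):
    node = choose_node(state)
    overloaded = random.random() < 0.3           # runtime conditions change dynamically
    reward = -response_time(node, overloaded)    # minimizing average response time
    next_state = int(overloaded)
    best_next = max(q_table[(next_state, n)] for n in NODES)
    q_table[(state, node)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, node)])
    state = next_state

print({k: round(v, 1) for k, v in q_table.items()})
```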

    Collaborative Vehicular Edge Computing Networks: Architecture Design and Research Challenges

    The emergence of augmented reality (AR), autonomous driving and other new applications has greatly enriched the functionality of vehicular networks. However, these applications usually require complex computation and large amounts of storage, which puts tremendous pressure on traditional vehicular networks. Mobile edge computing (MEC) has been proposed as a prospective technique to extend computing and storage resources to the edge of the network. Combined with MEC, the computing and storage capabilities of the vehicular network can be further enhanced. Therefore, in this paper, we explore a novel collaborative vehicular edge computing network (CVECN) architecture. We first review the work related to MEC and vehicular networks. Then we discuss the design principles of CVECN. Based on these principles, we present the detailed CVECN architecture and introduce the corresponding functional modules, communication process, and installation and deployment ideas. Furthermore, the key technical challenges, including collaborative coalition formation, collaborative task offloading and mobility management, are presented, and some potential issues for future research are highlighted. Finally, simulation results verify that the proposed CVECN can significantly improve network performance.
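    To make the collaborative offloading idea concrete, the sketch below (an assumption-laden illustration, not the CVECN scheme itself; node capacities, link rates and task size are made up) picks whichever candidate, the local vehicle, a collaborating neighbour or a roadside MEC server, offers the lowest estimated completion time:

```python
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    cpu_cycles_per_s: float   # available computing capacity
    uplink_mbps: float        # 0 means local execution, no transfer needed


def completion_time(task_cycles, task_bits, node):
    """Transmission time plus execution time when the task runs on `node`."""
    tx = 0.0 if node.uplink_mbps == 0 else task_bits / (node.uplink_mbps * 1e6)
    return tx + task_cycles / node.cpu_cycles_per_s


# Illustrative candidates: the vehicle itself, a collaborating neighbour, an MEC server.
candidates = [
    Node("local vehicle", cpu_cycles_per_s=1e9, uplink_mbps=0),
    Node("neighbour vehicle", cpu_cycles_per_s=2e9, uplink_mbps=27),
    Node("roadside MEC server", cpu_cycles_per_s=8e9, uplink_mbps=54),
]

task_cycles, task_bits = 2e9, 4e6   # made-up task: 2 Gcycles of work, 0.5 MB of input
best = min(candidates, key=lambda n: completion_time(task_cycles, task_bits, n))
print(best.name, round(completion_time(task_cycles, task_bits, best), 3), "s")
```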

    Towards Mobile Edge Computing: Taxonomy, Challenges, Applications and Future Realms

    The realm of cloud computing has revolutionized access to cloud resources and their utilization in applications over the Internet. However, deploying cloud computing for delay-critical applications and reducing the delay in accessing resources are challenging. The Mobile Edge Computing (MEC) paradigm is one of the effective solutions: it brings cloud computing services to the proximity of the edge network and leverages the available resources there. This paper presents a survey of the latest, state-of-the-art algorithms, techniques, and concepts of MEC. The work is unique in that it considers the most recent algorithms, which are not covered by existing surveys. Moreover, the selected literature is classified in terms of performance metrics, describing both the realms where performance is promising and the regions where a margin of improvement exists for future investigation. This also eases the choice of a particular algorithm for a particular application. In contrast to existing surveys, a bibliometric overview is provided, which further helps researchers, engineers, and scientists gain a thorough insight, select applications, and identify directions for improvement. In addition, applications related to the MEC platform are presented. Open research challenges, future directions, and lessons learned in the area of MEC are provided for future investigation.

    EMM: Energy-Aware Mobility Management for Mobile Edge Computing in Ultra Dense Networks

    Merging mobile edge computing (MEC) functionality with the dense deployment of base stations (BSs) provides enormous benefits, such as real-proximity, low-latency access to computing resources. However, the envisioned integration creates many new challenges, among which mobility management (MM) is a critical one. Simply applying existing radio-access-oriented MM schemes leads to poor performance, mainly due to the co-provisioning of radio access and computing services by the MEC-enabled BSs. In this paper, we develop a novel user-centric energy-aware mobility management (EMM) scheme to optimize the delay due to both radio access and computation, under a long-term energy consumption constraint of the user. Based on Lyapunov optimization and multi-armed bandit theories, EMM works in an online fashion without future system state information and effectively handles imperfect system state information. Theoretical analysis explicitly takes radio handover and computation migration costs into consideration and proves a bounded deviation in both delay performance and energy consumption compared to the oracle solution with exact and complete future system information. The proposed algorithm also effectively handles the scenario in which candidate BSs randomly switch on/off during the offloading process of a task. Simulations show that the proposed algorithms can achieve close-to-optimal delay performance while satisfying the user's energy consumption constraint. Comment: 14 pages, 6 figures; an extended version of the paper submitted to IEEE JSAC.
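    The sketch below illustrates the general recipe the abstract describes, a Lyapunov-style virtual energy-deficit queue combined with a UCB-style bandit over candidate BSs. It is not the authors' EMM algorithm; the delay and energy values, the energy budget, and the trade-off weight V are assumptions, and per-BS energy is treated as known here purely to keep the example short.

```python
import math
import random

# Candidate MEC-enabled base stations; the mean delay (s) and energy (J) per task are
# made-up ground-truth values that drive this toy simulation only.
TRUE_DELAY = [0.08, 0.05, 0.12]
TRUE_ENERGY = [0.6, 0.9, 0.4]
ENERGY_BUDGET = 0.7      # allowed long-term average energy per offloaded task
V = 10.0                 # Lyapunov trade-off weight between delay and energy deficit

counts = [0] * len(TRUE_DELAY)
delay_est = [0.0] * len(TRUE_DELAY)
queue = 0.0              # virtual energy-deficit queue


def select_bs(t):
    """Pick the BS minimizing V*delay_estimate + queue*energy, with a UCB exploration bonus."""
    scores = []
    for i in range(len(TRUE_DELAY)):
        if counts[i] == 0:
            return i                                   # try every BS at least once
        bonus = math.sqrt(2 * math.log(t + 1) / counts[i])
        scores.append(V * (delay_est[i] - bonus) + queue * TRUE_ENERGY[i])
    return min(range(len(scores)), key=scores.__getitem__)


for t in range(5000):
    i = select_bs(t)
    delay = random.gauss(TRUE_DELAY[i], 0.01)          # observed offloading delay
    energy = TRUE_ENERGY[i]                            # observed energy consumption
    counts[i] += 1
    delay_est[i] += (delay - delay_est[i]) / counts[i] # running mean of observed delay
    queue = max(queue + energy - ENERGY_BUDGET, 0.0)   # update energy-deficit queue

print("selection counts:", counts, "final deficit queue:", round(queue, 2))
```

    The deficit queue grows whenever a chosen BS spends more energy than the budget allows, which in turn steers later selections toward cheaper BSs, mirroring how a long-term constraint can be enforced without knowledge of the future.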