
    Socially Trusted Collaborative Edge Computing in Ultra Dense Networks

    Small cell base stations (SBSs) endowed with cloud-like computing capabilities are considered a key enabler of edge computing (EC), which provides ultra-low latency and location awareness for a variety of emerging mobile applications and the Internet of Things. However, because the computation resources of an individual SBS are limited, providing high-quality computation services to its users becomes a significant challenge when the SBS is overloaded with an excessive amount of computation workload. In this paper, we propose collaborative edge computing among SBSs, which form coalitions to share computation resources with each other, thereby accommodating more computation workload in the edge system and reducing reliance on the remote cloud. A novel SBS coalition formation algorithm is developed based on coalitional game theory to cope with various new challenges in small-cell-based edge systems, including the co-provisioning of radio access and computing services, cooperation incentives, and potential security risks. To address these challenges, the proposed method (1) allows collaboration at both the user-SBS association stage and the SBS peer offloading stage by exploiting the ultra dense deployment of SBSs, (2) develops a payment-based incentive mechanism that implements a proportionally fair utility division to form stable SBS coalitions, and (3) builds a social trust network for managing the security risks that collaboration introduces among SBSs. Systematic simulations in practical scenarios are carried out to evaluate the efficacy and performance of the proposed method, showing that a tremendous improvement in edge computing performance can be achieved. Comment: arXiv admin note: text overlap with arXiv:1010.4501 by other authors
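
    The abstract above describes a coalitional-game-based formation algorithm with a payment-based, proportionally fair utility division and a social trust network. As a purely illustrative sketch (the paper's actual utility model, trust model, and stability notion are not reproduced here), a greedy merge loop with a hypothetical capacity-limited coalition value and a trust-threshold check might look as follows; all function names and parameters below are assumptions.

    from itertools import combinations

    def coalition_value(coalition, workload, capacity):
        """Hypothetical coalition utility: served workload capped by the pooled capacity."""
        demand = sum(workload[s] for s in coalition)
        supply = sum(capacity[s] for s in coalition)
        return min(demand, supply)

    def proportional_split(coalition, value, workload):
        """Divide the coalition utility proportionally to each member's own workload."""
        total = sum(workload[s] for s in coalition) or 1.0
        return {s: value * workload[s] / total for s in coalition}

    def form_coalitions(sbss, workload, capacity, trust, trust_min=0.5):
        """Greedily merge two coalitions whenever the merge raises total utility and
        every cross-coalition trust score exceeds trust_min."""
        coalitions = [frozenset([s]) for s in sbss]
        improved = True
        while improved:
            improved = False
            for a, b in combinations(coalitions, 2):
                if any(trust[i][j] < trust_min for i in a for j in b):
                    continue  # the social trust constraint blocks this merge
                merged = a | b
                gain = (coalition_value(merged, workload, capacity)
                        - coalition_value(a, workload, capacity)
                        - coalition_value(b, workload, capacity))
                if gain > 1e-9:
                    coalitions = [c for c in coalitions if c not in (a, b)] + [merged]
                    improved = True
                    break
        payoffs = {}
        for c in coalitions:
            payoffs.update(proportional_split(c, coalition_value(c, workload, capacity), workload))
        return coalitions, payoffs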

    POEM: Pricing Longer for Edge Computing in the Device Cloud

    Multiple access mobile edge computing has been proposed as a promising technology to bring computation services close to end users by making good use of edge cloud servers. In mobile device clouds (MDCs), idle end devices may act as edge servers and offer computation services to busy end devices. Most existing auction-based incentive mechanisms in MDCs focus on a single auction round and ignore the time correlation across rounds. Moreover, although existing single-round auctions can be run repeatedly, users must then bid higher to obtain more resources in the cascading rounds, so their budgets run out too early to participate in later auctions, leading to auction failures and a loss of overall benefit. In this paper, we formulate the computation offloading problem as a social welfare optimization problem under given budgets of mobile devices, and consider longer-term pricing for mobile devices. This problem is a multiple-choice multi-dimensional 0-1 knapsack problem, which is NP-hard. We propose an auction framework named MAFL that targets long-term benefits and runs a single-round resource auction in each round. Extensive simulation results show that the proposed auction mechanism outperforms the single-round auction by about 55.6% in average revenue, and that MAFL outperforms an existing double auction by about 68.6% in terms of revenue. Comment: 8 pages, 1 figure, accepted by the 18th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP)
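
    Since the abstract casts winner determination as a multiple-choice multi-dimensional 0-1 knapsack problem with per-device budgets, a simple way to illustrate the budget-depletion issue across rounds is a greedy heuristic like the one sketched below. This is not MAFL's actual allocation or pricing rule; the Bid record, run_round function, and efficiency metric are all hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Bid:
        buyer: str      # busy device requesting computation
        price: float    # offered payment for this resource bundle
        demand: tuple   # demanded amount per resource dimension, e.g. (cpu, memory)

    def run_round(bids, capacity, budgets):
        """Greedily allocate bundles by price per unit of demanded resource, respecting
        the edge servers' capacity vector and each buyer's remaining budget.
        budgets maps every buyer to its remaining budget and is mutated across rounds."""
        remaining = list(capacity)
        winners, revenue = [], 0.0
        for bid in sorted(bids, key=lambda b: b.price / (sum(b.demand) or 1), reverse=True):
            if budgets[bid.buyer] < bid.price:
                continue  # the buyer can no longer afford this bundle
            if all(d <= r for d, r in zip(bid.demand, remaining)):
                remaining = [r - d for r, d in zip(remaining, bid.demand)]
                budgets[bid.buyer] -= bid.price
                winners.append(bid)
                revenue += bid.price
        return winners, revenue

    # Running run_round repeatedly with the same budgets dictionary illustrates how
    # budget depletion in early rounds depresses revenue in later rounds.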

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to use resources efficiently across all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to various degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
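
    To make the four classification perspectives concrete, the sketch below encodes them as a small record type so that reviewed works could be tagged and gap counts (e.g. how few works treat energy or storage as a resource) computed programmatically. The class and enum names are hypothetical; only the category labels are taken from the abstract.

    from dataclasses import dataclass
    from enum import Enum

    class ResourceType(Enum):
        COMPUTATION = "computation"
        COMMUNICATION = "communication"
        DATA = "data"
        STORAGE = "storage"
        ENERGY = "energy"

    class Objective(Enum):
        ESTIMATION = "estimation"
        DISCOVERY = "discovery"
        SHARING = "sharing"
        ALLOCATION = "allocation"  # assumed further objective; not listed in the abstract

    class Location(Enum):
        END_DEVICE = "end device"
        EDGE_DEVICE = "edge device"
        CLOUD = "cloud"

    @dataclass
    class SurveyedWork:
        title: str
        resource_type: ResourceType
        objective: Objective
        location: Location
        resource_use: str  # the fourth perspective ("resource use"), kept as free text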

    Exploiting Non-Causal CPU-State Information for Energy-Efficient Mobile Cooperative Computing

    Scavenging the idle computation resources of the enormous number of mobile devices can provide a powerful platform for local mobile cloud computing. This vision can be realized by peer-to-peer cooperative computing between edge devices, referred to as co-computing. This paper considers a co-computing system in which a user offloads the computation of input data to a helper. The helper controls the offloading process with the objective of minimizing the user's energy consumption, based on a predicted CPU-idling profile of the helper that specifies the amount of computation resource available for co-computing. Consider the scenario in which the user has a one-shot input-data arrival and the helper buffers the offloaded bits. The energy-efficient co-computing problem is formulated as two sub-problems: a slave problem corresponding to adaptive offloading and a master problem corresponding to data partitioning. Given a fixed offloaded data size, adaptive offloading aims at minimizing the energy consumed for offloading by controlling the offloading rate under the deadline and buffer constraints. By deriving the necessary and sufficient conditions for the optimal solution, we characterize the structure of the optimal policies and propose algorithms for computing them. Furthermore, we show that the problem of optimally partitioning data between offloading and local computing at the user is convex, admitting a simple solution using the sub-gradient method. Last, the developed design approach for co-computing is extended to the scenario of bursty data arrivals at the user, accounting for data causality constraints. Simulation results verify the effectiveness of the proposed algorithms. Comment: Submitted to a possible journal
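
    For the master data-partitioning sub-problem, which the abstract states is convex and solvable with the sub-gradient method, a minimal one-dimensional sketch is shown below; the energy models are hypothetical convex placeholders rather than the paper's offloading and local-computing formulas.

    def minimize_partition(e_offload, e_local, total_bits, steps=2000, lr=1e-3, eps=1e-6):
        """Minimize e_offload(x) + e_local(total_bits - x) over x in [0, total_bits]
        using numerical sub-gradients and projection onto the feasible interval."""
        def subgrad(f, x, h=1e-4):
            return (f(x + h) - f(x - h)) / (2 * h)  # central-difference surrogate

        x = total_bits / 2.0
        for _ in range(steps):
            g = subgrad(e_offload, x) - subgrad(e_local, total_bits - x)
            x_new = min(max(x - lr * g, 0.0), total_bits)  # project onto [0, total_bits]
            if abs(x_new - x) < eps:
                break
            x = x_new
        return x

    # Toy convex models: offloading energy grows quadratically with the offloaded size,
    # local-computing energy cubically with the locally processed size.
    best_offloaded_bits = minimize_partition(lambda x: 0.5 * x**2,
                                             lambda y: 0.1 * y**3,
                                             total_bits=10.0)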