4 research outputs found

    Hyperprofile-based Computation Offloading for Mobile Edge Networks

    In recent studies, researchers have developed various computation offloading frameworks for bringing cloud services closer to the user via edge networks. Specifically, an edge device needs to offload computationally intensive tasks because of energy and processing constraints. These constraints present the challenge of identifying which edge nodes should receive tasks to reduce overall resource consumption. We propose a unique solution to this problem which incorporates elements from Knowledge-Defined Networking (KDN) to make intelligent predictions about offloading costs based on historical data. Each server instance can be represented in a multidimensional feature space where each dimension corresponds to a predicted metric. We compute features for a "hyperprofile" and position nodes based on the predicted costs of offloading a particular task. We then perform a k-Nearest Neighbor (kNN) query within the hyperprofile to select nodes for offloading computation. This paper formalizes our hyperprofile-based solution and explores the viability of using machine learning (ML) techniques to predict metrics useful for computation offloading. We also investigate the effects of using different distance metrics for the queries. Our results show various network metrics can be modeled accurately with regression, and there are circumstances where kNN queries using Euclidean distance are more favorable than those using rectilinear distance. Comment: 5 pages, NSF REU Site publicatio
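The hyperprofile idea above can be sketched in a few lines: each edge node becomes a point whose coordinates are predicted per-metric offloading costs, and a kNN query under either Euclidean or rectilinear (Manhattan) distance picks the targets. The node names, feature values, and the choice of the origin as the "zero-cost" query point are all illustrative assumptions, not details from the paper.

```python
import math

# Hypothetical predicted-cost features per edge node (values are made up);
# each dimension is one predicted metric, e.g. latency, energy, load.
nodes = {
    "edge-a": (12.0, 0.8, 0.30),
    "edge-b": (25.0, 0.5, 0.10),
    "edge-c": (9.0, 1.1, 0.70),
    "edge-d": (18.0, 0.6, 0.20),
}

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def rectilinear(p, q):
    # Manhattan / L1 distance
    return sum(abs(a - b) for a, b in zip(p, q))

def knn(query, k, dist):
    """Return the k nodes whose hyperprofile points lie closest to `query`."""
    return sorted(nodes, key=lambda n: dist(query, nodes[n]))[:k]

# The ideal offload target sits at the origin: zero predicted cost in every metric.
origin = (0.0, 0.0, 0.0)
best_euclidean = knn(origin, 2, euclidean)
best_rectilinear = knn(origin, 2, rectilinear)
```

With unscaled features like these, one dimension can dominate both metrics; in practice the features would be normalized before querying, which is also where the two distance metrics start to rank nodes differently.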

    Deep Learning Empowered Task Offloading for Mobile Edge Computing in Urban Informatics

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Led by the industrialization of smart cities, numerous interconnected mobile devices and novel applications have emerged in the urban environment, providing great opportunities to realize industrial automation. In this context, autonomous driving is an attractive application, which leverages large amounts of sensory information for smart navigation while posing intensive computation demands on resource-constrained vehicles. Mobile edge computing (MEC) is a potential solution to alleviate the heavy burden on the devices. However, varying states of multiple edge servers as well as a variety of vehicular offloading modes make efficient task offloading a challenge. To cope with this challenge, we adopt a deep Q-learning approach for designing optimal offloading schemes, jointly considering selection of target server and determination of data transmission mode. Furthermore, we propose an efficient redundant offloading
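The joint decision described above (which server, which transmission mode) can be illustrated with a deliberately simplified stand-in: a tabular Q-learner over the joint action space, rather than the deep Q-network the paper uses. The server names, transmission modes, states, and reward numbers below are all invented for the sketch.

```python
import random

random.seed(0)  # deterministic toy run

# Illustrative problem: pick a (server, transmission mode) pair per task.
SERVERS = ["mec-1", "mec-2"]      # hypothetical edge servers
MODES = ["v2i", "v2v"]            # hypothetical transmission modes
ACTIONS = [(s, m) for s in SERVERS for m in MODES]
STATES = ["low", "high"]          # coarse load state of the environment

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Toy cost model: a loaded server or a weaker link yields less reward.
    server, mode = action
    r = 1.0
    if state == "high" and server == "mec-1":
        r -= 0.8
    if mode == "v2v":
        r -= 0.2
    return r

def step(state):
    # Epsilon-greedy action selection, then a standard Q-learning update.
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state = random.choice(STATES)
    target = reward(state, action) + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    return next_state

state = "low"
for _ in range(5000):
    state = step(state)

# After training, the learned policy avoids the loaded server and the weak link.
best_when_loaded = max(ACTIONS, key=lambda a: Q[("high", a)])
```

A deep Q-network replaces the `Q` table with a neural network so the state can carry rich, continuous information (server queue lengths, channel quality), but the update rule has the same shape.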

    Computation offloading for fast and energy-efficient edge computing

    In recent years, the demand for computing power has increased considerably due to the popularity of applications that involve computationally intensive tasks such as machine learning or computer vision. At the same time, users increasingly run such applications on smartphones or wearables, which have limited computational power. The research community has proposed computation offloading to meet the demand for computing power. Resource-constrained devices offload workload to remote resource providers. These providers perform the computations and return the results via the network. Computation offloading has two major benefits. First, it accelerates the execution of computationally intensive tasks and therefore reduces waiting times. Second, it decreases the energy consumption of the offloading device, which is especially attractive for devices that run on battery. After years in which cloud servers were the primary resource providers, computation offloading in edge computing systems is currently gaining popularity. Edge-based systems leverage end-user devices such as smartphones, laptops, or desktop PCs instead of cloud servers as computational resource providers. Computation offloading in such environments leads to lower latencies, better utilization of end-user devices, and lower costs in comparison to traditional cloud computing. In this thesis, we present a computation offloading approach for fast and energy-efficient edge computing. We build upon the Tasklet system – a middleware-based computation offloading system. The Tasklet system allows devices to offload heterogeneous tasks to heterogeneous providers. We address three challenges of computation offloading in the edge. First, many applications are data-intensive, which necessitates a time-consuming transfer of input data ahead of a remote execution. To overcome this challenge, we introduce DataVinci – an approach that proactively places input data on suitable devices to accelerate task execution. 
DataVinci additionally offers task placement strategies that exploit data locality. Second, modern applications are often user-facing and interactive. They require sub-second execution of computationally intensive tasks to ensure a smooth user experience. We design the decentralized scheduling approach DecArt for such applications. Third, deciding whether a local or remote execution of an upcoming task will consume less energy is non-trivial. This decision is particularly challenging as task complexity and result data size vary across executions, even if the source code is similar. We introduce the energy-aware scheduling approach Voltaire, which uses machine learning and device-specific energy profiles to make precise offloading decisions. We integrate DataVinci, DecArt, and Voltaire into the Tasklet system and evaluate the benefits in extensive experiments.
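The energy-aware decision described for Voltaire can be sketched as follows: predict the energy of a local execution from a device-specific profile, predict the energy of a remote execution from the data transfer and idle-wait costs, and offload only when the remote estimate is lower. The profile fields and all coefficients below are invented for illustration; the thesis learns such predictions with machine learning rather than fixed constants.

```python
# Hypothetical device-specific energy profile (all numbers are made up).
DEVICE_PROFILE = {
    "joules_per_megacycle": 0.004,  # CPU energy for local execution
    "joules_per_mb_tx": 0.10,       # radio energy to send input data
    "joules_per_mb_rx": 0.05,       # radio energy to receive results
    "idle_joules_per_s": 0.02,      # energy while waiting for the result
}

def local_energy(task_megacycles):
    """Predicted energy of running the task on the device itself."""
    return task_megacycles * DEVICE_PROFILE["joules_per_megacycle"]

def remote_energy(input_mb, result_mb, expected_remote_s):
    """Predicted energy of offloading: transfer costs plus idle waiting."""
    p = DEVICE_PROFILE
    return (input_mb * p["joules_per_mb_tx"]
            + result_mb * p["joules_per_mb_rx"]
            + expected_remote_s * p["idle_joules_per_s"])

def should_offload(task_megacycles, input_mb, result_mb, expected_remote_s):
    """Offload only if the predicted remote energy is lower than local."""
    return (remote_energy(input_mb, result_mb, expected_remote_s)
            < local_energy(task_megacycles))
```

Under this toy model, a compute-heavy task with little input data favors offloading, while a light task with a large input does not, which is exactly the trade-off the abstract says varies across executions.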