
    Enabling GPU Support for the COMPSs-Mobile Framework

    Using the GPUs embedded in mobile devices increases the performance of the applications running on them while reducing the energy consumed by their execution. This article presents a task-based solution for adaptive, collaborative heterogeneous computing in mobile cloud environments. To implement our proposal, we extend the COMPSs-Mobile framework – an implementation of the COMPSs programming model for building mobile applications that offload part of the computation to the Cloud – to support offloading computation to GPUs through OpenCL. To evaluate our solution, we subject the prototype to three benchmark applications representing different application patterns. This work is partially supported by the Joint-Laboratory on Extreme Scale Computing (JLESC), by the European Union through the Horizon 2020 research and innovation programme under contract 687584 (TANGO Project), by the Spanish Government (TIN2015-65316-P, BES-2013-067167, EEBB-2016-11272, SEV-2011-00067) and by the Generalitat de Catalunya (2014-SGR-1051). Peer Reviewed. Postprint (author's final draft).
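    The abstract contains no code; purely as an illustration, the sketch below shows the kind of per-task placement decision such a runtime might make when a task can run on the local CPU, on the embedded GPU through OpenCL, or on the Cloud. It is not the COMPSs-Mobile API: the TargetProfile type, the cost weighting, and every number are assumptions.

```python
# Hypothetical sketch of a per-task placement decision among local CPU,
# embedded GPU (OpenCL), and Cloud; not the COMPSs-Mobile API.
from dataclasses import dataclass

@dataclass
class TargetProfile:
    name: str           # "cpu", "gpu" (OpenCL kernel), or "cloud"
    exec_time_s: float  # estimated execution time of the task on this target
    energy_j: float     # estimated energy drawn from the device battery

def pick_target(profiles, energy_weight: float = 0.5) -> TargetProfile:
    """Return the target with the lowest weighted time/energy cost."""
    def cost(p: TargetProfile) -> float:
        return (1.0 - energy_weight) * p.exec_time_s + energy_weight * p.energy_j
    return min(profiles, key=cost)

if __name__ == "__main__":
    # Illustrative numbers for a single compute-heavy task.
    candidates = [
        TargetProfile("cpu",   exec_time_s=2.0, energy_j=3.0),
        TargetProfile("gpu",   exec_time_s=0.6, energy_j=1.2),  # embedded GPU via OpenCL
        TargetProfile("cloud", exec_time_s=0.9, energy_j=0.8),  # includes transfer cost
    ]
    print("chosen target:", pick_target(candidates).name)
```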

    AdaMEC: Towards a Context-Adaptive and Dynamically-Combinable DNN Deployment Framework for Mobile Edge Computing

    With the rapid development of deep learning, recent research on intelligent and interactive mobile applications (e.g., health monitoring, speech recognition) has attracted extensive attention. These applications call for the mobile edge computing scheme, i.e., offloading part of the computation from mobile devices to edge devices for inference acceleration and transmission load reduction. Current practice relies on collaborative DNN partition and offloading to satisfy predefined latency requirements, an approach that cannot adapt to the dynamic deployment context at runtime. AdaMEC, a context-adaptive and dynamically combinable DNN deployment framework for mobile edge computing, is proposed to address this limitation and consists of three novel techniques. First, once-for-all DNN pre-partition divides the DNN at the primitive operator level and stores the partitioned modules as executable files, termed pre-partitioned DNN atoms. Second, context-adaptive DNN atom combination and offloading introduces a graph-based decision algorithm to quickly search for a suitable combination of atoms and adaptively make the offloading plan under dynamic deployment contexts. Third, a runtime latency predictor provides timely latency feedback for DNN deployment, considering both DNN configurations and dynamic contexts. Extensive experiments demonstrate that AdaMEC outperforms state-of-the-art baselines, reducing latency by up to 62.14% and saving 55.21% of memory on average.
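    As a rough illustration of the partition-and-offload decision (not AdaMEC's actual graph-based algorithm), the sketch below picks the latency-minimising cut point in a linear chain of pre-partitioned atoms when the prefix runs on the device and the suffix on the edge. Function names and all latency numbers are hypothetical.

```python
# Minimal sketch: choose the cut point of a linear chain of DNN "atoms"
# that minimises end-to-end latency; all numbers are illustrative.

def best_cut(device_ms, edge_ms, output_upload_ms, input_upload_ms):
    """Latency-minimising cut for a linear chain of n atoms.

    device_ms[i] / edge_ms[i]: latency of atom i on the device / the edge.
    output_upload_ms[i]:       time to send atom i's output to the edge.
    input_upload_ms:           time to send the raw model input to the edge.
    Cut k means atoms [0, k) run on the device and [k, n) run on the edge.
    """
    n = len(device_ms)
    best_k, best_lat = 0, float("inf")
    for k in range(n + 1):
        transfer = input_upload_ms if k == 0 else output_upload_ms[k - 1]
        if k == n:
            transfer = 0.0  # nothing offloaded, so nothing to upload
        latency = sum(device_ms[:k]) + transfer + sum(edge_ms[k:])
        if latency < best_lat:
            best_k, best_lat = k, latency
    return best_k, best_lat

if __name__ == "__main__":
    # Four hypothetical atoms of a small CNN.
    dev = [12.0, 30.0, 45.0, 8.0]    # ms per atom on the phone
    edge = [2.0, 5.0, 7.0, 1.5]      # ms per atom on the edge server
    out_up = [20.0, 6.0, 3.0, 0.5]   # ms to upload each atom's output
    k, lat = best_cut(dev, edge, out_up, input_upload_ms=35.0)
    print(f"cut after atom {k}: {lat:.1f} ms end-to-end")
```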

    Mobile Edge Computing: From Task Load Balancing to Real-World Mobile Sensing Applications

    University of Technology Sydney. Faculty of Engineering and Information Technology. With the rapid development of mobile computing technologies and the Internet of Things, capable and affordable edge devices that can provide in-proximity computing services for mobile users have become increasingly available. Moreover, a large number of mobile edge computing (MEC) systems have been developed to enhance various aspects of people's daily life, including big mobile data, healthcare, intelligent transportation, connected vehicles, smart building control, indoor localization, and many others. Although MEC systems can provide mobile users with swift computing services and conserve devices' energy by processing their tasks, significant research challenges remain in several areas, including resource management, task scheduling, service placement, and application development. For instance, computation offloading in MEC significantly benefits mobile users but brings new challenges for service providers: imbalance and inefficiency are the two key issues when making computation offloading decisions among MEC servers. On the other hand, little work has designed and implemented novel, practical applications for edge-assisted mobile computing and mobile sensing, so the power of mobile edge computing has not yet been fully unleashed from either a theoretical or a practical perspective. In this thesis, to address the above challenges from both perspectives, we present four research studies within the scope of MEC: load balancing of computation offloading, fairness in workload scheduling, edge-assisted wireless sensing, and cross-domain learning for real-world edge sensing. The thesis consists of two major parts. In the first part, we investigate load balancing issues of computation offloading in MEC. First, we present a novel collaborative computation offloading mechanism for balanced mobile cloudlet networks. Then, a fairness-oriented task offloading scheme for IoT applications in MEC is devised. The proposed computation offloading mechanisms combine algorithmic theory with the random mobility and opportunistic encounters of edge servers, thereby performing computation offloading for load balancing in a distributed manner. Through rigorous theoretical analyses and extensive simulations with real-world trace datasets, the proposed methods demonstrate significantly more balanced computation offloading, showing great potential to be applied in practice. In the second part, going beyond theory, we investigate two novel implementations with mobile edge computing: edge-assisted wireless crowdsensing for outdoor RSS maps, and urban traffic prediction with cross-domain learning. We implement our ideas in the iMap and BuildSenSys systems and demonstrate them with real-world datasets to show the effectiveness of the proposed applications. We believe that these algorithms and applications hold great promise for future technological advancement in mobile edge computing.
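    To make the distributed load-balancing intuition concrete, here is a minimal sketch, not the thesis' actual mechanism: a device that opportunistically encounters several edge servers offloads each task to the least-loaded one currently in range, which keeps load spread evenly across servers. Server names and counts are illustrative.

```python
# Minimal sketch of opportunistic, distributed load balancing among
# edge servers; hypothetical, not the thesis' actual mechanism.
import random

def offload(encountered_loads):
    """Pick the least-loaded server among those currently in range."""
    return min(encountered_loads, key=encountered_loads.get)

if __name__ == "__main__":
    random.seed(0)
    loads = {f"edge-{i}": 0 for i in range(4)}
    for _ in range(1000):                                # 1000 arriving tasks
        in_range = random.sample(sorted(loads), k=2)     # device meets 2 servers
        target = offload({s: loads[s] for s in in_range})
        loads[target] += 1
    print(loads)  # counts stay close to 1000 / 4 = 250 tasks per server
```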

    GR-314 Reinforcement Learning based Computation Offloading Scheme to Optimize Latency-Energy in Collaborative Cloud Networks

    Growing technologies such as virtualization and artificial intelligence have become increasingly popular on mobile devices, but the lack of resources for processing these applications remains a major hurdle. Collaborative edge and cloud computing is one solution to this problem: remote servers have enough resources to support computation-heavy tasks and compute results faster. However, offloading computation to remote servers such as cloud and edge devices incurs transmission time and energy. There is therefore a need to find an optimal offloading ratio for the cloud and edge servers such that the entire computation, both remote and local, is completed with minimum energy consumption and minimum delay. We propose a multi-period deep deterministic policy gradient (MP-DDPG) algorithm that finds an optimal offloading policy by partitioning the task and offloading it across the collaborative cloud and edge network to reduce energy consumption. Our results show that MP-DDPG achieves the minimum latency and energy consumption in the collaborative cloud network. Compared with an existing DDPG-based approach, it achieves about a 65% speedup in terms of latency. We also observe that energy consumption decreases as the number of edge servers increases.
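    The sketch below illustrates the underlying latency-energy trade-off rather than the MP-DDPG algorithm itself: a task is split between local, edge, and cloud execution by an offloading ratio, and a coarse grid search stands in for the learned policy. Every rate, frequency, and power constant is an illustrative assumption.

```python
# Minimal sketch of a latency-energy cost for splitting one task among
# local, edge, and cloud execution; a grid search stands in for DDPG.
import itertools

F_LOCAL, F_EDGE, F_CLOUD = 1e9, 5e9, 20e9  # CPU cycles per second
R_EDGE, R_CLOUD = 20e6, 5e6                # uplink bits per second
P_TX, P_CPU = 0.5, 0.9                     # transmit / local-CPU power (W)

def cost(ratios, cycles, data_bits, w_energy=0.5):
    a_l, a_e, a_c = ratios
    t_local = a_l * cycles / F_LOCAL
    t_edge  = a_e * data_bits / R_EDGE + a_e * cycles / F_EDGE
    t_cloud = a_c * data_bits / R_CLOUD + a_c * cycles / F_CLOUD
    latency = max(t_local, t_edge, t_cloud)     # the three parts run in parallel
    energy  = P_CPU * t_local + P_TX * (a_e * data_bits / R_EDGE
                                        + a_c * data_bits / R_CLOUD)
    return (1 - w_energy) * latency + w_energy * energy

if __name__ == "__main__":
    steps = [i / 10 for i in range(11)]
    splits = [(l, e, 1 - l - e) for l, e in itertools.product(steps, steps)
              if l + e <= 1 + 1e-9]
    best = min(splits, key=lambda r: cost(r, cycles=2e9, data_bits=8e6))
    print("best (local, edge, cloud) split:", tuple(round(x, 1) for x in best))
```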

    Socially Trusted Collaborative Edge Computing in Ultra Dense Networks

    Small cell base stations (SBSs) endowed with cloud-like computing capabilities are considered a key enabler of edge computing (EC), which provides ultra-low latency and location-awareness for a variety of emerging mobile applications and the Internet of Things. However, due to the limited computation resources of an individual SBS, providing high-quality computation services to its users is challenging when the SBS is overloaded with an excessive computation workload. In this paper, we propose collaborative edge computing among SBSs: SBSs form coalitions to share computation resources with each other, thereby accommodating more computation workload in the edge system and reducing reliance on the remote cloud. A novel SBS coalition formation algorithm is developed based on coalitional game theory to cope with new challenges in small-cell-based edge systems, including the co-provisioning of radio access and computing services, cooperation incentives, and potential security risks. To address these challenges, the proposed method (1) allows collaboration at both the user-SBS association stage and the SBS peer offloading stage by exploiting the ultra-dense deployment of SBSs, (2) develops a payment-based incentive mechanism that implements proportionally fair utility division to form stable SBS coalitions, and (3) builds a social trust network for managing security risks among SBSs due to collaboration. Systematic simulations in practical scenarios evaluate the efficacy of the proposed method and show that substantial edge computing performance improvements can be achieved.
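    As an illustration of the proportionally fair utility division mentioned above (a plausible reading, not the paper's exact rule), the sketch below splits a coalition's surplus utility among member SBSs in proportion to the computation capacity each contributes, so every member does at least as well as it would alone.

```python
# Minimal sketch of proportional surplus sharing within an SBS coalition;
# names and numbers are illustrative, not the paper's exact mechanism.

def split_coalition_utility(standalone, capacity, coalition_utility):
    """standalone[i]: utility SBS i earns on its own.
    capacity[i]:   computation capacity SBS i contributes to the coalition.
    Returns each SBS's payoff inside the coalition."""
    surplus = coalition_utility - sum(standalone)
    total_cap = sum(capacity)
    return [s + surplus * c / total_cap for s, c in zip(standalone, capacity)]

if __name__ == "__main__":
    standalone = [4.0, 2.5, 1.0]   # utilities when acting alone
    capacity   = [8.0, 4.0, 4.0]   # compute capacity each SBS shares (GHz)
    payoffs = split_coalition_utility(standalone, capacity, coalition_utility=10.0)
    print([round(p, 2) for p in payoffs])  # each member at least matches standalone
```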