
    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to varying degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
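
    The survey's four perspectives can be captured in a small data model. The sketch below is purely illustrative: the enum members are limited to the categories named in the abstract, and the `SurveyedWork` class and its field names are assumptions rather than anything defined in the paper.

    ```python
    from dataclasses import dataclass
    from enum import Enum, auto

    # Categories named in the abstract; the paper's full taxonomy has more.
    class ResourceType(Enum):
        COMPUTATION = auto()
        COMMUNICATION = auto()
        DATA = auto()
        STORAGE = auto()
        ENERGY = auto()

    class Objective(Enum):
        ESTIMATION = auto()
        DISCOVERY = auto()
        SHARING = auto()

    class Location(Enum):
        END_DEVICE = auto()
        EDGE_DEVICE = auto()
        CLOUD = auto()

    @dataclass
    class SurveyedWork:
        """Classification of one reviewed article along the four perspectives."""
        resource_type: ResourceType
        objective: Objective
        location: Location
        resource_use: str  # free-form here; the paper defines its own categories

    # Example: a hypothetical paper on estimating communication capacity at the edge.
    work = SurveyedWork(ResourceType.COMMUNICATION, Objective.ESTIMATION,
                        Location.EDGE_DEVICE, "bandwidth allocation")
    ```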

    Offloading for Mobile Device Performance Improvement

    Mobile devices, including smartphones, tablets, and wearable devices, are increasingly becoming part of everyday life. Because they must remain mobile, they are constrained in size and weight, which limits their resource capacity, e.g. processing power and battery life. One possible way to augment such resource-constrained devices is to use their surrounding resources efficiently, i.e. through some offloading technique. This paper studies how offloading tasks to surrounding resources affects both the performance of task execution and the battery life of the mobile device. Two mobile phones and two tablets (from two different manufacturers) are studied in the experiments to determine the impact of device characteristics. Two computationally demanding tasks, namely image processing and encryption/decryption, are used in these experiments. The results are compared to our earlier results on mobile devices of previous generations. We had assumed that the increased computing power of new devices would make offloading obsolete. Our results show gains both in energy saving and in computational performance with these mobile devices. The comparison to our earlier results shows that the performance increase of newer mobile device generations has not diminished the benefits of offloading. These results are in line with those presented in the literature and show that offloading could offer a viable approach for resource augmentation of mobile devices towards the edge/fog resources emphasized by the new 5G technology.
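
    The trade-off measured in the paper can be illustrated with a minimal offload-or-not decision rule. The function below is a sketch under assumed linear latency and energy models; the parameter names and the models are illustrative and are not taken from the paper's experimental setup.

    ```python
    def should_offload(task_cycles: float,
                       input_bytes: float,
                       local_cps: float,        # device CPU speed, cycles/s
                       remote_cps: float,       # surrounding/edge CPU speed, cycles/s
                       uplink_bps: float,       # available uplink bandwidth, bits/s
                       tx_power_w: float,       # radio transmit power, W
                       local_power_w: float) -> bool:
        """Decide whether offloading a task is worthwhile.

        A minimal latency/energy comparison in the spirit of the paper's
        experiments; the simple linear models are assumptions, not the
        authors' measurement methodology.
        """
        t_local = task_cycles / local_cps
        t_offload = (input_bytes * 8) / uplink_bps + task_cycles / remote_cps

        e_local = local_power_w * t_local                        # energy of local execution
        e_offload = tx_power_w * (input_bytes * 8) / uplink_bps  # energy of transmitting the input

        # Offload only if it is both faster and saves battery.
        return t_offload < t_local and e_offload < e_local
    ```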

    Live Prefetching for Mobile Computation Offloading

    Conventional designs of mobile computation offloading fetch user-specific data to the cloud prior to computing, called offline prefetching. However, this approach can result in excessive fetching of large volumes of data and cause heavy loads on radio-access networks. To solve this problem, this paper proposes the novel technique of live prefetching, which seamlessly integrates task-level computation prediction and prefetching within the cloud-computing process of a large program with numerous tasks. The technique avoids excessive fetching but retains the feature of leveraging prediction to reduce the program runtime and mobile transmission energy. By modeling the tasks in an offloaded program as a stochastic sequence, stochastic optimization is applied to design fetching policies that minimize mobile energy consumption under a deadline constraint. The policies enable real-time control of the prefetched-data sizes of candidates for future tasks. For slow fading, the optimal policy is derived and shown to have a threshold-based structure, selecting candidate tasks for prefetching and controlling their prefetched data based on their likelihoods. The result is extended to design close-to-optimal prefetching policies for fast-fading channels. Compared with fetching without prediction, live prefetching is shown theoretically to always reduce mobile energy consumption. Comment: To appear in IEEE Trans. on Wireless Communications.
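
    The threshold-based structure of the slow-fading policy can be sketched as follows: prefetch only candidate tasks whose likelihood exceeds a threshold, scale the prefetched bytes by likelihood, and respect a byte budget that stands in for the energy/deadline constraint. This is an illustrative simplification, not the optimal policy derived in the paper by stochastic optimization over channel states.

    ```python
    from typing import Dict

    def threshold_prefetch(candidates: Dict[str, float],
                           data_size: Dict[str, int],
                           likelihood_threshold: float,
                           budget_bytes: int) -> Dict[str, int]:
        """Choose how much input data to prefetch for each candidate next task.

        Simplified threshold rule: skip unlikely candidates, scale the
        prefetched-data size by likelihood, and stop when the budget is spent.
        The scaling rule and the byte budget are assumptions for illustration.
        """
        plan: Dict[str, int] = {}
        remaining = budget_bytes
        # Serve the most likely candidates first.
        for task, p in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
            if p < likelihood_threshold or remaining <= 0:
                continue
            fetch = min(int(p * data_size[task]), remaining)
            plan[task] = fetch
            remaining -= fetch
        return plan

    # Example: three possible next tasks after the current one finishes.
    plan = threshold_prefetch(
        candidates={"decode": 0.6, "filter": 0.3, "upload": 0.1},
        data_size={"decode": 4_000_000, "filter": 2_000_000, "upload": 500_000},
        likelihood_threshold=0.25,
        budget_bytes=3_000_000,
    )
    ```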

    Mobile data and computation offloading in mobile cloud computing

    Global mobile traffic is increasing dramatically due to the popularity of smart mobile devices and data-hungry mobile applications. Mobile data offloading is considered a promising solution to alleviate congestion in cellular networks, while mobile computation offloading can move computation-intensive tasks and large data storage from mobile devices to the cloud. In this thesis, we first study the mobile data offloading problem under the architecture of mobile cloud computing. In order to minimize the overall cost of data delivery, we formulate the data offloading process as a finite-horizon Markov decision process, and we propose two data offloading algorithms to achieve minimal communication cost. Then, we consider a mobile data offloading market where a mobile network operator can sell bandwidth to mobile users. We formulate this problem as a multi-item auction in order to maximize the profit of the mobile network operator, and we propose one robust optimization algorithm and two iterative algorithms to solve it. Finally, we investigate the computation offloading problem in mobile edge computing. We focus on workload balancing problems to minimize the transmission and computation latency of computation offloading. We formulate this problem as a population game in order to analyze the aggregate offloading decisions, and we propose two workload balancing algorithms based on evolutionary dynamics and revision protocols. Simulation results show the efficiency and robustness of the proposed methods.
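
    The first contribution, formulating data offloading as a finite-horizon Markov decision process, can be illustrated with a toy backward-induction solver. The state, action, and cost model below (units of data remaining, cellular transmission at a per-unit cost, opportunistic free WiFi windows, a flush of leftover data at the deadline) is an assumed simplification and not the thesis's exact formulation or algorithms.

    ```python
    def offloading_policy(D: int, T: int, p_wifi: float, wifi_cap: int, c_cell: float):
        """Backward induction for a toy finite-horizon data-offloading MDP.

        State: units of data still to deliver. Action: units sent over cellular
        this slot (cost c_cell each). With probability p_wifi a WiFi window
        appears and delivers up to wifi_cap units for free; any data left at
        the deadline is flushed over cellular. Returns the value table V[t][d]
        and the greedy action table A[t][d].
        """
        V = [[0.0] * (D + 1) for _ in range(T + 1)]
        A = [[0] * (D + 1) for _ in range(T + 1)]
        for d in range(D + 1):
            V[T][d] = c_cell * d  # deadline: remaining data goes over cellular

        for t in range(T - 1, -1, -1):
            for d in range(D + 1):
                best_cost, best_a = float("inf"), 0
                for a in range(d + 1):  # units pushed over cellular now
                    left = d - a
                    cost = (c_cell * a
                            + p_wifi * V[t + 1][max(left - wifi_cap, 0)]
                            + (1 - p_wifi) * V[t + 1][left])
                    if cost < best_cost:
                        best_cost, best_a = cost, a
                V[t][d], A[t][d] = best_cost, best_a
        return V, A

    # Example: 10 data units, 5 slots before the deadline.
    V, A = offloading_policy(D=10, T=5, p_wifi=0.4, wifi_cap=3, c_cell=1.0)
    print("expected minimal cost:", V[0][10], "first action:", A[0][10])
    ```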