
    Delay-tolerant sequential decision making for task offloading in mobile edge computing environments

    In recent years, there has been a significant increase in the use of mobile devices and their applications. Meanwhile, cloud computing has been regarded as the latest generation of computing infrastructure, and its concepts and implementations have evolved to meet the demands of modern applications. Mobile edge computing (MEC) is a computing paradigm that provides cloud services close to users at the edge of the network. Since mobile nodes move between different MEC servers, the main aim is to connect to the best server, at the right time and with respect to server load, in order to optimize the quality of service (QoS) of the mobile nodes. We tackle the offloading decision-making problem by adopting the principles of optimal stopping theory (OST) to minimize the execution delay in a sequential decision manner. A performance evaluation is provided using real-world data sets, with baseline deterministic and stochastic offloading models for comparison. The results show that our approach significantly reduces the task execution delay and comes closer to the optimal solution than the other offloading methods.
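
    To make the sequential flavour of this idea concrete, below is a minimal sketch of an OST-style stopping rule, assuming a classical secretary-type 1/e threshold over the delays observed at successive servers; the rule, the delay values and the function name are illustrative assumptions rather than the paper's actual model.

        import math
        import random

        def offload_decision(delays):
            # Observation phase: pass over the first n/e servers without committing,
            # then offload at the first server whose delay beats everything seen so far.
            n = len(delays)
            k = max(1, int(n / math.e))
            best_seen = min(delays[:k])
            for i in range(k, n):
                if delays[i] <= best_seen:
                    return i
            return n - 1  # forced to use the last server encountered

        # Toy usage: expected execution delays (seconds) at 8 successively roamed MEC servers
        random.seed(0)
        delays = [random.uniform(0.5, 3.0) for _ in range(8)]
        i = offload_decision(delays)
        print(f"offload at server {i}: {delays[i]:.2f}s (offline optimum {min(delays):.2f}s)")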

    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    The resource constraints of edge servers make it difficult to serve a large number of Mobile Devices' (MDs) requests simultaneously. The Mobile Network Operator (MNO) must therefore decide how to delegate MD requests to its Mobile Edge Computing (MEC) server in order to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can increase MNO performance thanks to the flexible deployment and high mobility of UAVs and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, on the other hand, is a promising way to bridge the gap between MDs' limited processing resources and the high computing demands of upcoming applications. This study reviews research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models used for computation offloading in such networks. In addition, this article examines several intelligent pricing techniques in different structures of the UAV-MEC network. Finally, this work highlights important open research issues and future research directions for AI in computation offloading and for applying intelligent pricing strategies in the UAV-MEC network.

    Deep Meta Q-Learning based Multi-Task Offloading in Edge-Cloud Systems

    Resource-constrained edge devices cannot efficiently handle the explosive growth of mobile data and the increasing computational demand of modern-day user applications. Task offloading allows the migration of complex tasks from user devices to remote edge-cloud servers, thereby reducing their computational burden and energy consumption while also improving the efficiency of task processing. However, obtaining the optimal offloading strategy in a multi-task offloading decision-making process is an NP-hard problem. Existing deep learning techniques with slow learning rates and weak adaptability are not suitable for dynamic multi-user scenarios. In this article, we propose a novel deep meta-reinforcement learning-based approach to the multi-task offloading problem using a combination of first-order meta-learning and deep Q-learning methods. We establish the meta-generalization bounds for the proposed algorithm and demonstrate that it can reduce the time and energy consumption of IoT applications by up to 15%. Through rigorous simulations, we show that our method achieves near-optimal offloading solutions while also being able to adapt to dynamic edge-cloud environments.
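
    A hedged sketch of how a first-order (Reptile-style) meta-update can wrap a few steps of Q-learning per offloading task; the linear Q-function, the synthetic task generator and all hyper-parameters below are assumptions for illustration, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        STATE_DIM, N_ACTIONS = 4, 2          # action 0: execute locally, 1: offload

        def sample_task():
            # Hypothetical offloading "task": a random linear cost structure standing
            # in for one edge-cloud environment; returns a one-step reward function.
            w_true = rng.normal(size=(N_ACTIONS, STATE_DIM))
            return lambda s, a: float(w_true[a] @ s) + rng.normal(scale=0.1)

        def inner_q_updates(theta, reward_fn, steps=20, lr=0.05, gamma=0.9):
            # Adaptation phase: a few steps of linear Q-learning on one task.
            w = theta.copy()
            s = rng.normal(size=STATE_DIM)
            for _ in range(steps):
                a = int(np.argmax(w @ s)) if rng.random() > 0.2 else int(rng.integers(N_ACTIONS))
                r = reward_fn(s, a)
                s_next = rng.normal(size=STATE_DIM)
                td_target = r + gamma * np.max(w @ s_next)
                td_error = (w[a] @ s) - td_target
                w[a] -= lr * td_error * s        # gradient step on the squared TD error
                s = s_next
            return w

        # Reptile-style (first-order) meta-training loop over offloading tasks
        theta = np.zeros((N_ACTIONS, STATE_DIM))
        meta_lr = 0.1
        for _ in range(200):
            phi = inner_q_updates(theta, sample_task())
            theta += meta_lr * (phi - theta)     # first-order meta update
        print("meta-initialisation learned:", np.round(theta, 2))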

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.

    Computation offloading in mobile edge computing: an optimal stopping theory approach

    In recent years, new mobile devices and applications with different functionalities and uses, such as drones, Autonomous Vehicles (AVs) and highly advanced smartphones, have emerged. Such devices are now able to run applications such as augmented and virtual reality, intensive contextual data processing, intelligent vehicle control, traffic management, data mining and interactive applications. Although these mobile nodes have the computing and communication capabilities to run such applications, they remain unable to handle them efficiently, mainly due to the significant processing required over relatively short timescales. Additionally, they consume a considerable amount of battery power. Such limitations have motivated the idea of computation offloading, where computing tasks are sent to the Cloud instead of being executed locally at the mobile node. The technical concept behind this idea is referred to as Mobile Cloud Computing (MCC). However, using the Cloud for computational task offloading of mobile applications introduces significant latency and adds load to the radio and backhaul of mobile networks. To cope with these challenges, the Cloud's resources are being deployed close to the users at the edge of the network, in places such as the Base Station (BS) of mobile networks or indoor locations such as Wi-Fi and 3G/4G access points. This architecture is referred to as Mobile Edge Computing or Multi-access Edge Computing (MEC). Computation offloading in such a setting faces the challenge of deciding when, and to which server, computational tasks should be offloaded. This dissertation aims at designing time-optimised task offloading decision-making algorithms in MEC environments, in order to find the optimal time for task offloading. The random variables that can influence the expected processing time at the MEC server are investigated using various probability distributions and representations. In the context being assessed, while the mobile node is sequentially roaming through (connecting to) a set of MEC servers, it has to decide locally and autonomously which server should be used for offloading the computing task. To deal with this sequential problem, the offloading decision-making is modelled as an optimal stopping time problem, adopting the principles of Optimal Stopping Theory (OST). Three assessment approaches, including simulation, real data sets and an actual implementation on real devices, are used to evaluate the performance of the models. The results indicate that OST-based offloading strategies can play an important role in optimising the task offloading decision. In particular, in the simulation approach, the average processing time achieved by the proposed models is higher than the optimal by only 10%. On the real data sets, the models are still near-optimal, with only a 25% difference compared to the optimal, while in the real implementation the models select, most of the time, the optimal node for processing the task. Furthermore, the presented algorithms are lightweight and local, and can hence be implemented on mobile nodes (for instance, vehicles or smartphones).
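
    As an illustration of the optimal-stopping framing, the sketch below derives finite-horizon stopping thresholds by backward induction over an assumed delay distribution and compares the resulting rule against the offline optimum; the exponential delays, the horizon of ten servers and the Monte-Carlo estimation are assumptions, not the dissertation's actual models or data sets.

        import random

        def stopping_thresholds(n, sample_delay, n_mc=20000):
            # v[k]: expected delay-to-go with k servers still ahead (including the
            # current one) under the rule "offload now iff observed delay <= v[k-1]".
            samples = [sample_delay() for _ in range(n_mc)]
            v = [0.0] * (n + 1)
            v[1] = sum(samples) / n_mc                      # last server: must accept
            for k in range(2, n + 1):
                v[k] = sum(min(x, v[k - 1]) for x in samples) / n_mc
            return v

        def run_episode(n, sample_delay, v):
            delays = [sample_delay() for _ in range(n)]
            for i, d in enumerate(delays):
                remaining = n - i
                if remaining == 1 or d <= v[remaining - 1]:
                    return d, min(delays)                   # (achieved delay, offline optimum)

        # Toy experiment: 10 roamed MEC servers with exponentially distributed delays
        random.seed(1)
        sample_delay = lambda: random.expovariate(1.0)
        v = stopping_thresholds(10, sample_delay)
        results = [run_episode(10, sample_delay, v) for _ in range(5000)]
        avg_ost = sum(r[0] for r in results) / len(results)
        avg_opt = sum(r[1] for r in results) / len(results)
        print(f"OST rule: {avg_ost:.3f}  offline optimum: {avg_opt:.3f}")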

    Fog Computing

    Everything that is not a computer, in the traditional sense, is being connected to the Internet. These devices are also referred to as the Internet of Things (IoT), and they are putting pressure on the current network infrastructure. Not all devices are intensive data producers, and part of them can be used beyond their original intent by sharing their computational resources. The combination of these two factors can be used either to gain insight from the data closer to where it originates, or to extend into new services by making computational resources available at, but not exclusively at, the edge of the network. Fog computing is a new computational paradigm that provides these devices with a new form of cloud at a closer distance, where IoT and other devices with connectivity capabilities can offload computation. In this dissertation, we explore the fog computing paradigm and compare it with other paradigms, namely cloud and edge computing. We then propose a novel architecture that can be used to form, or be part of, this new paradigm. The implementation was tested on two types of applications: the first had the main objective of demonstrating the correctness of the implementation, while the other had the goal of validating the characteristics of fog computing.

    Mobile data and computation offloading in mobile cloud computing

    Global mobile traffic is increasing dramatically due to the popularity of smart mobile devices and data-hungry mobile applications. Mobile data offloading is considered a promising solution to alleviate congestion in cellular networks, while mobile computation offloading can move computation-intensive tasks and large data storage from mobile devices to the cloud. In this thesis, we first study the mobile data offloading problem under the architecture of mobile cloud computing. In order to minimize the overall cost of data delivery, we formulate the data offloading process as a finite-horizon Markov decision process and propose two data offloading algorithms that achieve minimal communication cost. Then, we consider a mobile data offloading market in which a mobile network operator can sell bandwidth to mobile users. We formulate this problem as a multi-item auction in order to maximize the profit of the mobile network operator, and we propose one robust optimization algorithm and two iterative algorithms to solve it. Finally, we investigate the computation offloading problem in mobile edge computing, focusing on workload balancing to minimize the transmission and computation latency of computation offloading. We formulate this problem as a population game in order to analyze the aggregate offloading decisions, and we propose two workload balancing algorithms based on evolutionary dynamics and revision protocols. Simulation results show the efficiency and robustness of the proposed methods.
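
    To ground the finite-horizon MDP formulation, here is a small backward-induction sketch for a delayed-offloading setting in which data can wait for intermittently available Wi-Fi or be pushed over costlier cellular before a deadline; the state space, rates, costs and Wi-Fi probability are illustrative assumptions, not the thesis' model.

        import numpy as np

        # Illustrative assumptions: integer data units, a hard deadline of T slots,
        # Wi-Fi available i.i.d. with probability P_WIFI, and the rates/costs below.
        T, D_MAX = 10, 20
        P_WIFI, R_WIFI, C_WIFI = 0.4, 4, 0.1
        R_CELL, C_CELL = 2, 1.0

        V = np.zeros((T + 1, D_MAX + 1))               # V[t][d]: min expected cost, d units left at slot t
        policy = np.zeros((T, D_MAX + 1), dtype=int)   # 0 = wait for Wi-Fi, 1 = send over cellular now
        V[T] = C_CELL * np.arange(D_MAX + 1)           # at the deadline, flush everything over cellular

        for t in range(T - 1, -1, -1):
            for d in range(D_MAX + 1):
                wifi_sent = min(d, R_WIFI)
                wait = (P_WIFI * (C_WIFI * wifi_sent + V[t + 1][d - wifi_sent])
                        + (1 - P_WIFI) * V[t + 1][d])
                cell_sent = min(d, R_CELL)
                cellular = C_CELL * cell_sent + V[t + 1][d - cell_sent]
                V[t][d], policy[t][d] = min((wait, 0), (cellular, 1))

        print("expected delivery cost for 20 units:", round(float(V[0][D_MAX]), 2))
        print("first-slot action with 20 units left:", "cellular" if policy[0][D_MAX] else "wait")

    The backward pass fills the value table from the deadline towards slot 0, so the greedy choice at each state is optimal for the whole remaining horizon, which is the standard way such finite-horizon formulations are solved.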

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous amounts of data to centralized cloud servers, fog/edge computing can reduce processing delay and network traffic significantly. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects ten articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, helping readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing technologies.

    Edge Computing for Internet of Things

    The Internet of Things (IoT) is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and they promise to optimise travel, reduce energy usage, and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and cloud storage, and it allows decisions to be made locally without having to wait for cloud servers to respond.