    Resource allocation in mobile edge cloud computing for data-intensive applications

    Rapid advancement in the mobile telecommunications industry has motivated the development of mobile applications in a wide range of social and scientific domains. However, mobile computing (MC) platforms still have several constraints, such as limited computation resources, short battery life and high sensitivity to network capabilities. To overcome these limitations and benefit from the huge advancement in mobile telecommunications and the rapid evolution of distributed resources, mobile-aware computing models such as mobile cloud computing (MCC) and mobile edge computing (MEC) have been proposed. The main problem is to decide on an application execution plan that satisfies quality of service (QoS) requirements given the current status of the network and of device energy. However, the role of application data in offloading optimisation has not been studied thoroughly, particularly with respect to how data size and distribution impact application offloading. This problem can be referred to as data-intensive mobile application offloading optimisation. To address it, this thesis presents novel optimisation frameworks, techniques and algorithms for mobile application resource allocation in mobile-aware computing environments, proposed to provide optimised schedules for data-intensive mobile applications. Experimental results show that the proposed tools can optimise the scheduling and execution of data-intensive applications on various computing environments to meet application QoS requirements. Furthermore, the results clearly demonstrate the significant effect of the data-size parameter on the scheduling of mobile application execution. In addition, the thesis provides an analytical investigation of mobile-aware computing environments for a particular type of mobile application; this investigation offers a performance analysis that helps users choose target computation resources based on application structure, input data, and mobile network status.
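
    The core trade-off studied here, that a larger input makes offloading costlier while the computation cost stays fixed, can be illustrated with a toy energy model. The sketch below is a minimal illustration under assumed constants (energy_per_cycle, tx_power_w, and the bandwidth and cycle figures are all invented for the demo), not the thesis's actual framework:

        # Toy offloading decision: illustrative assumptions only, not the
        # thesis's optimisation framework.
        def local_energy(cycles, energy_per_cycle=1e-9):
            """Energy (J) to execute the task on the mobile CPU."""
            return cycles * energy_per_cycle

        def offload_energy(data_bits, bandwidth_bps, tx_power_w=0.5):
            """Energy (J) to transmit the input data to an edge server."""
            return tx_power_w * (data_bits / bandwidth_bps)

        def should_offload(cycles, data_bits, bandwidth_bps):
            """Offload only if transmission costs less energy than local execution."""
            return offload_energy(data_bits, bandwidth_bps) < local_energy(cycles)

        # The same 2-gigacycle task flips from "offload" to "local" as input grows.
        for mb in (1, 10, 100):
            bits = mb * 8e6
            print(mb, "MB ->", "offload" if should_offload(2e9, bits, 1e7) else "local")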

    System Optimisation for Multi-access Edge Computing Based on Deep Reinforcement Learning

    Multi-access edge computing (MEC) is an emerging and important distributed computing paradigm that aims to extend cloud services to the network edge to reduce network traffic and service latency. Proper system optimisation and maintenance are crucial to maintaining a high quality of service (QoS) for end-users. However, with the increasing complexity of MEC architectures and mobile applications, effectively optimising MEC systems is non-trivial. Traditional optimisation methods are generally based on simplified mathematical models and fixed heuristics, which rely heavily on expert knowledge. As a consequence, when facing dynamic MEC scenarios, considerable human effort and expertise are required to redesign the model and tune the heuristics, which is time-consuming. This thesis aims to develop deep reinforcement learning (DRL) methods to handle system optimisation problems in MEC. Instead of developing fixed heuristic algorithms for these problems, this thesis designs DRL-based methods that enable systems to learn optimal solutions on their own. This research demonstrates the effectiveness of DRL-based methods on two crucial system optimisation problems: task offloading and service migration. Specifically, this thesis first investigates the dependent task offloading problem, which considers the inner dependencies of tasks, and builds a DRL-based method combining a sequence-to-sequence (seq2seq) neural network to address it. Experimental results demonstrate that our method outperforms existing heuristic algorithms and achieves near-optimal performance. To further enhance the learning efficiency of the DRL-based task offloading method on unseen learning tasks, this thesis then integrates meta reinforcement learning to handle the task offloading problem; our method can adapt quickly to new environments with a small number of gradient updates and samples. Finally, this thesis exploits a DRL-based solution for the service migration problem in MEC that considers user mobility. This research models service migration as a partially observable Markov decision process (POMDP) and proposes a tailored actor-critic algorithm combining a long short-term memory (LSTM) network to solve the POMDP. Results from extensive experiments based on real-world mobility traces demonstrate that our method consistently outperforms both heuristic and state-of-the-art learning-driven algorithms across various MEC scenarios.
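
    As a drastically simplified stand-in for the DRL methods the thesis develops, the sketch below shows the general shape of learning an offloading policy by reinforcement: tabular Q-learning over a discretised channel state with a binary local-versus-offload action. The state space, reward function and constants are all assumptions made for illustration:

        import numpy as np

        # Tabular Q-learning for a binary offloading decision; a toy stand-in
        # for the thesis's DRL methods, with invented states and rewards.
        rng = np.random.default_rng(0)
        n_states, n_actions = 4, 2          # channel-quality levels; 0=local, 1=offload
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

        def reward(state, action):
            # Offloading pays off only in good channel states (higher index = better).
            return (state - 1.5) if action == 1 else 0.0

        state = int(rng.integers(n_states))
        for _ in range(5000):
            if rng.random() < eps:
                action = int(rng.integers(n_actions))   # explore
            else:
                action = int(Q[state].argmax())         # exploit
            r = reward(state, action)
            next_state = int(rng.integers(n_states))    # channel evolves randomly
            Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

        print(Q.argmax(axis=1))  # learned policy: local in bad channels, offload in good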

    A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems

    In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery-powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. The design of algorithms to decide which applications or tasks should be offloaded, and where to execute them, is therefore crucial. One option that has been gaining momentum lately is the use of reinforcement learning (RL) and, in particular, deep reinforcement learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them but not the primary focus. Other surveys have reviewed techniques to solve the computation offloading problem, with RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey, in which we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and the time-varying characteristics considered in the analysed scenarios. In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and we analyse which scenarios and techniques are most frequently considered in the literature. Finally, this survey identifies a number of research challenges, future directions and areas for further study.
    Funding: Consejería de Educación de la Junta de Castilla y León and FEDER (VA231P20); Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (projects PID2020-112675RB-C42, PID2021-124463OBI00 and RED2018-102585-T, funded by MCIN/AEI/10.13039/501100011033).
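
    The papers the survey classifies share a common skeleton: computation offloading is cast as a Markov decision process whose state captures device and network conditions, whose actions are offloading decisions, and whose reward encodes latency or energy objectives. The minimal environment below sketches that formulation; the state fields, cost numbers and class names are illustrative assumptions, not taken from any surveyed paper:

        import random
        from dataclasses import dataclass

        @dataclass
        class State:
            queue_len: int       # tasks waiting on the device
            channel_gain: float  # current uplink quality in [0, 1)
            battery: float       # remaining energy fraction

        class OffloadEnv:
            """Toy MDP for offloading: action 0 = execute locally, 1 = offload."""

            def reset(self) -> State:
                self.s = State(queue_len=random.randint(0, 10),
                               channel_gain=random.random(),
                               battery=1.0)
                return self.s

            def step(self, action: int):
                # Offloading is fast when the channel is good; local execution
                # has fixed latency but drains the battery faster here.
                latency = 1.0 / max(self.s.channel_gain, 0.1) if action else 5.0
                self.s.battery -= 0.01 if action else 0.05
                reward = -latency          # the RL agent learns to minimise delay
                done = self.s.battery <= 0
                return self.s, reward, done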

    Exploiting Non-Causal CPU-State Information for Energy-Efficient Mobile Cooperative Computing

    Scavenging the idle computation resources of the enormous number of mobile devices can provide a powerful platform for local mobile cloud computing. This vision can be realized by peer-to-peer cooperative computing between edge devices, referred to as co-computing. This paper considers a co-computing system in which a user offloads computation of input data to a helper. The helper controls the offloading process with the objective of minimizing the user's energy consumption, based on a predicted CPU-idling profile that specifies the amount of computation resource available at the helper for co-computing. Consider the scenario in which the user has a one-shot input-data arrival and the helper buffers offloaded bits. The energy-efficient co-computing problem is decomposed into two sub-problems: a slave problem corresponding to adaptive offloading and a master problem corresponding to data partitioning. Given a fixed offloaded data size, adaptive offloading aims at minimizing the energy consumption for offloading by controlling the offloading rate under deadline and buffer constraints. By deriving necessary and sufficient conditions for the optimal solution, we characterize the structure of the optimal policies and propose algorithms for computing them. Furthermore, we show that the problem of optimally partitioning data between offloading and local computing at the user is convex, admitting a simple solution using the sub-gradient method. Finally, the developed design approach for co-computing is extended to the scenario of bursty data arrivals at the user, accounting for data-causality constraints. Simulation results verify the effectiveness of the proposed algorithms.
    Comment: Submitted to a possible journal
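
    The data-partitioning sub-problem, which the paper shows to be convex and solvable by the sub-gradient method, can be sketched with a toy quadratic energy model: choose how many of the L input bits to offload so that the sum of offloading and local-computing energy is minimised. The coefficients, step size and quadratic form below are assumptions for illustration, not the paper's model:

        # Projected-(sub)gradient sketch for convex data partitioning with a
        # toy quadratic energy model (all coefficients are assumptions).
        L = 1e6                  # total input-data size (bits)
        a, b = 2e-12, 1e-12      # offload / local energy coefficients (J per bit^2)

        def energy(l):
            """Total energy when l bits are offloaded and L - l computed locally."""
            return a * l**2 + b * (L - l)**2

        def grad(l):
            return 2 * a * l - 2 * b * (L - l)

        l, step = L / 2, 1e11    # start from an even split
        for _ in range(200):
            l -= step * grad(l)
            l = min(max(l, 0.0), L)   # project back onto the feasible set [0, L]

        # For this quadratic the optimum is l* = b*L/(a + b); the iterate matches it.
        print(l, b * L / (a + b))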

    Deep Meta Q-Learning based Multi-Task Offloading in Edge-Cloud Systems

    Resource-constrained edge devices cannot efficiently handle the explosive growth of mobile data and the increasing computational demand of modern-day user applications. Task offloading allows the migration of complex tasks from user devices to remote edge-cloud servers, thereby reducing their computational burden and energy consumption while also improving the efficiency of task processing. However, obtaining the optimal offloading strategy in a multi-task offloading decision-making process is an NP-hard problem. Existing deep learning techniques with slow learning rates and weak adaptability are not suitable for dynamic multi-user scenarios. In this article, we propose a novel deep meta-reinforcement-learning-based approach to the multi-task offloading problem using a combination of first-order meta-learning and deep Q-learning methods. We establish meta-generalization bounds for the proposed algorithm and demonstrate that it can reduce the time and energy consumption of IoT applications by up to 15%. Through rigorous simulations, we show that our method achieves near-optimal offloading solutions while also being able to adapt to dynamic edge-cloud environments.
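
    The combination named in the abstract, first-order meta-learning wrapped around Q-learning, can be sketched in miniature: an outer Reptile-style loop nudges a shared initialization toward weights adapted on each sampled environment, so that a few inner updates suffice on a new task. Everything below (the linear Q-function, synthetic tasks and constants) is an illustrative assumption, not the paper's algorithm:

        import numpy as np

        # Reptile-style first-order meta-learning around a linear Q-function;
        # a miniature sketch of the idea, not the paper's algorithm.
        rng = np.random.default_rng(1)
        theta = np.zeros(4)                      # meta-initialization of Q-weights

        def inner_q_update(theta, env_bias, steps=50, lr=0.05):
            """A few gradient steps toward task-specific Q-targets
            (stands in for DQN training on one sampled environment)."""
            w = theta.copy()
            for _ in range(steps):
                x = rng.normal(size=4)               # random state features
                target = x @ np.ones(4) + env_bias   # task-specific Q-target
                w += lr * (target - x @ w) * x       # squared-error gradient step
            return w

        for _ in range(100):                     # meta-training over sampled tasks
            env_bias = rng.normal()              # each task shifts the Q-targets
            w = inner_q_update(theta, env_bias)
            theta += 0.1 * (w - theta)           # Reptile: move init toward adapted weights

        print(theta)  # an initialization that adapts quickly on new tasks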