
    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    The resource constraints of edge servers make it difficult to serve a large number of Mobile Devices' (MDs) requests simultaneously. The Mobile Network Operator (MNO) must therefore decide how to delegate MD requests to its Mobile Edge Computing (MEC) server so as to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can improve MNO performance thanks to the flexible deployment and high mobility of UAVs and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, meanwhile, is a promising way to bridge the gap between MDs' limited processing resources and the high computing demands of upcoming applications. This study surveys research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models used for computation offloading in the UAV-MEC network. In addition, this article examines several intelligent pricing techniques under different structures in the UAV-MEC network. Finally, this work highlights important open research issues and future research directions for AI in computation offloading and in applying intelligent pricing strategies in the UAV-MEC network.
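    The admission/pricing trade-off described above can be stated compactly. A minimal sketch in LaTeX, where all symbols (x_i, p_i, c_i, d_i, D_i, r_i, R) are illustrative assumptions rather than notation from the surveyed papers: the MNO chooses which MD requests to admit and at what price, maximizing its profit subject to each admitted request's latency need and the UAV-MEC capacity.

        \begin{align}
          \max_{x,\,p}\quad & \sum_{i=1}^{N} x_i\,(p_i - c_i)
            && \text{MNO profit over admitted requests}\\
          \text{s.t.}\quad  & d_i(x, p) \le D_i \quad \forall i : x_i = 1
            && \text{per-request latency requirement}\\
                            & \sum_{i=1}^{N} x_i\, r_i \le R
            && \text{UAV-MEC computing capacity}\\
                            & x_i \in \{0, 1\}
            && \text{admission decision}
        \end{align}

    Here p_i is what MD i pays (its cost) and p_i - c_i is the MNO's margin, which makes the cost-profit tension in the abstract explicit.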

    Dynamic Resource Allocation in Industrial Internet of Things (IIoT) using Machine Learning Approaches

    In today's era of rapid smart-equipment development and the Industrial Revolution, the application scenarios for Internet of Things (IoT) technology are expanding widely. The combination of IoT and industrial manufacturing systems gives rise to the Industrial IoT (IIoT). However, because of resource limitations such as computational units and battery capacity in IIoT devices (IIEs), it is crucial to execute computationally intensive tasks efficiently. The dynamic and continuous generation of tasks poses a significant challenge to managing the limited resources in the IIoT environment. This paper proposes a collaborative approach for optimal offloading and resource allocation of highly sensitive industrial IoT tasks. First, the computation-intensive IIoT tasks are transformed into a directed acyclic graph (DAG). Then, task offloading is treated as an optimization problem, taking into account the models of processor resources and energy consumption for the offloading scheme. Lastly, a dynamic resource allocation approach is introduced to allocate computing resources on the edge-cloud server for the execution of computation-intensive tasks. The proposed joint offloading and scheduling (JOS) algorithm builds the DAG and prepares an offloading queue; the queue is designed using collaborative Q-learning-based reinforcement learning, and optimal resources are allocated to JOS for executing the tasks in the queue. A machine learning approach is used to predict and allocate these resources. The paper compares conventional and machine-learning-based resource allocation methods; the machine learning approach performs better in terms of response time, delay, and energy consumption. The results show that energy usage increases with task size and that response time increases with the number of users. Among the algorithms compared, JOS has the lowest waiting time, followed by DQN, while Q-learning performs worst. Based on these findings, the paper recommends adopting the machine learning approach, specifically the JOS algorithm, for joint offloading and resource allocation.
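    A minimal sketch of the JOS pipeline as summarized above, assuming a toy four-task workload: the task set is expressed as a DAG, a topological sort yields the offloading queue, and tabular Q-learning (simplified here to a one-step, bandit-style update) learns a local-versus-edge decision per task. All constants and the delay model are illustrative assumptions, not the paper's.

        import random
        from collections import defaultdict, deque

        # Toy task DAG: task -> successors (hypothetical workload, not the paper's).
        dag = {"t0": ["t1", "t2"], "t1": ["t3"], "t2": ["t3"], "t3": []}
        cycles = {"t0": 4e8, "t1": 2e8, "t2": 3e8, "t3": 1e8}  # CPU cycles per task
        data = {"t0": 5e6, "t1": 5e5, "t2": 8e5, "t3": 2e5}    # bytes shipped if offloaded

        F_LOCAL, F_EDGE, BANDWIDTH = 1e9, 8e9, 5e6             # Hz, Hz, bytes/s (assumed)

        def offloading_queue(dag):
            """Kahn's topological sort: DAG -> ordered offloading queue."""
            indeg = defaultdict(int)
            for u in dag:
                for v in dag[u]:
                    indeg[v] += 1
            q = deque(t for t in dag if indeg[t] == 0)
            order = []
            while q:
                u = q.popleft()
                order.append(u)
                for v in dag[u]:
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        q.append(v)
            return order

        def delay(task, action):
            """Task latency when run locally (action 0) or on the edge (action 1)."""
            if action == 0:
                return cycles[task] / F_LOCAL
            return data[task] / BANDWIDTH + cycles[task] / F_EDGE

        # One-step (bandit-style) Q-learning over (task, action); reward = -delay.
        Q = defaultdict(float)
        alpha, eps = 0.1, 0.2
        queue = offloading_queue(dag)
        for episode in range(500):
            for task in queue:
                a = (random.randint(0, 1) if random.random() < eps
                     else max((0, 1), key=lambda x: Q[(task, x)]))
                Q[(task, a)] += alpha * (-delay(task, a) - Q[(task, a)])

        policy = {t: max((0, 1), key=lambda a: Q[(t, a)]) for t in queue}
        print("offloading queue:", queue)
        print("policy (0=local, 1=edge):", policy)

    In the paper the RL agent is collaborative and resources are predicted by a separate ML model; here both are collapsed into the per-task delay oracle to keep the sketch self-contained.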

    Edge Offloading in Smart Grid

    The energy transition supports the shift towards more sustainable energy alternatives, paving the way towards decentralized smart grids, where energy is generated closer to the point of use. Decentralized smart grids foresee novel data-driven, low-latency applications for improving resilience and responsiveness, such as peer-to-peer energy trading, microgrid control, fault detection, or demand response. However, traditional cloud-based smart grid architectures cannot meet the requirements of these emerging applications, such as low latency and high reliability, so alternative architectures such as edge, fog, or hybrid ones need to be adopted. Moreover, edge offloading can play a pivotal role for next-generation smart grid AI applications because it enables the efficient utilization of computing resources and addresses the challenge of the growing volume of data generated by IoT devices, optimizing response time, energy consumption, and network performance. However, a comprehensive overview of the current state of research is needed to support sound decisions on offloading energy-related applications from cloud to fog or edge, with a focus on smart grid open challenges and potential impacts. In this paper, we delve into smart grid and computational distribution architectures, including edge-fog-cloud models, orchestration architecture, and serverless computing, and analyze the decision-making variables and optimization algorithms used to assess the efficiency of edge offloading. Finally, the work contributes to a comprehensive understanding of edge offloading in the smart grid, providing a SWOT analysis to support decision making.
    Comment: to be submitted to journal
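    The decision-making variables this survey analyzes (latency, energy, network performance) lend themselves to a simple placement rule. A minimal sketch in Python, where the weighted cost model, tier figures, and the example task are illustrative assumptions, not values from the paper: each tier (device, edge, fog, cloud) is scored by a weighted latency-plus-energy cost and the cheapest one wins.

        def placement_cost(cycles, data_bytes, cpu_hz, bw_bps, rtt_s,
                           w_latency=0.7, w_energy=0.3):
            """Weighted latency/energy cost of running a task at one tier.

            Energy is device-side: compute energy for a local run
            (data_bytes == 0), radio energy for shipping the input otherwise.
            """
            COMPUTE_J_PER_CYCLE, TX_J_PER_BYTE = 1e-9, 5e-8  # assumed figures
            latency = rtt_s + data_bytes / bw_bps + cycles / cpu_hz
            if data_bytes == 0:                      # local execution
                energy = cycles * COMPUTE_J_PER_CYCLE
            else:                                    # offloaded execution
                energy = data_bytes * TX_J_PER_BYTE
            return w_latency * latency + w_energy * energy

        def choose_tier(cycles, data_bytes):
            """Pick the cheapest tier for a task: device, edge, fog, or cloud."""
            tiers = {
                # name: (cpu_hz, bw_bps, rtt_s) -- hypothetical figures
                "device": (1e9, float("inf"), 0.0),
                "edge":   (8e9, 1e8, 0.005),
                "fog":    (2e10, 5e7, 0.020),
                "cloud":  (1e11, 2e7, 0.080),
            }
            return min(tiers, key=lambda name: placement_cost(
                cycles, 0 if name == "device" else data_bytes, *tiers[name]))

        # e.g. a fault-detection inference: 5e8 CPU cycles over 200 kB of sensor data
        print(choose_tier(5e8, 2e5))

    Shifting the weights or the round-trip times moves the winning tier, which is exactly the sensitivity the survey's SWOT-style analysis is meant to expose.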

    Towards Autonomous Computer Networks in Support of Critical Systems

    The abstract is in the attachment.

    A computational offloading optimization scheme based on deep reinforcement learning in perceptual network

    Currently, the deep integration of the Internet of Things (IoT) and edge computing has improved the computing capability of the IoT perception layer. Existing offloading techniques for edge computing suffer from rigid, fixed offloading policies. To address this, and drawing on the characteristics of deep reinforcement learning, this paper investigates a computation offloading optimization scheme for the perception layer. The algorithm can adaptively adjust the computational task offloading policy of IoT terminals according to network changes in the perception layer. Experiments show that the algorithm effectively improves the operational efficiency of the IoT perception layer and reduces the average task delay compared with other offloading algorithms.
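    A minimal sketch of the kind of deep-reinforcement-learning offloading policy this abstract describes, assuming PyTorch and a toy environment: the state is the observed network bandwidth of the perception layer, the action is local versus edge execution, and the reward is the negative task delay. The delay model, constants, and one-step training target are illustrative assumptions, not the paper's scheme.

        import random
        import torch
        import torch.nn as nn

        F_LOCAL, F_EDGE, CYCLES, DATA = 1e9, 8e9, 3e8, 1e6  # Hz, Hz, cycles, bytes

        def step(bandwidth_bps, action):
            """Reward = negative delay for one offloading decision."""
            if action == 0:                      # execute locally
                return -CYCLES / F_LOCAL
            return -(DATA / bandwidth_bps + CYCLES / F_EDGE)  # offload to edge

        qnet = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
        opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

        for it in range(2000):
            # The network condition drifts; the policy must adapt to it.
            bw = random.uniform(1e6, 5e7)                     # bytes/second
            s = torch.tensor([[bw / 5e7]])                    # normalized state
            q = qnet(s)
            a = random.randint(0, 1) if random.random() < 0.1 else int(q.argmax())
            r = step(bw, a)
            # One-step (bandit-style) TD target: no successor state, for brevity.
            loss = (q[0, a] - torch.tensor(r)) ** 2
            opt.zero_grad(); loss.backward(); opt.step()

        for bw in (2e6, 2e7, 5e7):                            # slow .. fast links
            s = torch.tensor([[bw / 5e7]])
            print(f"bw={bw:.0e} B/s -> action {int(qnet(s).argmax())} (0=local, 1=edge)")

    With these numbers the slow link should favor local execution and the faster links the edge, illustrating the adaptive policy adjustment the abstract claims.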