
    Dynamic Resource Allocation in Industrial Internet of Things (IIoT) using Machine Learning Approaches

    In today's era of rapid smart-equipment development and the Industrial Revolution, the application scenarios for Internet of Things (IoT) technology are expanding widely. The combination of IoT and industrial manufacturing systems gives rise to the Industrial IoT (IIoT). However, because IIoT devices (IIEs) are limited in resources such as computational units and battery capacity, it is crucial to execute computationally intensive tasks efficiently. The dynamic and continuous generation of tasks poses a significant challenge to managing the limited resources of the IIoT environment. This paper proposes a collaborative approach for the optimal offloading and resource allocation of highly sensitive industrial IoT tasks. First, the computation-intensive IIoT tasks are transformed into a directed acyclic graph (DAG). Then, task offloading is treated as an optimization problem that accounts for the processor-resource and energy-consumption models of the offloading scheme. Lastly, a dynamic resource allocation approach is introduced to assign computing resources on the edge-cloud server for executing the computation-intensive tasks. The proposed joint offloading and scheduling (JOS) algorithm builds the DAG and prepares an offloading queue. The queue is constructed using collaborative Q-learning-based reinforcement learning, and a machine learning model predicts and allocates the optimal resources for executing the queued tasks. The paper compares conventional and machine-learning-based resource allocation methods; the machine learning approach performs better in terms of response time, delay, and energy consumption. Experiments show that energy usage increases with task size and response time increases with the number of users. Among the algorithms compared, JOS has the lowest waiting time, followed by DQN, while Q-learning performs the worst. Based on these findings, the paper recommends adopting the machine learning approach, specifically the JOS algorithm, for joint offloading and resource allocation.
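
    As a rough illustration of the pipeline this abstract describes, the sketch below builds an offloading queue from a task DAG and applies a tabular Q-learning update to pick an offload target per task. The cost weights, the diamond DAG, and all names here are illustrative assumptions, not the paper's actual JOS models.

        # Hedged sketch: DAG -> offloading queue (Kahn's algorithm), then a
        # tabular Q-learning update picks an offload target per task.
        import random
        from collections import defaultdict, deque

        def offloading_queue(succs, indeg):
            """Topologically sort the task DAG into an offloading queue."""
            ready = deque(t for t, d in indeg.items() if d == 0)
            order = []
            while ready:
                t = ready.popleft()
                order.append(t)
                for s in succs[t]:
                    indeg[s] -= 1
                    if indeg[s] == 0:
                        ready.append(s)
            return order

        ACTIONS = ("local", "edge", "cloud")

        def step_cost(size, action):
            # toy weighted delay+energy cost per unit of task size (assumed numbers)
            delay = {"local": 0.9, "edge": 0.3, "cloud": 0.5}[action]
            energy = {"local": 0.8, "edge": 0.4, "cloud": 0.6}[action]
            return size * (0.5 * delay + 0.5 * energy)

        Q = defaultdict(float)          # Q[(task, action)]
        alpha, epsilon = 0.1, 0.2

        def choose(task):
            if random.random() < epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(task, a)])

        # diamond DAG: a -> b, a -> c, b/c -> d
        succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
        indeg = {"a": 0, "b": 1, "c": 1, "d": 2}
        sizes = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.5}
        for episode in range(200):
            for task in offloading_queue(succs, dict(indeg)):
                a = choose(task)
                reward = -step_cost(sizes[task], a)              # lower cost, higher reward
                Q[(task, a)] += alpha * (reward - Q[(task, a)])  # bandit-style update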

    A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems

    In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. The design of algorithms to decide which applications or tasks should be offloaded, and where to execute them, is therefore crucial. One option that has been gaining momentum lately is the use of Reinforcement Learning (RL) and, in particular, Deep Reinforcement Learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them but not the primary focus. Other surveys have reviewed techniques to solve the computation offloading problem, with RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey in which we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and the time-varying characteristics considered in the analysed scenarios. In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and we analyse which scenarios and techniques are most frequently considered in the literature. Finally, the survey identifies a number of research challenges, future directions and areas for further study.
    Funding: Consejería de Educación de la Junta de Castilla y León and FEDER (VA231P20); Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (projects PID2020-112675RB-C42, PID2021-124463OBI00 and RED2018-102585-T, funded by MCIN/AEI/10.13039/501100011033).
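
    To make the surveyed formulation concrete, here is a minimal sketch of the kind of Markov decision process these papers typically define: a state capturing backlog, channel quality, and server load; a discrete offload action; and a reward penalizing delay and energy. The dynamics and constants are invented for illustration and come from no particular surveyed paper.

        # Assumed toy offloading MDP: not from any specific surveyed paper.
        import random

        class OffloadEnv:
            ACTIONS = ("local", "edge")

            def reset(self):
                self.backlog = 5.0   # Mbits waiting on the device
                self.channel = 1.0   # normalized channel gain (time-varying)
                self.load = 0.5      # edge server utilization
                return (self.backlog, self.channel, self.load)

            def step(self, action):
                task = min(self.backlog, 1.0)
                if action == "local":
                    delay, energy = task / 0.5, task * 1.0       # slow CPU, high energy
                else:
                    tx = task / (2.0 * self.channel)             # transmission delay
                    run = task / (4.0 * (1.0 - self.load))       # load-dependent service
                    delay, energy = tx + run, task * 0.3
                # time-varying arrivals, channel, and server load
                self.backlog = max(0.0, self.backlog - task) + random.uniform(0.5, 1.5)
                self.channel = min(2.0, max(0.2, self.channel + random.gauss(0, 0.2)))
                self.load = min(0.95, max(0.05, self.load + random.gauss(0, 0.1)))
                return (self.backlog, self.channel, self.load), -(delay + 0.5 * energy)

        env = OffloadEnv()
        state = env.reset()
        for _ in range(3):
            state, reward = env.step(random.choice(OffloadEnv.ACTIONS))

    A DRL agent such as DQN would then be trained to map this state to an offloading action, which is the formulation pattern the survey classifies across papers.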

    Mobile Edge Computing

    This is an open access book. It offers comprehensive, self-contained knowledge on Mobile Edge Computing (MEC), which is a very promising technology for achieving intelligence in the next-generation wireless communications and computing networks. The book starts with the basic concepts, key techniques and network architectures of MEC. Then, we present the wide applications of MEC, including edge caching, 6G networks, Internet of Vehicles, and UAVs. In the last part, we present new opportunities when MEC meets blockchain, Artificial Intelligence, and distributed machine learning (e.g., federated learning). We also identify the emerging applications of MEC in pandemic response, the industrial Internet of Things and disaster management. The book allows easy cross-referencing owing to its broad coverage of both the principles and applications of MEC. The book is written for people interested in communications and computer networks at all levels. The primary audience includes senior undergraduates, postgraduates, educators, scientists, researchers, developers, engineers, innovators and research strategists.

    Towards Autonomous Computer Networks in Support of Critical Systems

    The abstract is provided in the attachment.

    Edge Computing for Internet of Things

    The Internet of Things is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and promise to optimise travel, reduce energy usage and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and Cloud storage. It also allows decisions to be made locally without having to wait for Cloud servers to respond.
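
    A minimal sketch of the pattern described above, under an assumed temperature-monitoring scenario: readings are summarized at the edge, only the summary is sent upstream, and an alarm decision is taken locally without waiting for a Cloud round trip.

        # Assumed edge-aggregation example; scenario and threshold are invented.
        from statistics import mean

        def edge_process(window, alarm_threshold=75.0):
            """Summarize a window of raw readings and decide locally."""
            summary = {"n": len(window), "mean": mean(window),
                       "max": max(window), "min": min(window)}
            act_locally = summary["max"] > alarm_threshold  # immediate local decision
            return summary, act_locally

        readings = [68.2, 70.1, 91.4, 69.8, 70.5]           # e.g., temperature samples
        summary, alarm = edge_process(readings)
        # send ~4 numbers upstream instead of the full raw stream
        print(summary, "local alarm:", alarm)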

    Deep Learning Empowered Task Offloading for Mobile Edge Computing in Urban Informatics

    Led by the industrialization of smart cities, numerous interconnected mobile devices and novel applications have emerged in the urban environment, providing great opportunities to realize industrial automation. In this context, autonomous driving is an attractive issue that leverages large amounts of sensory information for smart navigation while posing intensive computation demands on resource-constrained vehicles. Mobile edge computing (MEC) is a potential solution to alleviate the heavy burden on the devices. However, the varying states of multiple edge servers, as well as the variety of vehicular offloading modes, make efficient task offloading a challenge. To cope with this challenge, we adopt a deep Q-learning approach for designing optimal offloading schemes, jointly considering the selection of the target server and the determination of the data transmission mode. Furthermore, we propose an efficient redundant offloading
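
    The joint decision the authors describe (which edge server, which transmission mode) can be pictured as a DQN over a Cartesian-product action space. The sketch below shows only that encoding with an epsilon-greedy selection step; the network shape, state features, and mode names are assumptions, not the paper's implementation.

        # Hedged sketch of a joint (server, mode) action space for a DQN.
        import itertools, random
        import torch
        import torch.nn as nn

        SERVERS = ("edge-0", "edge-1", "edge-2")       # assumed server set
        MODES = ("v2i", "v2v-relay")                   # assumed transmission modes
        ACTIONS = list(itertools.product(SERVERS, MODES))  # joint action space

        q_net = nn.Sequential(                         # state -> Q-value per joint action
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, len(ACTIONS)),
        )

        def select_action(state, epsilon=0.1):
            """Epsilon-greedy over the joint (server, mode) actions."""
            if random.random() < epsilon:
                return random.choice(ACTIONS)
            with torch.no_grad():
                q = q_net(torch.as_tensor(state, dtype=torch.float32))
            return ACTIONS[int(q.argmax())]

        # assumed state: task size, deadline, vehicle speed, three server loads
        state = [2.0, 0.5, 20.0, 0.3, 0.7, 0.4]
        print(select_action(state))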

    Vehicle Speed Aware Computing Task Offloading and Resource Allocation Based on Multi-Agent Reinforcement Learning in a Vehicular Edge Computing Network

    For in-vehicle applications, vehicles travelling at different speeds have different delay requirements. However, vehicle speed has not been extensively explored, which may cause a mismatch between a vehicle's speed and the computation and wireless resources allocated to it. In this paper, we propose a vehicle-speed-aware task offloading and resource allocation strategy to decrease the energy cost of executing tasks without exceeding the delay constraint. First, we establish a vehicle-speed-aware delay constraint model based on different speeds and task types. Then, the delay and energy cost of task execution on the VEC server and the local terminal are calculated. Next, we formulate a joint optimization of task offloading and resource allocation to minimize the vehicles' energy cost subject to delay constraints. The multi-agent deep deterministic policy gradient (MADDPG) method is employed to obtain the offloading and resource allocation strategy. Simulation results show that our algorithm achieves superior performance in terms of energy cost and task completion delay.
    Comment: 8 pages, 6 figures, accepted by IEEE International Conference on Edge Computing 202
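
    A back-of-the-envelope sketch of the speed-aware constraint idea, with assumed formulas and numbers: a faster vehicle dwells in a roadside unit's coverage for less time, so its effective deadline tightens, and offloading is feasible only if transmission plus edge execution fits within that budget.

        # Assumed illustration of a speed-dependent delay budget; not the
        # paper's actual constraint model.
        def delay_budget(coverage_m, speed_mps, app_deadline_s):
            dwell = coverage_m / speed_mps      # time remaining under this RSU
            return min(dwell, app_deadline_s)   # tighter of dwell time and app deadline

        def offload_feasible(task_bits, rate_bps, cycles, edge_hz, budget_s):
            total = task_bits / rate_bps + cycles / edge_hz  # transmit + execute
            return total <= budget_s, total

        budget = delay_budget(coverage_m=400, speed_mps=30, app_deadline_s=0.5)
        ok, t = offload_feasible(task_bits=2e6, rate_bps=20e6, cycles=1e9,
                                 edge_hz=5e9, budget_s=budget)
        print(f"budget={budget:.2f}s, exec={t:.2f}s, feasible={ok}")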

    Computation Offloading and Scheduling in Edge-Fog Cloud Computing

    Resource allocation and task scheduling in the Cloud environment face many challenges, such as time delay, energy consumption, and security. Executing the computation tasks of mobile applications on mobile devices (MDs) requires substantial resources, so the tasks can be offloaded to the Cloud. But the Cloud is far from the MDs, which introduces challenges such as high delay and power consumption. Edge computing, which processes data near the Internet of Things (IoT) devices, has been able to reduce delay to some extent, but at the cost of distance from the Cloud. Fog computing (FC), placed between the sensors and the Cloud, increases speed and reduces energy consumption; thus, FC is suitable for IoT applications. In this article, we review resource allocation and task scheduling methods in Cloud, Edge and Fog environments, including traditional, heuristic, and meta-heuristic approaches. We also categorize the research related to task offloading in Mobile Cloud Computing (MCC), Mobile Edge Computing (MEC), and Mobile Fog Computing (MFC). Our categorization criteria include the issue addressed, the proposed strategy, the objectives, the framework, and the test environment.