2 research outputs found

    Vehicle Speed Aware Computing Task Offloading and Resource Allocation Based on Multi-Agent Reinforcement Learning in a Vehicular Edge Computing Network

    For in-vehicle applications, vehicles travelling at different speeds have different delay requirements. However, vehicle speed has not been extensively explored, which may cause a mismatch between a vehicle's speed and its allocated computation and wireless resources. In this paper, we propose a vehicle speed aware task offloading and resource allocation strategy to decrease the energy cost of executing tasks without exceeding the delay constraint. First, we establish a vehicle speed aware delay constraint model based on different speeds and task types. Then, the delay and energy cost of task execution on the vehicular edge computing (VEC) server and on the local terminal are calculated. Next, we formulate a joint optimization of task offloading and resource allocation to minimize the vehicles' energy cost subject to the delay constraints. The multi-agent deep deterministic policy gradient (MADDPG) method is employed to obtain the offloading and resource allocation strategy. Simulation results show that our algorithm achieves superior performance in terms of energy cost and task completion delay.
    Comment: 8 pages, 6 figures, Accepted by IEEE International Conference on Edge Computing 202
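    The abstract describes a per-vehicle trade-off: execute a task locally or offload it to the VEC server, whichever meets the speed-dependent deadline at the lower vehicle-side energy. The sketch below illustrates that decision rule on a toy delay/energy model; all constants (KAPPA, F_LOCAL, F_EDGE, P_TX) and the speed-to-deadline mapping are assumptions for illustration, not the paper's models, and the MADDPG policy that learns the joint strategy is not reproduced here.

```python
import numpy as np

# Hypothetical constants (not from the paper) for a toy delay/energy model.
KAPPA = 1e-27          # effective switched capacitance of the local CPU
F_LOCAL = 1e9          # local CPU frequency (cycles/s)
F_EDGE = 10e9          # VEC-server CPU frequency allocated to this vehicle (cycles/s)
P_TX = 0.5             # uplink transmit power (W)

def delay_budget(speed_mps, base_deadline_s=0.5, v_ref=30.0):
    """Speed-aware deadline: faster vehicles get a tighter delay constraint.
    This mapping is an assumption; the paper defines its own model."""
    return base_deadline_s * min(1.0, v_ref / max(speed_mps, 1e-3))

def local_cost(cycles):
    delay = cycles / F_LOCAL
    energy = KAPPA * (F_LOCAL ** 2) * cycles   # classic CMOS dynamic-energy model
    return delay, energy

def offload_cost(cycles, data_bits, uplink_rate_bps):
    delay = data_bits / uplink_rate_bps + cycles / F_EDGE
    energy = P_TX * (data_bits / uplink_rate_bps)  # vehicle pays only for transmission
    return delay, energy

def choose_action(cycles, data_bits, uplink_rate_bps, speed_mps):
    """Pick the feasible option (deadline met) with the lower vehicle-side energy."""
    deadline = delay_budget(speed_mps)
    d_loc, e_loc = local_cost(cycles)
    d_off, e_off = offload_cost(cycles, data_bits, uplink_rate_bps)
    candidates = [(e, name) for d, e, name in
                  [(d_loc, e_loc, "local"), (d_off, e_off, "offload")] if d <= deadline]
    return min(candidates)[1] if candidates else "infeasible"

print(choose_action(cycles=5e8, data_bits=2e6, uplink_rate_bps=20e6, speed_mps=25))
```

    In the paper itself this choice, together with the computation and wireless resource split, is learned jointly across vehicles by MADDPG rather than computed greedily per task as above.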

    Machine Learning for Intelligent IoT Networks with Edge Computing

    The intelligent Internet of Things (IoT) network is envisioned to be the internet of intelligent things. In this paradigm, billions of end devices with internet connectivity will provide interactive intelligence and revolutionise current wireless communications. In intelligent IoT networks, an unprecedented volume and variety of data is generated, making centralized cloud computing inefficient or even infeasible due to network congestion, resource-limited IoT devices, ultra-low latency applications and spectrum scarcity. Edge computing has been proposed to overcome these issues by pushing centralized communication and computation resources physically and logically closer to data providers and end users. However, compared with a cloud server, an edge server only provides finite computation and spectrum resources, making proper data processing and efficient resource allocation necessary. Machine learning techniques have been developed to solve the dynamic and complex problems and big data analysis in IoT networks. Specifically, Reinforcement Learning (RL) has been widely explored to address dynamic decision-making problems, which motivates the research on machine learning enabled computation offloading and resource management. In this thesis, several original contributions are presented to find solutions and address these challenges.

    First, efficient spectrum and power allocation are investigated for computation offloading in wireless powered IoT networks. The IoT users offload all the collected data to the central server for a better data processing experience. Then a matching theory-based efficient channel allocation algorithm and an RL-based power allocation mechanism are proposed.

    Second, the joint optimization problem of computation offloading and resource allocation is investigated for IoT edge computing networks via machine learning techniques. The IoT users choose to offload the intensive computation tasks to the edge server while keeping simple task execution local. In this case, a centralized user clustering algorithm is first proposed as a pre-step to group the IoT users into different clusters according to user priorities for achieving spectrum allocation. Then the joint computation offloading, computation resource and power allocation for each IoT user is formulated as an RL framework and solved by proposing a deep Q-network based computation offloading algorithm.

    At last, to solve the simultaneous multiuser computation offloading problem, a stochastic game is exploited to formulate the joint problem of the computation offloading mechanism of multiple selfish users and resource (including spectrum, computation and radio access technology resources) allocation as a non-cooperative multiuser computation offloading game. Therefore, a multi-agent RL framework is developed to solve the formulated game by proposing an independent learners based multi-agent Q-learning algorithm.
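    The third contribution formulates simultaneous multiuser offloading as a non-cooperative game solved with an independent learners based multi-agent Q-learning algorithm. The minimal sketch below shows only the independent-learners idea on a toy congestion game: each selfish user keeps its own Q-table, chooses "local" or "offload", and offloading gets costlier as more users offload at once. The cost model and the hyperparameters (N_USERS, ALPHA, GAMMA, EPSILON) are assumptions for illustration; the thesis's actual state space and its spectrum, computation and radio-access-technology resource dimensions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the thesis): N selfish IoT users repeatedly choose
# action 0 = execute locally or 1 = offload; offloading is cheap unless the shared
# edge server / channel becomes congested.
N_USERS, N_ACTIONS, EPISODES = 4, 2, 5000
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Independent learners: each user holds its own Q-table and never sees the others'.
# A single (stateless) state keeps the sketch tiny.
Q = np.zeros((N_USERS, N_ACTIONS))

def reward(actions):
    """Offloading cost grows with the number of simultaneous offloaders (congestion);
    local execution has a fixed cost. Reward is the negative cost."""
    n_offload = int(np.sum(actions))
    costs = np.where(actions == 1, 1.0 + 0.8 * n_offload, 3.0)  # assumed cost model
    return -costs

for _ in range(EPISODES):
    # epsilon-greedy action selection for every agent
    greedy = Q.argmax(axis=1)
    explore = rng.random(N_USERS) < EPSILON
    actions = np.where(explore, rng.integers(N_ACTIONS, size=N_USERS), greedy)
    r = reward(actions)
    # independent Q-learning update (stateless, so the bootstrap term reuses the same row)
    for i in range(N_USERS):
        td_target = r[i] + GAMMA * Q[i].max()
        Q[i, actions[i]] += ALPHA * (td_target - Q[i, actions[i]])

print("Learned greedy actions per user:", Q.argmax(axis=1))
```

    Each agent treats the other users as part of its environment, which is exactly what makes independent learners attractive for a non-cooperative offloading game: no agent needs the others' Q-tables or policies, only the reward it observes.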