The Computing in the Network (COIN) paradigm is a promising solution that
leverages unused network resources to perform tasks, thereby meeting the demands
of computation-intensive applications such as the metaverse. In this vein, we
consider the metaverse partial computation offloading problem for multiple
subtasks in a COIN environment to minimise energy consumption and delay while
dynamically adjusting the offloading policy based on the changing computation
resources status. We prove that the problem is NP-hard and thus decompose it into
two subproblems: task splitting problem (TSP) on the user side and task
offloading problem (TOP) on the COIN side. We model the TSP as an ordinal
potential game (OPG) and propose a decentralised algorithm to obtain its Nash
equilibrium (NE). Then, we model the TOP as a Markov decision process (MDP) and
propose a double deep Q-network (DDQN) to solve for the optimal offloading
policy. Unlike the conventional DDQN algorithm, where intelligent agents sample
offloading decisions randomly with a certain probability, our COIN agent
explores the NE of the TSP and the deep neural network. Finally, simulation
results show that our proposed approach allows the COIN agent to update
its policies and make more informed decisions, leading to improved performance
over time compared to the traditional baseline.

Comment: 14 pages, 9 figures