
    Graph signal sampling via reinforcement learning

    Graph signal sampling is one of the major problems in graph signal processing and arises in a variety of practical applications, such as data compression, image denoising and social network analysis. In this thesis we formulate graph signal sampling as a reinforcement learning (RL) problem, which unleashes powerful methods developed recently within RL. In our approach, signal sampling is carried out by an agent which crawls over the empirical graph and selects the most relevant graph nodes to sample, i.e., determines the corresponding graph signal values. Overall, the goal of the agent is to select signal samples which allow for the smallest graph signal recovery error. The behavior of the sampling agent is described by a policy which determines whether or not a particular node should be sampled. The policy is continuously adjusted, which implies an inherent robustness to changes in the data structure. We propose two RL-based sampling algorithms and evaluate their performance by means of illustrative numerical experiments. We then discuss in detail the strengths and weaknesses of the proposed solutions and explain the phenomena observed during the simulations. Based on the simulation results, we identify some major challenges in applying RL to graph signal sampling and propose possible solutions, leading to prospects for future work.
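
    To make the described setup concrete, the following is a minimal NumPy sketch of the general idea: an agent crawls a graph and learns which nodes to sample so that a smooth graph signal can be recovered with small error. The ring graph, the Tikhonov-style recovery, the reward defined as error reduction, and the tabular Q-learning update are illustrative assumptions; the abstract does not specify the two algorithms actually proposed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Build a small ring graph and a smooth signal on it -----------------
    N = 20
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0   # ring edges
    L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian
    x_true = np.cos(2 * np.pi * np.arange(N) / N)     # smooth (low graph frequency) signal

    def recover(sampled):
        """Tikhonov-style recovery: fit the sampled values, penalise graph roughness."""
        S = np.diag([1.0 if i in sampled else 0.0 for i in range(N)])
        y = S @ x_true                                # observed samples (noise-free here)
        # argmin_x (x - y)^T S (x - y) + 0.5 x^T L x, with a tiny ridge for invertibility
        x_hat = np.linalg.solve(S + 0.5 * L + 1e-6 * np.eye(N), y)
        return np.linalg.norm(x_hat - x_true)

    # --- Tabular Q-learning over "which neighbour to crawl to next" ---------
    Q = np.zeros((N, N))              # Q[node, next_node]; only neighbours are ever chosen
    budget, eps, alpha, gamma = 5, 0.2, 0.1, 0.9

    for episode in range(300):
        node, sampled = int(rng.integers(N)), set()
        err = recover(sampled)
        while len(sampled) < budget:
            nbrs = np.flatnonzero(A[node])
            # epsilon-greedy policy over the neighbours of the current node
            nxt = int(rng.choice(nbrs)) if rng.random() < eps else int(nbrs[np.argmax(Q[node, nbrs])])
            if nxt not in sampled:
                sampled.add(nxt)
                new_err = recover(sampled)
                reward = err - new_err            # reward = reduction in recovery error
                err = new_err
            else:
                reward = -0.01                    # small penalty for revisiting a sampled node
            Q[node, nxt] += alpha * (reward + gamma * Q[nxt].max() - Q[node, nxt])
            node = nxt

    print("final recovery error with learned sampling:", round(float(err), 4))

    The reward signal ties the agent's crawling behaviour directly to the recovery error, which is the objective stated in the abstract; everything else (graph, recovery operator, learning rule) stands in for details given in the thesis itself.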