Communication-Efficient Distributed Machine Learning over Strategic Networks: A Two-Layer Game Approach

Abstract

This paper considers a game-theoretic framework for distributed learning over networks in which communication between nodes is costly. In the proposed game, players decide both their learning parameters and the network structure used for communication. The Nash equilibrium characterizes the tradeoff between the local performance and the global agreement of the learned classifiers. We introduce a two-layer algorithm to find the equilibrium. The algorithm features a joint learning process that integrates the iterative learning at each node with the network formation. We show that, in the setting of symmetric networks, our game is equivalent to a generalized potential game. We study the convergence of the proposed algorithm, analyze the network structures determined by our game, and show the improvement in social welfare compared with distributed learning over non-strategic networks. In the case study, we deal with streaming data and use telemonitoring of Parkinson's disease to corroborate the results.

Comment: 20 pages, 9 figures
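To make the two-layer idea concrete, below is a minimal Python sketch of one iteration in the spirit of the abstract: a learning layer in which each node trades off its local loss against disagreement with its current neighbors, and a network-formation layer in which each node keeps a link only if its benefit exceeds the communication cost. All names, the least-squares loss, and the link-utility rule are illustrative assumptions, not the paper's exact algorithm or equilibrium conditions.

```python
import numpy as np

def local_loss_grad(w, X, y):
    # Gradient of a simple least-squares loss on the node's local data
    # (assumed loss; the paper's learning objective may differ).
    return X.T @ (X @ w - y) / len(y)

def two_layer_iteration(W, data, links, link_cost=0.1, mu=1.0, lr=0.05):
    """One hypothetical two-layer step: learning update, then link revision."""
    n = len(W)
    # Layer 1 (learning): each node moves toward its local optimum while
    # being pulled toward the parameters of the nodes it currently links to.
    for i in range(n):
        X, y = data[i]
        neighbors = [j for j in range(n) if links[i][j]]
        pull = sum(W[i] - W[j] for j in neighbors) if neighbors else 0.0
        W[i] = W[i] - lr * (local_loss_grad(W[i], X, y) + mu * pull)
    # Layer 2 (network formation): keep a link only if the disagreement it
    # would reduce outweighs the assumed communication cost.
    for i in range(n):
        for j in range(n):
            if i != j:
                benefit = mu * np.sum((W[i] - W[j]) ** 2)
                links[i][j] = benefit > link_cost
    return W, links
```

The alternation mirrors the joint learning process described above: parameter updates and link decisions are interleaved rather than solved separately.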
