We present a novel approach to the multi-agent sparse contextual linear
bandit problem, in which the feature vectors are high-dimensional (dimension d)
while the reward function depends on only a small subset of s_0 ≪ d features.
Moreover, learning proceeds under
information-sharing constraints. The proposed method employs Lasso regression
for dimension reduction, allowing each agent to independently estimate an
approximate set of relevant dimensions and share that information with others
according to the network's structure. The shared information is then aggregated
through a dedicated process and distributed to all agents. Each agent then solves
the reduced problem with ridge regression restricted to the extracted dimensions.
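As a rough illustration, the pipeline just described (local Lasso support estimation, aggregation of the shared supports, then ridge regression on the selected dimensions) can be sketched in plain NumPy. The problem sizes, the ISTA Lasso solver, the majority-vote aggregation rule, and all thresholds below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, s0, n_agents, n = 100, 5, 4, 60   # hypothetical sizes for illustration
theta = np.zeros(d)
theta[:s0] = 1.0                     # reward depends on only s0 << d features

def lasso_ista(X, y, lam, n_iter=500):
    """Lasso via iterative soft-thresholding (ISTA); a stand-in solver."""
    n_samples, dim = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n_samples  # Lipschitz constant of the gradient
    beta = np.zeros(dim)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n_samples
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

# Step 1: each agent estimates a support set from its own rows of data.
supports = []
for _ in range(n_agents):
    X = rng.normal(size=(n, d))
    y = X @ theta + 0.1 * rng.normal(size=n)
    beta = lasso_ista(X, y, lam=0.05)
    supports.append(set(np.flatnonzero(np.abs(beta) > 0.1)))

# Step 2: aggregate the shared supports (majority vote, one possible rule).
votes = {}
for s in supports:
    for i in s:
        votes[i] = votes.get(i, 0) + 1
S = sorted(i for i, c in votes.items() if c > n_agents // 2)

# Step 3: each agent solves a ridge regression restricted to dimensions in S.
X = rng.normal(size=(n, d))
y = X @ theta + 0.1 * rng.normal(size=n)
Xs = X[:, S]
theta_S = np.linalg.solve(Xs.T @ Xs + 1.0 * np.eye(len(S)), Xs.T @ y)
```

Working only with the aggregated support S keeps each agent's final estimation problem low-dimensional, which is the source of the communication and regret savings claimed below.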
We present algorithms for both a star-shaped network and a peer-to-peer
network. The approaches effectively reduce communication costs while ensuring
minimal cumulative regret per agent. Theoretically, we show that our proposed
methods have a regret bound of order O(s_0 log d + s_0 √T)
with high probability, where T is the time horizon. To the best of our
knowledge, this is the first algorithm that tackles row-wise distributed data
in sparse linear bandits, achieving performance comparable to state-of-the-art
single-agent and multi-agent methods. Moreover, it is broadly applicable to
high-dimensional multi-agent problems where efficient feature extraction is
critical for minimizing regret. To validate the effectiveness of our approach,
we present experimental results on both synthetic and real-world datasets.