Cooperative Thresholded Lasso for Sparse Linear Bandit

Abstract

We present a novel approach to the multi-agent sparse contextual linear bandit problem, in which the feature vectors have high dimension $d$ while the reward depends on only a small subset of $s_0 \ll d$ features, and learning proceeds under information-sharing constraints. The proposed method employs Lasso regression for dimension reduction, allowing each agent to independently estimate an approximate set of relevant dimensions and share that information with others according to the network's structure. The shared information is then aggregated and redistributed to all agents. Each agent then solves the reduced problem with ridge regression restricted to the extracted dimensions. We present algorithms for both a star-shaped network and a peer-to-peer network. The approaches substantially reduce communication costs while keeping the cumulative regret per agent small. Theoretically, we show that our proposed methods achieve a regret bound of order $\mathcal{O}(s_0 \log d + s_0 \sqrt{T})$ with high probability, where $T$ is the time horizon. To the best of our knowledge, this is the first algorithm to handle row-wise distributed data in sparse linear bandits, achieving performance comparable to state-of-the-art single- and multi-agent methods. Moreover, it is broadly applicable to high-dimensional multi-agent problems in which efficient feature extraction is critical for minimizing regret. To validate the effectiveness of our approach, we present experimental results on both synthetic and real-world datasets.
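To make the pipeline concrete, the following is a minimal sketch in Python of the steps the abstract describes: per-agent Lasso with thresholding for support estimation, aggregation of the shared supports, and ridge regression restricted to the selected dimensions. The threshold, the union-based aggregation rule, the regularization parameters, and the synthetic data are illustrative assumptions rather than values from the paper, and the bandit action-selection loop is omitted.

```python
# Illustrative sketch (not the paper's exact algorithm): per-agent Lasso +
# thresholding for support estimation, aggregation of shared supports,
# then ridge regression restricted to the selected features.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
d, s0, n_agents, n_samples = 100, 5, 4, 200

# Sparse ground-truth parameter shared by all agents (data are row-wise distributed).
theta = np.zeros(d)
theta[rng.choice(d, s0, replace=False)] = rng.uniform(0.5, 1.0, s0)

def agent_data(n):
    X = rng.normal(size=(n, d))
    y = X @ theta + 0.1 * rng.normal(size=n)
    return X, y

# Step 1: each agent runs Lasso on its own rows and thresholds the estimate
# to obtain a local support set (threshold value chosen for illustration).
threshold = 0.05
local_supports = []
for _ in range(n_agents):
    X, y = agent_data(n_samples)
    lasso = Lasso(alpha=0.05).fit(X, y)
    local_supports.append(set(np.flatnonzero(np.abs(lasso.coef_) > threshold)))

# Step 2: aggregate the shared supports (a simple union here; the paper's
# aggregation rule may differ) and broadcast the result to every agent.
global_support = sorted(set().union(*local_supports))

# Step 3: each agent solves the low-dimensional problem with ridge regression
# restricted to the aggregated support.
for i in range(n_agents):
    X, y = agent_data(n_samples)
    ridge = Ridge(alpha=1.0).fit(X[:, global_support], y)
    print(f"agent {i}: ridge fit uses {len(global_support)} of {d} features")
```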
