3 research outputs found

    Tight Bounds for Collaborative PAC Learning via Multiplicative Weights

    We study the collaborative PAC learning problem recently proposed in Blum et al.~\cite{BHPQ17}, in which we have $k$ players who want to learn a target function collaboratively, such that the learned function approximates the target function well on all players' distributions simultaneously. The quality of a collaborative learning algorithm is measured by the ratio between its sample complexity and that of the learning algorithm for a single distribution (called the overhead). We obtain a collaborative learning algorithm with overhead $O(\ln k)$, improving the one with overhead $O(\ln^2 k)$ in \cite{BHPQ17}. We also show that an $\Omega(\ln k)$ overhead is inevitable when $k$ is polynomially bounded by the VC dimension of the hypothesis class. Finally, our experimental study demonstrates the superiority of our algorithm over the one in Blum et al. on real-world datasets.
    Comment: Accepted to NIPS 2018. 14 pages
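
The abstract states the $O(\ln k)$ result but does not include the algorithm. A toy multiplicative-weights sketch of the general idea is below; all names, the threshold base learner, the batch size, and the $(1+\eta)$ reweighting rule are illustrative assumptions for a runnable demo, not the paper's actual algorithm or its sample-complexity analysis.

```python
import random

def learn_threshold(batch):
    # Toy base learner: pick the threshold with the fewest training mistakes.
    xs = sorted(x for x, _ in batch)
    best_t, best_err = 0.0, len(batch) + 1
    for t in [0.0] + xs:
        err = sum(int(x >= t) != y for x, y in batch)
        if err < best_err:
            best_t, best_err = t, err
    return lambda x, t=best_t: int(x >= t)

def estimate_error(h, player, trials=200):
    # Monte-Carlo estimate of h's error on one player's distribution.
    return sum(h(x) != y for x, y in (player() for _ in range(trials))) / trials

def collaborative_learn(players, eps=0.1, rounds=10, batch_size=50, eta=0.5):
    # Multiplicative-weights idea: keep one weight per player, draw samples
    # from the weighted mixture of player distributions, and boost the
    # weights of players on whom the current hypothesis still errs.
    k = len(players)
    w = [1.0] * k
    h = None
    for _ in range(rounds):
        batch = []
        for _ in range(batch_size):
            i = random.choices(range(k), weights=w)[0]
            batch.append(players[i]())
        h = learn_threshold(batch)
        errs = [estimate_error(h, p) for p in players]
        if max(errs) <= eps:  # hypothesis is good for every player at once
            break
        for i, e in enumerate(errs):
            if e > eps:
                w[i] *= 1.0 + eta
    return h
```

Players whose distributions are hard for the current hypothesis gain weight, so later batches concentrate samples where they are needed; this is the mechanism behind the logarithmic overhead in $k$.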

    One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning

    In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents' incentives into account when allocating individual resources for communal learning in order to maintain such collaborations. Inspired by game-theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. Our stable and envy-free equilibria capture notions of collaboration in the presence of agents interested in meeting their learning objectives while keeping their own sample collection burden low. For example, in an envy-free equilibrium, no agent would wish to swap their sampling burden with that of any other agent, and in a stable equilibrium, no agent would wish to unilaterally reduce their sampling burden. In addition to formalizing this framework, our contributions include characterizing the structural properties of such equilibria, proving when they exist, and showing how they can be computed. Furthermore, we compare the sample complexity of incentive-aware collaboration with that of optimal collaboration when one ignores agents' incentives.
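
The envy-freeness condition quoted above can be checked mechanically on a toy instance. The sketch below is an assumption-laden simplification: `burdens` and the `acceptable` predicate (whether an agent's learning objective is still met at a given burden) stand in for the paper's accuracy constraints and are not its formal model.

```python
def is_envy_free(burdens, acceptable):
    # Toy check of envy-freeness: agent i envies agent j if j's strictly
    # smaller sampling burden would still meet i's learning objective,
    # i.e. i would rather swap burdens with j.
    # burdens[i]       -- number of samples agent i collects
    # acceptable(i, b) -- True if agent i's objective is met at burden b
    #                     (a stand-in for the paper's accuracy constraint)
    n = len(burdens)
    for i in range(n):
        for j in range(n):
            if j != i and burdens[j] < burdens[i] and acceptable(i, burdens[j]):
                return False  # agent i envies agent j
    return True
```

Under this simplification, an allocation stops being envy-free exactly when some agent could meet its objective with another agent's smaller burden.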

    Collaborative Top Distribution Identifications with Limited Interaction

    We consider the following problem in this paper: given a set of $n$ distributions, find the top-$m$ ones with the largest means. This problem is also called {\em top-$m$ arm identification} in the reinforcement learning literature, and has numerous applications. We study the problem in the collaborative learning model, where we have multiple agents who can draw samples from the $n$ distributions in parallel. Our goal is to characterize the tradeoffs between the running time of the learning process and the number of rounds of interaction between agents, which are very expensive in various scenarios. We give optimal time-round tradeoffs, and demonstrate complexity separations between top-$1$ arm identification and top-$m$ arm identification for general $m$, and between the fixed-time and fixed-confidence variants. As a byproduct, we also give an algorithm for selecting the distribution with the $m$-th largest mean in the collaborative learning model.
    Comment: Accepted for presentation at FOCS 202
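
To make the collaborative model concrete, here is a minimal one-round sketch: arms are partitioned across agents, each agent samples its share in parallel, and a single round of communication merges the empirical means. The function name, the round-robin partition, and the fixed per-arm budget are illustrative assumptions; the paper's algorithms adapt the budget across multiple, carefully limited rounds to achieve the stated tradeoffs.

```python
import random

def collab_top_m(arms, m, agents=2, pulls=200):
    # Partition arms across agents round-robin; each agent samples its
    # share "in parallel", then one communication round merges the
    # empirical means and keeps the m largest.
    means = {}
    for a in range(agents):
        for i in range(a, len(arms), agents):  # agent a's slice of arms
            means[i] = sum(arms[i]() for _ in range(pulls)) / pulls
    return sorted(means, key=means.get, reverse=True)[:m]
```

With only one round, the sample budget cannot adapt to how close the means are; the time-round tradeoff in the abstract quantifies how much adaptivity (more rounds) buys.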