
    Private Learning Implies Online Learning: An Efficient Reduction

    We study the relationship between the notions of differentially private learning and online learning in games. Several recent works have shown that differentially private learning implies online learning, but an open problem of Neel, Roth, and Wu (2018) asks whether this implication is \emph{efficient}. Specifically, does an efficient differentially private learner imply an efficient online learner? In this paper we resolve this open question in the context of pure differential privacy. We derive an efficient black-box reduction from differentially private learning to online learning from expert advice.
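    For context, "online learning from expert advice" refers to learners in the weighted-majority / Hedge family. Below is a minimal illustrative sketch of such a learner; the function name and the binary-label setup are assumptions for illustration, not the paper's reduction.

```python
import numpy as np

def hedge(expert_predictions, outcomes, eta=0.5):
    """Weighted-majority (Hedge) learner over a fixed set of experts.

    expert_predictions: (T, n) array; entry [t, i] is expert i's {0, 1}
        prediction at round t.  outcomes: length-T array of true labels.
    Returns the number of mistakes made by the weighted-majority vote.
    """
    T, n = expert_predictions.shape
    weights = np.ones(n)  # start with uniform weights
    mistakes = 0
    for t in range(T):
        preds = expert_predictions[t]
        # Predict by weighted majority vote over the experts.
        prediction = 1 if weights @ preds >= weights.sum() / 2 else 0
        mistakes += int(prediction != outcomes[t])
        # Exponentially down-weight the experts that erred this round.
        weights *= np.exp(-eta * (preds != outcomes[t]))
    return mistakes
```

    An update of this form guarantees roughly $O(\sqrt{T \log n})$ regret against the best of $n$ experts over $T$ rounds, which is the target model of the paper's reduction.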

    The unstable formula theorem revisited

    We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
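    As a concrete reference point, the Littlestone classes mentioned here are those of finite Littlestone dimension, which for a small finite class can be computed straight from its recursive mistake-tree definition. A minimal sketch (the representation of hypotheses as label tuples and the thresholds example are illustrative, not from the paper):

```python
def ldim(hypotheses, domain):
    """Littlestone dimension of a finite class via its recursive definition:
    Ldim(H) >= d + 1 iff some point x splits H into two nonempty subclasses
    (x labeled 0 / x labeled 1) that each have Ldim >= d.

    hypotheses: collection of label tuples; h[i] is h's label of domain[i].
    """
    distinct = set(hypotheses)
    if len(distinct) <= 1:
        return 0
    best = 0
    for i in range(len(domain)):
        zeros = [h for h in distinct if h[i] == 0]
        ones = [h for h in distinct if h[i] == 1]
        if zeros and ones:
            # The adversary shows domain[i]; whatever the learner predicts,
            # it can force a mistake and recurse on the surviving subclass.
            best = max(best, 1 + min(ldim(zeros, domain), ldim(ones, domain)))
    return best

# Thresholds on a 4-point line: Littlestone dimension 2 (binary search).
domain = [0, 1, 2, 3]
thresholds = [tuple(int(x >= t) for x in domain) for t in range(5)]
print(ldim(thresholds, domain))  # -> 2
```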

    Sample Complexity Bounds on Differentially Private Learning via Communication Complexity

    In this work we analyze the sample complexity of classification by differentially private algorithms. Differential privacy is a strong and well-studied notion of privacy, introduced by Dwork et al. (2006), that ensures the output of an algorithm leaks little information about the data point provided by any single participating individual. The sample complexity of private PAC and agnostic learning was studied in a number of prior works starting with Kasiviswanathan et al. (2008), but several basic questions remain open, most notably whether learning with privacy requires more samples than learning without privacy. We show that the sample complexity of learning with (pure) differential privacy can be arbitrarily higher than the sample complexity of learning without the privacy constraint, or of learning with approximate differential privacy. Our second contribution, and our main tool, is an equivalence between the sample complexity of (pure) differentially private learning of a concept class $C$, denoted $SCDP(C)$, and the randomized one-way communication complexity of the evaluation problem for concepts from $C$. Using this equivalence we prove the following bounds:

    1. $SCDP(C) = \Omega(LDim(C))$, where $LDim(C)$ is Littlestone's (1987) dimension, which characterizes the number of mistakes in the online mistake-bound learning model. Known bounds on $LDim(C)$ then imply that $SCDP(C)$ can be much higher than the VC dimension of $C$.
    2. For any $t$, there exists a class $C$ such that $LDim(C) = 2$ but $SCDP(C) \geq t$.
    3. For any $t$, there exists a class $C$ such that the sample complexity of (pure) $\alpha$-differentially private PAC learning is $\Omega(t/\alpha)$, while the sample complexity of the relaxed $(\alpha,\beta)$-differentially private PAC learning is $O(\log(1/\beta)/\alpha)$. This resolves an open problem of Beimel et al. (2013b).

    Comment: Extended abstract appears in Conference on Learning Theory (COLT) 2014.
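    For context, the standard pure-DP learner whose sample complexity is at issue here is the generic exponential-mechanism learner of Kasiviswanathan et al. (2008), which scores each hypothesis in a finite class by its empirical error. A minimal sketch of that standard technique (the function name and hypothesis representation are illustrative assumptions):

```python
import numpy as np

def exponential_mechanism_learner(hypotheses, sample, epsilon, rng=None):
    """Generic pure epsilon-DP learner for a finite class: the exponential
    mechanism with utility = minus the empirical error count.

    hypotheses: list of callables h(x) -> {0, 1}.
    sample: list of (x, y) labeled examples.
    Changing one example shifts each error count by at most 1 (sensitivity 1),
    so releasing the sampled hypothesis is epsilon-differentially private.
    """
    rng = rng or np.random.default_rng()
    errors = np.array([sum(h(x) != y for x, y in sample) for h in hypotheses],
                      dtype=float)
    # Pr[h] proportional to exp(-epsilon * errors(h) / 2); subtract the
    # minimum error first for numerical stability.
    scores = np.exp(-epsilon * (errors - errors.min()) / 2)
    return hypotheses[rng.choice(len(hypotheses), p=scores / scores.sum())]
```

    This generic learner uses on the order of $\log|C|$ samples; the lower bound via $LDim(C)$ above shows that for pure DP this kind of dependence can sit far above the VC dimension.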

    Do you pay for Privacy in Online learning?

    Online learning, in the mistake-bound model, is one of the most fundamental concepts in learning theory. Differential privacy, meanwhile, is the most widely used statistical notion of privacy in the machine learning community. It is therefore of great interest to understand which learning problems are online learnable under differential privacy. In this paper, we pose the question of whether the two notions are equivalent from a learning perspective, i.e., is privacy for free in the online learning framework?

    Comment: This is an updated version with (i) clearer problem statements, especially in the proposed Theorem 1, and (ii) a clearer discussion of existing work, especially Golowich and Livni (2021). Conference on Learning Theory, PMLR, 2022.
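    To make the mistake-bound model concrete: the classic halving algorithm keeps the version space of hypotheses consistent with the labels seen so far and predicts by majority vote, making at most $\log_2|\mathcal{H}|$ mistakes in the realizable setting. A small illustrative sketch (not from the paper):

```python
def halving_learner(hypotheses, stream):
    """Halving algorithm in the mistake-bound model.

    hypotheses: list of callables h(x) -> {0, 1}, assumed to contain the
        target concept (realizable setting).
    stream: iterable of (x, y) pairs revealed one round at a time.
    Makes at most log2(len(hypotheses)) mistakes on any such stream.
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # Discard every hypothesis inconsistent with the revealed label;
        # after each mistake the version space at least halves.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes
```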

    A Unified Characterization of Private Learnability via Graph Theory

    We provide a unified framework for characterizing pure and approximate differentially private (DP) learnability. The framework uses the language of graph theory: for a concept class $\mathcal{H}$, we define the contradiction graph $G$ of $\mathcal{H}$. Its vertices are realizable datasets, and two datasets $S, S'$ are connected by an edge if they contradict each other (i.e., there is a point $x$ that is labeled differently in $S$ and $S'$). Our main finding is that the combinatorial structure of $G$ is deeply related to learning $\mathcal{H}$ under DP. Learning $\mathcal{H}$ under pure DP is captured by the fractional clique number of $G$; learning $\mathcal{H}$ under approximate DP is captured by the clique number of $G$. Consequently, we identify graph-theoretic dimensions that characterize DP learnability: the clique dimension and the fractional clique dimension. Along the way, we reveal properties of the contradiction graph which may be of independent interest. We also suggest several open questions and directions for future research.
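    For a small finite class, the contradiction graph can be materialized directly from this definition. The sketch below fixes the dataset size and represents hypotheses as callables; both choices, and the function name, are illustrative assumptions rather than the paper's notation.

```python
from itertools import combinations

def contradiction_graph(hypotheses, domain, size):
    """Contradiction graph G of a finite class over a finite domain.

    Vertices: realizable datasets of `size` labeled points, i.e. datasets
    that some h in `hypotheses` labels correctly in full.
    Edges: S ~ S' iff some point x carries different labels in S and S'.
    """
    vertices = set()
    for xs in combinations(domain, size):
        for h in hypotheses:
            vertices.add(tuple((x, h(x)) for x in xs))
    edges = set()
    for s, s2 in combinations(sorted(vertices), 2):
        labels = dict(s)
        if any(x in labels and labels[x] != y for x, y in s2):
            edges.add((s, s2))
    return vertices, edges
```

    The paper's characterization then reads off the fractional clique number (pure DP) and the clique number (approximate DP) of this graph.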