
    Learning Privately with Labeled and Unlabeled Examples


    Statistical Active Learning Algorithms for Noise Tolerance and Differential Privacy

    We describe a framework for designing efficient active learning algorithms that are tolerant to random classification noise and are differentially private. The framework is based on active learning algorithms that are statistical in the sense that they rely on estimates of expectations of functions of filtered random examples. It builds on the powerful statistical query framework of Kearns (1993). We show that any efficient active statistical learning algorithm can be automatically converted to an efficient active learning algorithm that is tolerant to random classification noise as well as other forms of "uncorrelated" noise. The complexity of the resulting algorithms has an information-theoretically optimal quadratic dependence on 1/(1-2η), where η is the noise rate. We show that commonly studied concept classes, including thresholds, rectangles, and linear separators, can be efficiently actively learned in our framework. Combined with our generic conversion, these results lead to the first computationally efficient algorithms for actively learning some of these concept classes in the presence of random classification noise that provide an exponential improvement in the dependence on the error ϵ over their passive counterparts. In addition, we show that our algorithms can be automatically converted to efficient active differentially private algorithms. This leads to the first differentially private active learning algorithms with exponential label savings over the passive case. (Comment: Extended abstract appears in NIPS 2013.)
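    The statistical-query idea above can be illustrated concretely. The sketch below is hypothetical code, not from the paper: it estimates a statistical query of the form E[y·g(x)] from examples whose labels are flipped independently with probability η, then corrects the estimate using the identity E_noisy[y·g(x)] = (1-2η)·E_clean[y·g(x)]. Keeping the same accuracy after dividing by (1-2η) requires roughly 1/(1-2η)² times more samples, which is the quadratic dependence the abstract mentions.

        import numpy as np

        # Hypothetical sketch: estimate a statistical query E[y * g(x)] from
        # examples with random classification noise at rate eta, then correct it.
        # Under random label flips, E_noisy[y * g(x)] = (1 - 2*eta) * E_clean[y * g(x)],
        # so dividing by (1 - 2*eta) recovers the clean value.

        def noisy_sq_estimate(xs, ys_clean, g, eta, rng):
            flips = rng.random(len(ys_clean)) < eta          # flip each label w.p. eta
            ys_noisy = np.where(flips, -ys_clean, ys_clean)
            return np.mean(ys_noisy * g(xs))

        def corrected_estimate(xs, ys_clean, g, eta, rng):
            return noisy_sq_estimate(xs, ys_clean, g, eta, rng) / (1.0 - 2.0 * eta)

        rng = np.random.default_rng(0)
        xs = rng.normal(size=200_000)
        ys = np.sign(xs)                                     # target: threshold at 0
        g = lambda x: np.sign(x - 0.5)                       # query function
        eta = 0.3
        print("clean    :", np.mean(ys * g(xs)))
        print("corrected:", corrected_estimate(xs, ys, g, eta, rng))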

    Characterizing the Sample Complexity of Private Learners

    In 2008, Kasiviswanathan et al. defined private learning as a combination of PAC learning and differential privacy. Informally, a private learner is applied to a collection of labeled individual information and outputs a hypothesis while preserving the privacy of each individual. Kasiviswanathan et al. gave a generic construction of private learners for (finite) concept classes, with sample complexity logarithmic in the size of the concept class. This sample complexity is higher than what is needed for non-private learners, hence leaving open the possibility that the sample complexity of private learning may sometimes be significantly higher than that of non-private learning. We give a combinatorial characterization of the sample size necessary and sufficient to privately learn a class of concepts. This characterization is analogous to the well-known characterization of the sample complexity of non-private learning in terms of the VC dimension of the concept class. We introduce the notion of probabilistic representation of a concept class, and our new complexity measure RepDim corresponds to the size of the smallest probabilistic representation of the concept class. We show that any private learning algorithm for a concept class C with sample complexity m implies RepDim(C) = O(m), and that there exists a private learning algorithm with sample complexity m = O(RepDim(C)). We further demonstrate that a similar characterization holds for the database size needed for privately computing a large class of optimization problems and also for the well-studied problem of private data release.
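    The generic construction of Kasiviswanathan et al. referenced above is commonly realized with the exponential mechanism: privately select the hypothesis whose utility is minus its empirical error. The sketch below is a hypothetical, minimal rendering of that idea (toy threshold class, names of our own choosing), not the paper's code; the selection needs a number of samples logarithmic in |C|, matching the abstract.

        import numpy as np

        # Hypothetical sketch of a generic private learner for a finite concept
        # class: pick a hypothesis via the exponential mechanism with
        # utility = -(number of misclassified examples). Changing one example
        # changes each utility by at most 1 (sensitivity 1), which makes the
        # selection eps-differentially private.

        def private_learn(concepts, xs, ys, eps, rng):
            utilities = np.array([-np.sum(h(xs) != ys) for h in concepts])
            logits = eps * utilities / 2.0                   # exponential mechanism scores
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            return concepts[rng.choice(len(concepts), p=probs)]

        # toy class of one-dimensional thresholds over a finite grid
        thresholds = [(lambda t: (lambda x: (x >= t).astype(int)))(t)
                      for t in np.linspace(0, 1, 50)]
        rng = np.random.default_rng(1)
        xs = rng.random(500)
        ys = (xs >= 0.37).astype(int)
        h = private_learn(thresholds, xs, ys, eps=1.0, rng=rng)
        print("private hypothesis error:", np.mean(h(xs) != ys))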

    A Model of Labeling with Horizontal Differentiation and Cost Variability

    We study optimal disclosure of variety by a multi-product firm with random costs. In our model there are two varieties that are horizontally differentiated and differ in overall quality, but buyers cannot distinguish between them without labels. The equilibrium prices for labeled varieties are increasing functions of the absolute value of the cost differential and do not reveal which variety is cheaper to produce. Nondisclosure is most common when there is moderate uncertainty about the relative input cost, not too much idiosyncrasy in consumer valuations, and not too much difference in quality across varieties. Although mandatory disclosure of variety benefits consumers, it decreases expected welfare when relative input cost variability is large and quality asymmetry is small. The cheaper variety tends to be oversupplied (undersupplied) when disclosure is voluntary (mandatory). Competition among multi-product firms that source inputs in the same upstream market may not lead to more disclosure.

    Keywords: Agribusiness; Agricultural and Food Policy; Food Consumption/Nutrition/Food Safety; Industrial Organization; Marketing; information; labeling; quality disclosure; product differentiation

    Private Semi-supervised Knowledge Transfer for Deep Learning from Noisy Labels

    Deep learning models trained on large-scale data have achieved encouraging performance in many real-world tasks. Meanwhile, publishing those models trained on sensitive datasets, such as medical records, could pose serious privacy concerns. To counter these issues, one of the current state-of-the-art approaches is the Private Aggregation of Teacher Ensembles, or PATE, which achieved promising results in preserving the utility of the model while providing a strong privacy guarantee. PATE combines an ensemble of "teacher models" trained on sensitive data and transfers the knowledge to a "student" model through the noisy aggregation of teachers' votes for labeling unlabeled public data, on which the student model is then trained. However, the voted labels learned by the student are noisy due to the private aggregation. Learning directly from noisy labels can significantly impact the accuracy of the student model. In this paper, we propose the PATE++ mechanism, which combines current advanced noisy-label training mechanisms with the original PATE framework to enhance its accuracy. A novel structure of Generative Adversarial Nets (GANs) is developed in order to integrate them effectively. In addition, we develop a novel noisy-label detection mechanism for semi-supervised model training to further improve student model performance when training with noisy labels. We evaluate our method on Fashion-MNIST and SVHN to show the improvements over the original PATE on all measures.
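    The noisy aggregation step that produces the noisy labels discussed above can be sketched independently of PATE++. The code below is a hypothetical illustration of PATE-style noisy vote aggregation (Laplace noise on the vote histogram, then argmax), not the paper's implementation; a smaller noise scale yields cleaner labels at the cost of a weaker privacy guarantee.

        import numpy as np

        # Hypothetical sketch of PATE's noisy label aggregation: each teacher
        # votes for a class on an unlabeled public example, Laplace noise is
        # added to the vote histogram, and the student trains on the argmax.

        def noisy_aggregate(teacher_votes, num_classes, laplace_scale, rng):
            counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
            counts += rng.laplace(0.0, laplace_scale, size=num_classes)
            return int(np.argmax(counts))

        rng = np.random.default_rng(2)
        num_teachers, num_classes = 250, 10
        # simulate teachers that mostly agree on class 3
        votes = rng.choice(num_classes, size=num_teachers,
                           p=[0.05] * 3 + [0.55] + [0.05] * 6)
        label = noisy_aggregate(votes, num_classes, laplace_scale=20.0, rng=rng)
        print("noisy aggregated label:", label)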

    The Power of Localization for Efficiently Learning Linear Separators with Noise

    We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise, and demonstrate its effectiveness by designing algorithms with improved noise-tolerance guarantees for learning linear separators. We consider both the malicious noise model and the adversarial label noise model. For malicious noise, where the adversary can corrupt both the label and the features, we provide a polynomial-time algorithm for learning linear separators in R^d under isotropic log-concave distributions that can tolerate a nearly information-theoretically optimal noise rate of η = Ω(ϵ). For the adversarial label noise model, where the distribution over the feature vectors is unchanged and the overall probability of a noisy label is constrained to be at most η, we also give a polynomial-time algorithm for learning linear separators in R^d under isotropic log-concave distributions that can handle a noise rate of η = Ω(ϵ). We show that, in the active learning model, our algorithms achieve a label complexity whose dependence on the error parameter ϵ is polylogarithmic. This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise. (Comment: Contains an improved label complexity analysis communicated to us by Steve Hanneke.)
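    The localization idea behind these guarantees can be sketched as margin-based active learning: each round requests labels only for points inside a shrinking band around the current separator and refits there, so label requests concentrate where they are informative. The toy code below uses a hypothetical band schedule and a least-squares surrogate instead of the paper's hinge-loss minimization; it illustrates only the control flow, not the noise-tolerant algorithm itself.

        import numpy as np

        # Toy sketch of localization for learning a linear separator
        # (hypothetical band schedule, not the paper's exact algorithm).

        def fit_halfspace(X, y):
            # crude least-squares surrogate for the paper's hinge-loss step
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            return w / np.linalg.norm(w)

        rng = np.random.default_rng(3)
        d, n = 5, 20_000
        w_star = np.eye(d)[0]                          # true separator
        X = rng.normal(size=(n, d))                    # isotropic Gaussian (log-concave)
        y = np.sign(X @ w_star)

        idx = rng.choice(n, size=200, replace=False)   # initial random labeled sample
        w = fit_halfspace(X[idx], y[idx])
        band = 1.0
        for _ in range(8):
            in_band = np.where(np.abs(X @ w) <= band)[0]
            if len(in_band) == 0:
                break
            take = rng.choice(in_band, size=min(200, len(in_band)), replace=False)
            w = fit_halfspace(X[take], y[take])        # refit inside the band
            band /= 2.0                                # localize further
        print("error:", np.mean(np.sign(X @ w) != y))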