26 research outputs found

    Differentially Private Mixture of Generative Neural Networks

    Generative models are used in a wide range of applications building on large amounts of contextually rich information. Due to possible privacy violations of the individuals whose data is used to train these models, however, publishing or sharing generative models is not always viable. In this paper, we present a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models. We model the generator distribution of the training data with a mixture of k generative neural networks. These are trained together and collectively learn the generator distribution of a dataset. Data is divided into k clusters, using a novel differentially private kernel k-means, and each cluster is then given to a separate generative neural network, such as a Restricted Boltzmann Machine or a Variational Autoencoder, which is trained only on its own cluster using differentially private gradient descent. We evaluate our approach using the MNIST dataset, as well as call detail records and transit datasets, showing that it produces realistic synthetic samples, which can also be used to accurately answer an arbitrary number of counting queries. Comment: A shorter version of this paper appeared at the 17th IEEE International Conference on Data Mining (ICDM 2017). This is the full version, published in IEEE Transactions on Knowledge and Data Engineering (TKDE).
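    To make the pipeline concrete, below is a minimal Python sketch of the partition-then-train structure described in the abstract. Plain k-means from scikit-learn stands in for the paper's differentially private kernel k-means, and each cluster's "generator" is just a Gaussian mean fitted with DP-SGD-style clipped, noised gradients; all function names, data, and hyper-parameters are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def dp_sgd_fit_mean(x, epochs=50, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
        # Fit the mean of a toy Gaussian generator with clipped, noised gradient sums.
        rng = rng if rng is not None else np.random.default_rng(0)
        mu = np.zeros(x.shape[1])
        for _ in range(epochs):
            grads = mu - x                                  # per-example gradient of 0.5*||x - mu||^2
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip)   # clip each example's gradient to L2 norm <= clip
            noise = rng.normal(0.0, noise_mult * clip, size=mu.shape)
            mu -= lr * (grads.sum(axis=0) + noise) / len(x) # noisy average gradient step
        return mu

    rng = np.random.default_rng(0)
    data = rng.normal(size=(300, 5))        # placeholder for the real training data
    k = 3                                   # number of mixture components / clusters

    # Stand-in for the paper's differentially private kernel k-means.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)

    # One small generative model per cluster, trained only on its own cluster.
    means = [dp_sgd_fit_mean(data[labels == i], rng=rng) for i in range(k)]

    # Synthetic data: sample from each cluster's generator and pool the results.
    synthetic = np.vstack([mu + rng.normal(size=(100, data.shape[1])) for mu in means])
    print(synthetic.shape)                  # (300, 5)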

    A Learning-based Declarative Privacy-Preserving Framework for Federated Data Management

    It is challenging to balance privacy and accuracy for federated query processing over multiple private data silos. In this work, we demonstrate an end-to-end workflow for automating an emerging privacy-preserving technique that uses a deep learning model trained with the Differentially-Private Stochastic Gradient Descent (DP-SGD) algorithm to replace portions of the actual data when answering a query. Our novel declarative privacy-preserving workflow allows users to specify "what private information to protect" rather than "how to protect it". Under the hood, the system automatically chooses query-model transformation plans as well as hyper-parameters. At the same time, the workflow also allows human experts to review and tune the selected privacy-preserving mechanism for audit/compliance and optimization purposes.
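    As a rough illustration of the query-model transformation idea, the sketch below answers an aggregate query from the predictions of a model trained with a DP-SGD-style update rather than from the sensitive column itself. The toy schema (features, salary), the linear model, and all hyper-parameters are hypothetical and not drawn from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    features = rng.normal(size=(n, 3))                                              # non-sensitive attributes
    salary = features @ np.array([2.0, -1.0, 0.5]) + rng.normal(0.0, 0.1, size=n)   # sensitive column

    def dp_sgd_linear(X, y, epochs=200, lr=0.05, clip=1.0, noise_mult=1.0):
        # Train a linear model with clipped per-example gradients plus Gaussian noise.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            residual = X @ w - y
            grads = residual[:, None] * X                     # per-example gradient of 0.5*(x.w - y)^2
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip)     # clip each example's contribution
            noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
            w -= lr * (grads.sum(axis=0) + noise) / len(y)
        return w

    w = dp_sgd_linear(features, salary)

    # Original query:    AVG(salary)           -- reads the sensitive column directly
    # Transformed query: AVG(model(features))  -- reads only the DP-trained model's output
    print(f"raw answer: {salary.mean():.3f}   model-based answer: {(features @ w).mean():.3f}")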

    Building and evaluating privacy-preserving data processing systems

    Large-scale data processing raises a number of important challenges, including guaranteeing that collected or published data is not misused, preventing disclosure of sensitive information, and deploying privacy protection frameworks that support usable and scalable services. In this dissertation, we study and build systems geared for privacy-friendly data processing, enabling computational scenarios and applications in which potentially sensitive data can be used to extract useful knowledge, and which would not be possible without such strong privacy guarantees. For instance, we show how to privately and efficiently aggregate data from many sources and large streams, and how to use the aggregates to extract useful statistics and train simple machine learning models. We also present a novel technique for privately releasing generative machine learning models and entire high-dimensional datasets produced by these models. Finally, we demonstrate that the data used by participants in training generative and collaborative learning models may be vulnerable to inference attacks, and we discuss possible mitigation strategies.
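    For the private aggregation mentioned above, a generic example is the Laplace mechanism applied to a bounded sum; the sketch below shows that standard differential-privacy building block only, not the dissertation's specific aggregation protocols, and the bound and epsilon are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(2)
    contributions = rng.uniform(0.0, 1.0, size=1000)  # one bounded value per source, in [0, 1]

    epsilon = 0.5
    sensitivity = 1.0                                 # one source changes the sum by at most 1
    noisy_sum = contributions.sum() + rng.laplace(0.0, sensitivity / epsilon)

    print(f"true sum: {contributions.sum():.1f}   DP sum (eps={epsilon}): {noisy_sum:.1f}")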

    Secure and Private Federated Learning at Large Scale

    We present novel techniques to forward the goal of secure and private machine learning. The widespread use of machine learning poses a serious privacy risk to the data used to train models. Data owners are forced to trust that aggregators will keep their data secure and that released models will maintain their privacy. The works presented in this thesis strive to solve both problems through approaches based on secure multiparty computation and differential privacy. The novel FLDP protocol leverages the learning with errors (LWE) problem to mask model updates and implements an efficient secure aggregation protocol, which easily scales to large models. Continuing in the vein of scalable secure aggregation, the SHARD protocol uses a multi-layered secret sharing scheme to perform efficient secure aggregation over very large federations. Together, these protocols allow a federation to train models without requiring data owners to trust an aggregator. To ensure the privacy of trained models, we propose immediate sensitivity, a framework for reducing the efficacy of membership inference attacks against neural networks. Immediate sensitivity uses a differential-privacy-inspired additive noise mechanism to privatize model updates during training. By determining the scale of the noise through the gradient of the gradient, immediate sensitivity trains more accurate models than the differentially private gradient clipping approach. Each of these works is supported by extensive experimental evaluation.
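    As a rough sketch of the secure-aggregation idea these protocols build on, the example below uses classic pairwise cancelling masks so the aggregator only learns the sum of the clients' updates. This is only a stand-in illustration: FLDP itself derives its masks from the LWE problem and SHARD uses multi-layered secret sharing, neither of which is shown here.

    import numpy as np

    rng = np.random.default_rng(3)
    n_clients, dim = 4, 6
    updates = rng.normal(size=(n_clients, dim))       # each client's model update

    # Pairwise masks: for i < j, client i adds +m_ij and client j subtracts it.
    masks = {(i, j): rng.normal(size=dim)
             for i in range(n_clients) for j in range(i + 1, n_clients)}

    masked = []
    for i in range(n_clients):
        m = updates[i].copy()
        for j in range(n_clients):
            if i < j:
                m += masks[(i, j)]
            elif j < i:
                m -= masks[(j, i)]
        masked.append(m)                              # the aggregator only ever sees these

    aggregate = np.sum(masked, axis=0)                # the masks cancel in the sum
    assert np.allclose(aggregate, updates.sum(axis=0))
    print(aggregate)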