Selective Pre-training for Private Fine-tuning
Suppose we want to train text prediction models in email clients or word
processors. The models must preserve the privacy of user data and adhere to a
specific fixed size to meet memory and inference time requirements. We
introduce a generic framework to solve this problem. Specifically, we are given
a public dataset $D_{\text{pub}}$ and a private dataset $D_{\text{priv}}$
corresponding to a downstream task $T$. How should we pre-train a fixed-size
model $M$ on $D_{\text{pub}}$ and fine-tune it on $D_{\text{priv}}$ such that
the performance of $M$ with respect to $T$ is maximized and $M$ satisfies
differential privacy with respect to $D_{\text{priv}}$? We show that pre-training
on a {\em subset} of $D_{\text{pub}}$ that brings the public distribution
closer to the private distribution is a crucial ingredient for maximizing the
transfer learning abilities of $M$ after pre-training, especially in regimes
where model sizes are relatively small. Beyond performance improvements, our
framework also shows that with careful pre-training and private fine-tuning,
{\em smaller models} can match the performance of much larger models,
highlighting the promise of differentially private training as a tool for
model compression and efficiency.
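The recipe the abstract describes (select a relevant public subset, pre-train non-privately on it, then fine-tune privately on the task data) can be illustrated as below. This is a minimal, hypothetical sketch rather than the authors' code: the relevance scores, the selection fraction, the Opacus-based DP-SGD stand-in, and all hyperparameters are assumptions for illustration.

```python
# Hedged sketch of selective pre-training followed by private fine-tuning.
# Names and hyperparameters are illustrative, not taken from the paper.
import torch
from torch.utils.data import DataLoader, Subset
from opacus import PrivacyEngine  # DP-SGD stand-in for the private step


def select_subset(pub_dataset, relevance_scores, keep_fraction=0.2):
    """Keep the public examples whose scores suggest they are closest to the
    private distribution (e.g. scores from a small domain classifier)."""
    k = int(len(pub_dataset) * keep_fraction)
    top_idx = torch.topk(torch.as_tensor(relevance_scores), k).indices.tolist()
    return Subset(pub_dataset, top_idx)


def pretrain(model, pub_subset, epochs=1):
    """Standard (non-private) pre-training on the selected public subset."""
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loader = DataLoader(pub_subset, batch_size=64, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()


def private_finetune(model, priv_dataset, epsilon=8.0, delta=1e-5, epochs=3):
    """DP-SGD fine-tuning on the private downstream data."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loader = DataLoader(priv_dataset, batch_size=64, shuffle=True)
    model, opt, loader = PrivacyEngine().make_private_with_epsilon(
        module=model, optimizer=opt, data_loader=loader,
        target_epsilon=epsilon, target_delta=delta,
        epochs=epochs, max_grad_norm=1.0)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
    return model
```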
Private Model Compression via Knowledge Distillation
The soaring demand for intelligent mobile applications calls for deploying
powerful deep neural networks (DNNs) on mobile devices. However, the
outstanding performance of DNNs notoriously relies on increasingly complex
models, which in turn come with computational expense far surpassing mobile
devices' capacity. Worse still, app service providers need to collect and use
large volumes of users' data, which contain sensitive information, to build
these sophisticated DNN models. Directly deploying such models on public
mobile devices presents prohibitive privacy risks. To benefit from on-device
deep learning without the capacity and privacy concerns, we design a private
model compression framework, RONA. Following the knowledge distillation
paradigm, we jointly use hint learning, distillation learning, and self
learning to train a compact and fast neural network. The knowledge distilled
from the cumbersome model is adaptively bounded and carefully perturbed to
enforce differential privacy. We further propose an elegant query sample
selection method to reduce the number of queries and control the privacy loss.
A series of empirical evaluations, as well as an implementation on an Android
mobile device, show that RONA can not only compress cumbersome models
efficiently but also provide a strong privacy guarantee. For example, on SVHN,
under a meaningful differential privacy guarantee, the compact model trained
by RONA obtains a 20$\times$ compression ratio and 19$\times$ speed-up with
merely 0.97% accuracy loss.

Comment: Conference version accepted by AAAI'19.
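As a concrete illustration of the "adaptively bounded and carefully perturbed" distillation signal described above, the sketch below clips the teacher's outputs and adds Gaussian noise before computing a standard soft-label distillation loss. The function name, the clipping bound, the noise scale, and the temperature are all hypothetical; RONA's actual adaptive bounding and query sample selection are more involved.

```python
# Hedged sketch of a privacy-aware distillation loss: bound the teacher's
# logits, perturb them with Gaussian noise, then distill as usual.
import torch
import torch.nn.functional as F


def private_distillation_loss(student_logits, teacher_logits,
                              bound=5.0, sigma=1.0, temperature=4.0):
    # Bound the teacher's knowledge so any single example's influence is
    # limited (sensitivity control), then add noise to enforce DP.
    clipped = torch.clamp(teacher_logits, -bound, bound)
    noisy = clipped + torch.randn_like(clipped) * sigma * bound
    # Standard soft-label distillation against the perturbed teacher outputs.
    t_prob = F.softmax(noisy / temperature, dim=-1)
    s_logprob = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s_logprob, t_prob, reduction="batchmean") * temperature ** 2
```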
Differentially Private Mixture of Generative Neural Networks
Generative models are used in a wide range of applications that build on large
amounts of contextually rich information. Due to possible privacy violations of
the individuals whose data are used to train these models, however, publishing
or sharing generative models is not always viable. In this paper, we present a
novel technique for privately releasing generative models and entire
high-dimensional datasets produced by these models. We model the generator
distribution of the training data with a mixture of generative neural
networks, which are trained together and collectively learn the generator
distribution of a dataset. The data are first divided into clusters using a
novel differentially private kernel $k$-means; each cluster is then given to a
separate generative neural network, such as a Restricted Boltzmann Machine or
a Variational Autoencoder, which is trained only on its own cluster using
differentially private gradient descent. We evaluate our approach on the MNIST
dataset, as well as on call detail records and transit datasets, showing that
it produces realistic synthetic samples, which can also be used to accurately
answer an arbitrary number of counting queries.

Comment: A shorter version of this paper appeared at the 17th IEEE
International Conference on Data Mining (ICDM 2017). This is the full
version, published in IEEE Transactions on Knowledge and Data Engineering
(TKDE).
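The overall pipeline (privately cluster the data, then train one generative network per cluster with differentially private gradient descent) might be organized as in the sketch below. The cluster assignments are assumed to come from the paper's differentially private kernel k-means step, the `make_generative_model` factory and its `training_loss` method are placeholders, and the DP-SGD step is implemented here with Opacus rather than the authors' own code.

```python
# Hedged sketch of the per-cluster DP training stage of the mixture.
# Clustering is assumed to have been done privately beforehand.
import torch
from torch.utils.data import DataLoader, Subset
from opacus import PrivacyEngine


def train_dp_mixture(dataset, cluster_ids, make_generative_model,
                     noise_multiplier=1.1, max_grad_norm=1.0, epochs=5):
    """cluster_ids[i] is the (privately computed) cluster of example i;
    one generative model is trained per cluster with DP-SGD."""
    models = {}
    for c in sorted(set(cluster_ids)):
        idx = [i for i, cid in enumerate(cluster_ids) if cid == c]
        loader = DataLoader(Subset(dataset, idx), batch_size=64, shuffle=True)
        model = make_generative_model()          # e.g. a small VAE (assumed)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        model, opt, loader = PrivacyEngine().make_private(
            module=model, optimizer=opt, data_loader=loader,
            noise_multiplier=noise_multiplier, max_grad_norm=max_grad_norm)
        for _ in range(epochs):
            for (x,) in loader:                  # assumes single-tensor batches
                opt.zero_grad()
                loss = model.training_loss(x)    # assumed model-specific loss (e.g. ELBO)
                loss.backward()
                opt.step()
        models[c] = model
    return models
```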