5 research outputs found
Hyperparameters and neural architectures in differentially private deep learning
Using machine learning to improve health care has gained popularity. However, most research in machine learning for health has ignored privacy attacks against the models. Differential privacy (DP) is the state-of-the-art concept for protecting individuals' data from privacy attacks. Using optimization algorithms such as DP stochastic gradient descent (DP-SGD), one can train deep learning models under DP guarantees. This thesis analyzes the impact of changes to the hyperparameters and the neural architecture on the privacy/utility tradeoff, the central tradeoff in DP, for models trained on the MIMIC-III dataset. The analyzed hyperparameters are the noise multiplier, clipping bound, and batch size. The experiments examine neural architecture changes to the depth and width of the model, the activation functions, and group normalization. The thesis reports the impact of each change independently of other factors using Bayesian optimization and thus overcomes the limitations of earlier work. For the analyzed models, the utility is more sensitive to changes in the clipping bound than to the other two hyperparameters. Furthermore, the privacy/utility tradeoff does not improve when allowing for more training runtime. Changes to the width and depth of the model have a higher impact than the other modifications of the neural architecture. Finally, the thesis discusses the impact of the findings and the limitations of the experiment design, and recommends directions for future work.
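The three hyperparameters analyzed here all act inside a single DP-SGD update. As a minimal sketch of that step (assuming a toy PyTorch model and placeholder data, not the thesis's MIMIC-III setup):

```python
import torch
from torch import nn

# Minimal DP-SGD update sketch (illustrative only, not the thesis's training code).
# The three analyzed hyperparameters appear explicitly; the toy model stands in
# for the MIMIC-III models.
noise_multiplier = 1.1   # sigma: Gaussian noise scale relative to the clipping bound
clipping_bound = 1.0     # C: maximum L2 norm allowed for each per-example gradient
batch_size = 64          # B: number of examples averaged in one update
learning_rate = 0.1

model = nn.Linear(10, 1)
loss_fn = nn.BCEWithLogitsLoss()

def dp_sgd_step(x_batch, y_batch):
    """One update: clip each per-example gradient to C, sum, add N(0, (sigma*C)^2) noise, average."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(x_batch, y_batch):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        scale = clipping_bound / max(norm, clipping_bound)   # clip: resulting norm <= C
        for s, g in zip(summed, grads):
            s.add_(scale * g)
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.normal(0.0, noise_multiplier * clipping_bound, size=p.shape)
            p -= learning_rate * (s + noise) / batch_size

x_batch = torch.randn(batch_size, 10)
y_batch = torch.randint(0, 2, (batch_size, 1)).float()
dp_sgd_step(x_batch, y_batch)
```

Raising the noise multiplier tightens the privacy guarantee at the cost of utility, while the clipping bound limits how much any single example can influence the update, which is why the abstract singles it out as the most sensitive of the three.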
Individual Privacy Accounting with Gaussian Differential Privacy
Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis. This can be informative as often the individual privacy losses are considerably smaller than those indicated by the DP bounds that are based on considering worst-case bounds at each data access. In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss. This kind of analysis has been carried out for the Rényi differential privacy (RDP) by Feldman and Zrnic (2021), however not yet for the so-called optimal privacy accountants. We make first steps in this direction by providing a careful analysis using the Gaussian differential privacy which gives optimal bounds for the Gaussian mechanism, one of the most versatile DP mechanisms. This approach is based on determining a certain supermartingale for the hockey-stick divergence and on extending the Rényi divergence-based fully adaptive composition results by Feldman and Zrnic (2021). We also consider measuring the individual (ε,δ)-privacy losses using the so-called privacy loss distributions. With the help of the Blackwell theorem, we can then make use of the RDP analysis to construct an approximative individual (ε,δ)-accountant. Comment: 27 pages, 10 figures
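For context on how Gaussian differential privacy yields (ε,δ) bounds: a Gaussian mechanism with L2 sensitivity Δ and noise scale σ is μ-GDP with μ = Δ/σ, and any μ-GDP mechanism satisfies (ε, δ(ε))-DP with δ(ε) = Φ(−ε/μ + μ/2) − e^ε Φ(−ε/μ − μ/2). A small sketch (not the paper's code) evaluating this conversion:

```python
from math import exp
from scipy.stats import norm

def gdp_delta(epsilon: float, mu: float) -> float:
    """delta(epsilon) implied by mu-GDP (Dong, Roth & Su)."""
    return norm.cdf(-epsilon / mu + mu / 2) - exp(epsilon) * norm.cdf(-epsilon / mu - mu / 2)

# Gaussian mechanism with L2 sensitivity Delta and noise scale sigma is (Delta/sigma)-GDP.
sensitivity, sigma = 1.0, 2.0
mu = sensitivity / sigma
print(f"mu = {mu:.2f}, delta at epsilon = 1.0: {gdp_delta(1.0, mu):.2e}")
```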
Individual Privacy Accounting with Gaussian Differential Privacy
Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis. This can be informative as often the individual privacy losses are considerably smaller than those indicated by the DP bounds that are based on considering worst-case bounds at each data access. In order to account for the individual losses in a principled manner, we need a privacy accountant for adaptive compositions of mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss. This kind of analysis has been carried out for the Rényi differential privacy by Feldman and Zrnic (2021), however not yet for the so-called optimal privacy accountants. We make first steps in this direction by providing a careful analysis using the Gaussian differential privacy which gives optimal bounds for the Gaussian mechanism, one of the most versatile DP mechanisms. This approach is based on determining a certain supermartingale for the hockey-stick divergence and on extending the Rényi divergence-based fully adaptive composition results by Feldman and Zrnic (2021). We also consider measuring the individual (ε,δ)-privacy losses using the so-called privacy loss distributions. Using the Blackwell theorem, we can then use the results of Feldman and Zrnic (2021) to construct an approximative individual (ε,δ)-accountant. We also show how to speed up the FFT-based individual DP accounting using the Plancherel theorem. Peer reviewed
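Composition is simple under GDP: a sequence of μ_i-GDP accesses composes to a √(Σ μ_i²)-GDP guarantee, and the fully adaptive analysis above allows each participant to be charged only the μ_i they actually incur. A simplified per-individual bookkeeping sketch with hypothetical per-access values (not the paper's accountant):

```python
import numpy as np
from scipy.stats import norm

def gdp_delta(epsilon, mu):
    """delta(epsilon) implied by mu-GDP, as in the previous sketch."""
    return norm.cdf(-epsilon / mu + mu / 2) - np.exp(epsilon) * norm.cdf(-epsilon / mu - mu / 2)

# Hypothetical per-access GDP parameters: the worst-case accountant charges the full
# mu at every access, while an individual's realised loss at a given access can be
# smaller (e.g. when that individual's gradient is already below the clipping bound).
per_step_mu = {
    "worst case":   [0.5, 0.5, 0.5, 0.5],
    "individual A": [0.5, 0.2, 0.0, 0.4],
    "individual B": [0.1, 0.1, 0.1, 0.1],
}

epsilon = 1.0
for name, mus in per_step_mu.items():
    total_mu = float(np.sqrt(np.sum(np.square(mus))))   # GDP composition: sqrt of sum of squares
    print(f"{name}: total mu = {total_mu:.3f}, delta(eps={epsilon}) = {gdp_delta(epsilon, total_mu):.2e}")
```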
PyVBMC: Efficient Bayesian inference in Python
Peer reviewed
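A minimal usage sketch of PyVBMC, following its documented quickstart pattern (the target density, bounds, and starting point below are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal
from pyvbmc import VBMC

# Toy unnormalised log posterior: a correlated 2-D Gaussian (illustrative target only).
def log_posterior(theta):
    theta = np.ravel(theta)
    return multivariate_normal.logpdf(theta, mean=[0.0, 0.0],
                                      cov=[[1.0, 0.2], [0.2, 1.0]])

x0 = np.zeros(2)                                  # starting point
lb, ub = np.full(2, -10.0), np.full(2, 10.0)      # hard bounds
plb, pub = np.full(2, -3.0), np.full(2, 3.0)      # plausible bounds

vbmc = VBMC(log_posterior, x0, lb, ub, plb, pub)
vp, results = vbmc.optimize()   # vp: variational posterior, results: diagnostics
```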
On the Efficacy of Differentially Private Few-shot Image Classification
There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models. These DP models are typically pretrained on large public datasets and then fine-tuned on private downstream datasets that are relatively large and similar in distribution to the pretraining data. However, in many applications including personalization and federated learning, it is crucial to perform well (i) in the few-shot setting, as obtaining large amounts of labeled data may be problematic; and (ii) on datasets from a wide variety of domains for use in various specialist settings. To understand under which conditions few-shot DP can be effective, we perform an exhaustive set of experiments that reveals how the accuracy and vulnerability to attack of few-shot DP image classification models are affected as the number of shots per class, privacy level, model architecture, downstream dataset, and subset of learnable parameters in the model vary. We show that to achieve DP accuracy on par with non-private models, the shots per class must be increased as the privacy level increases. We also show that learning parameter-efficient FiLM adapters under DP is competitive with learning just the final classifier layer or learning all of the network parameters. Finally, we evaluate DP federated learning systems and establish state-of-the-art performance on the challenging FLAIR benchmark. Peer reviewed
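The parameter-efficient recipe evaluated here (training only a small set of parameters under DP-SGD on top of a frozen pretrained backbone) can be sketched as follows. The sketch trains a DP linear classifier head on precomputed features and uses the Opacus PrivacyEngine as one concrete way to apply DP-SGD; the features and hyperparameter values are placeholders, not the paper's FiLM-adapter pipeline.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Few-shot DP fine-tuning sketch: only a linear head is trained under DP-SGD on
# features from a frozen pretrained backbone (random placeholders here, standing
# in for e.g. ViT/ResNet features).
num_classes, shots_per_class, feat_dim = 10, 5, 512
features = torch.randn(num_classes * shots_per_class, feat_dim)
labels = torch.arange(num_classes).repeat_interleave(shots_per_class)

train_loader = DataLoader(TensorDataset(features, labels), batch_size=25, shuffle=True)
head = nn.Linear(feat_dim, num_classes)               # the only learnable parameters
optimizer = torch.optim.SGD(head.parameters(), lr=0.5)

privacy_engine = PrivacyEngine()
head, optimizer, train_loader = privacy_engine.make_private(
    module=head,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,      # higher noise => stronger privacy, lower accuracy
    max_grad_norm=1.0,         # per-example gradient clipping bound
)

loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(head(x), y).backward()
        optimizer.step()

print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))
```

With only a few shots per class the private dataset is small, so the noise added per update weighs relatively heavily; this is the mechanism behind the paper's observation that more shots per class are needed as the privacy level increases.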