On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning
While differential privacy and gradient compression are separately
well-researched topics in machine learning, the study of interaction between
these two topics is still relatively new. We perform a detailed empirical study
on how the Gaussian mechanism for differential privacy and gradient compression
jointly impact test accuracy in deep learning. The existing literature on
gradient compression mostly evaluates compression in the absence of
differential privacy guarantees and demonstrates that sufficiently high
compression rates reduce accuracy. Similarly, the existing literature on
differential privacy evaluates privacy mechanisms in the absence of
compression, and demonstrates that sufficiently strong privacy guarantees
reduce accuracy. In this work, we observe that while gradient compression generally
has a negative impact on test accuracy in non-private training, it can
sometimes improve test accuracy in differentially private training.
Specifically, we observe that when applying aggressive sparsification or rank
reduction to the gradients, test accuracy is less affected by the Gaussian
noise added for differential privacy. We explain these observations through an
analysis of how differential privacy and compression affect the bias and
variance in estimating the average gradient. We follow this study with a
recommendation on how to improve test accuracy in the context of
differentially private deep learning and gradient compression. We evaluate this
proposal and find that it can reduce the negative impact of noise added by
differential privacy mechanisms on test accuracy by up to 24.6%, and reduce the
negative impact of gradient sparsification on test accuracy by up to 15.1%.
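The mechanism the abstract studies can be sketched in a few lines: sparsify the gradient, clip its norm, then add Gaussian noise. This is a minimal NumPy sketch under assumed defaults; the function names (`top_k_sparsify`, `private_compressed_grad`) and parameter values are illustrative, not the paper's actual implementation.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries; zero out the rest."""
    out = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    out[idx] = grad[idx]
    return out

def private_compressed_grad(grad, k, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Sparsify, clip to clip_norm, then add Gaussian noise
    (the Gaussian mechanism for differential privacy)."""
    rng = np.random.default_rng() if rng is None else rng
    g = top_k_sparsify(grad, k)
    # standard DP-SGD-style norm clipping
    g = g / max(1.0, np.linalg.norm(g) / clip_norm)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=g.shape)
    return g + noise
```

The sketch makes the abstract's bias/variance point concrete: sparsification biases the gradient estimate, but since the same Gaussian noise scale is spread over fewer surviving coordinates, the per-coordinate noise impact on the retained entries is relatively smaller.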
Differentially Private Sharpness-Aware Training
Training deep learning models with differential privacy (DP) results in a
degradation of performance. The training dynamics of models trained with DP
differ significantly from those of standard training, yet the geometric
properties of private learning remain largely unexplored. In this
paper, we investigate sharpness, a key factor in achieving better
generalization, in private learning. We show that flat minima can help reduce
the negative effects of per-example gradient clipping and the addition of
Gaussian noise. We then verify the effectiveness of Sharpness-Aware
Minimization (SAM) for seeking flat minima in private learning. However, we
also discover that SAM is detrimental to the privacy budget and computational
time due to its two-step optimization. Thus, we propose a new sharpness-aware
training method that mitigates the privacy-optimization trade-off. Our
experimental results demonstrate that the proposed method improves the
performance of deep learning models with DP, both from scratch and with fine-tuning.
Code is available at https://github.com/jinseongP/DPSAT. Comment: ICML 202
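The two-step optimization the abstract refers to is SAM's structure: one gradient pass to find an adversarial weight perturbation, a second to descend from the perturbed point. A minimal NumPy sketch of a single SAM update, assuming a generic `grad_fn` (not the paper's DPSAT method):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) update.
    Requires TWO gradient evaluations per step, which is why SAM
    doubles the per-step cost (and, under DP, the gradient queries
    charged to the privacy budget) relative to plain SGD."""
    g = grad_fn(w)                                # first pass
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascend toward the sharpest nearby point
    g_sharp = grad_fn(w + eps)                    # second pass, at the perturbed weights
    return w - lr * g_sharp                       # descend using the perturbed gradient
```

The paper's contribution is a sharpness-aware method that avoids paying for both passes; the sketch above only illustrates the baseline trade-off it mitigates.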
On Lightweight Privacy-Preserving Collaborative Learning for IoT Objects
The Internet of Things (IoT) will be a main data generation infrastructure
for achieving better system intelligence. This paper considers the design and
implementation of a practical privacy-preserving collaborative learning scheme,
in which a curious learning coordinator trains a better machine learning model
based on the data samples contributed by a number of IoT objects, while the
confidentiality of the raw forms of the training data is protected against the
coordinator. Existing distributed machine learning and data encryption
approaches incur significant computation and communication overhead, rendering
them ill-suited for resource-constrained IoT objects. We study an approach that
applies independent Gaussian random projection at each IoT object to obfuscate
data and trains a deep neural network at the coordinator based on the projected
data from the IoT objects. This approach introduces light computation overhead
to the IoT objects and moves most workload to the coordinator that can have
sufficient computing resources. Although the independent projections performed
by the IoT objects address the potential collusion between the curious
coordinator and some compromised IoT objects, they significantly increase the
complexity of the projected data. In this paper, we leverage the superior
learning capability of deep learning in capturing sophisticated patterns to
maintain good learning performance. Extensive comparative evaluation shows that
this approach outperforms other lightweight approaches that apply additive
noisification for differential privacy and/or support vector machines for
learning in applications with light data pattern complexity. Comment: 12 pages, IOTDI 201