No Peek: A Survey of private distributed deep learning
We survey distributed deep learning models for training or inference without
accessing raw data from clients. These methods aim to protect confidential
patterns in data while still allowing servers to train models. The distributed
deep learning methods of federated learning, split learning, and large-batch
stochastic gradient descent are compared, alongside the privacy and security
approaches of differential privacy, homomorphic encryption, oblivious transfer,
and garbled circuits, in the context of neural networks. We study their
benefits, limitations, and trade-offs with regard to computational resources,
data leakage, and communication efficiency, and also share our anticipated
future trends.

Comment: 21 pages
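To make the federated learning setting concrete, the following is a minimal sketch of FedAvg-style training on a linear model: each client takes gradient steps on its own private data, and the server only ever sees model weights, never the raw examples. All function names and hyperparameters here are illustrative assumptions, not the survey's notation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model (squared loss).
    The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server aggregates client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground-truth model for synthetic data
global_w = np.zeros(2)

for _ in range(20):              # communication rounds
    updates, sizes = [], []
    for _ in range(3):           # three clients, each with private local data
        X = rng.normal(size=(32, 2))
        y = X @ true_w
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    global_w = federated_average(updates, sizes)

print(global_w)  # converges toward true_w without the server seeing any data
```

The key design point the survey contrasts across methods is visible here: communication cost scales with model size and round count, while privacy rests on the (limited) assumption that weights leak less than raw data, which motivates combining this with differential privacy or encryption.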