Recently, deep learning as a service (DLaaS) has emerged as a promising way
to facilitate the deployment of deep neural networks (DNNs) for various
purposes. However, using DLaaS also introduces potential privacy leakage from both
clients and cloud servers. This privacy issue has fueled research interest
in privacy-preserving inference of DNN models as a cloud service. In this
paper, we present a practical solution named BAYHENN for secure DNN inference.
It can protect both the client's and the server's privacy at the same time.
The key strategy of our solution is to combine homomorphic encryption and
Bayesian neural networks. Specifically, we use homomorphic encryption to
protect a client's raw data and use Bayesian neural networks to protect the DNN
weights in a cloud server. To verify the effectiveness of our solution, we
conduct experiments on MNIST and a real-life clinical dataset. Our solution
achieves consistent latency reductions on both tasks. In particular, our method
outperforms the best existing method (GAZELLE) by about 5x in terms of
end-to-end latency.

Comment: accepted by IJCAI 2019; camera ready
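
Below is a minimal sketch, not the authors' implementation, of the two ingredients the abstract combines: a Bayesian layer whose weights are resampled from a Gaussian posterior for each query on the server side, and a linear step that, in the real protocol, would be evaluated homomorphically on the client's ciphertext. All names, dimensions, and posterior parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Linear layer whose weights are drawn from a learned Gaussian posterior."""
    def __init__(self, in_dim, out_dim):
        # Hypothetical posterior parameters; in practice these would be learned.
        self.mu = rng.normal(0.0, 0.1, size=(out_dim, in_dim))
        self.rho = np.full((out_dim, in_dim), -3.0)  # softplus(rho) gives the std dev

    def sample_weights(self):
        std = np.log1p(np.exp(self.rho))             # softplus keeps std positive
        return self.mu + std * rng.normal(size=self.mu.shape)

def he_linear(enc_x, W):
    # Placeholder: in the actual protocol this matrix-vector product would be
    # computed homomorphically on the encrypted input; here it is plain numpy.
    return W @ enc_x

layer = BayesianLinear(in_dim=784, out_dim=128)
x = rng.normal(size=784)                             # stands in for an encrypted MNIST input
W = layer.sample_weights()                           # fresh weight sample for this query
h = np.maximum(he_linear(x, W), 0.0)                 # ReLU applied in plaintext
print(h.shape)
```

The sketch only illustrates the division of roles implied by the abstract: homomorphic encryption hides the client's raw input, while the server exposes only sampled weights rather than the exact trained parameters.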