Executing deep neural network inference on server-class or cloud backends using
data generated at the edge of the Internet of Things is desirable, primarily
because of the limited compute power of edge devices and the need to protect
the confidentiality of the inference neural networks. However, such a remote
inference scheme raises concerns about the privacy of the inference data that
the edge devices transmit to the curious backend. This paper
presents a lightweight and unobtrusive approach to obfuscate the inference data
at the edge devices. It is lightweight in that the edge device only needs to
execute a small-scale neural network; it is unobtrusive in that the edge device
does not need to indicate whether obfuscation is applied. Extensive evaluation
in three case studies, namely free spoken digit recognition, handwritten digit
recognition, and American Sign Language recognition, shows that our approach
effectively protects the confidentiality of the raw forms of the inference data
while preserving the backend's inference accuracy.

Comment: This paper has been accepted by the IEEE Internet of Things Journal,
Special Issue on Artificial Intelligence Powered Edge Computing for Internet
of Things.