Deep neural networks (DNNs) are widely used in image processing, object
detection, and video analysis tasks, and they must be implemented on hardware
accelerators to achieve practical speed. Logic locking is one of the most
popular methods for preventing chip counterfeiting. However, to resist the
powerful satisfiability (SAT) attack, existing logic-locking schemes have to
sacrifice the number of input patterns that produce wrong outputs under
incorrect keys. Furthermore, DNN model inference is inherently fault-tolerant.
Hence, using a wrong key with these SAT-resistant logic-locking schemes may not
affect DNN accuracy, which makes previous SAT-resistant logic locking
ineffective for protecting DNN accelerators. In addition, to prevent DNN models
from being used illegally, designers need to obfuscate the models before
providing them to end users. Previous obfuscation methods either require a long
time to retrain the model or leak information about the model. This paper
proposes a joint protection scheme for DNN hardware accelerators and models.
The DNN accelerator is modified using a hardware key (Hkey) and a model key
(Mkey). Unlike previous logic locking, the Hkey, which protects the
accelerator, does not affect the output even when it is wrong. As a result, the
SAT attack can be effectively resisted. On the other hand, a wrong Hkey leads
to a substantial increase in memory accesses, inference time, and energy
consumption, making the accelerator unusable. A correct Mkey can
recover the DNN model that is obfuscated by the proposed method. Compared to
previous model obfuscation schemes, our proposed method avoids model retraining
and does not leak model information.
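
To make the role of the Mkey concrete, the following minimal Python sketch is a
hypothetical illustration only, not the paper's actual obfuscation scheme: the
permutation-based obfuscate/recover functions and the key values are assumptions.
It shows how a key-derived, invertible transform of the weights can be undone
exactly with the correct key (so no retraining is needed), while a wrong key
leaves the model scrambled.

    # Hypothetical sketch: key-derived weight permutation, not the paper's scheme.
    import numpy as np

    def key_permutation(key: int, n: int) -> np.ndarray:
        # Derive a deterministic permutation of n weight indices from a key.
        return np.random.default_rng(key).permutation(n)

    def obfuscate(weights: np.ndarray, mkey: int) -> np.ndarray:
        # Shuffle the flattened weights with a key-derived permutation.
        flat = weights.ravel()
        return flat[key_permutation(mkey, flat.size)].reshape(weights.shape)

    def recover(obf_weights: np.ndarray, mkey: int) -> np.ndarray:
        # Invert the permutation; only the correct Mkey restores the model,
        # so no retraining is required.
        flat = obf_weights.ravel()
        perm = key_permutation(mkey, flat.size)
        restored = np.empty_like(flat)
        restored[perm] = flat
        return restored.reshape(obf_weights.shape)

    if __name__ == "__main__":
        w = np.random.randn(64, 64).astype(np.float32)
        obf = obfuscate(w, mkey=0xC0FFEE)
        assert np.allclose(recover(obf, mkey=0xC0FFEE), w)      # correct key: exact recovery
        assert not np.allclose(recover(obf, mkey=0xBADBAD), w)  # wrong key: scrambled weights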