With deep learning deployed in many security-sensitive applications, machine
learning security is becoming increasingly important. Recent studies
demonstrate that attackers can use system-level techniques that exploit the
RowHammer vulnerability of DRAM to deterministically and precisely flip bits in
Deep Neural Network (DNN) model weights, degrading inference accuracy. Existing
defense mechanisms are software-based, such as weight reconstruction, which
requires expensive training overhead or incurs performance degradation. On the
other hand, generic hardware-based victim-/aggressor-focused mechanisms impose
expensive hardware overheads while preserving the spatial proximity between
victim and aggressor rows. In this paper, we present DNN-Defender, the first
DRAM-based victim-focused defense mechanism tailored for quantized DNNs, which
leverages in-DRAM swapping to withstand targeted bit-flip attacks. Our results
indicate that DNN-Defender delivers a high level of protection, downgrading the
effectiveness of targeted RowHammer attacks to that of random attacks. In
addition, the proposed defense incurs no accuracy drop on the CIFAR-10 and
ImageNet datasets, without requiring software training or additional hardware
overhead.

Comment: 10 pages, 11 figures