Compute-in-Memory platforms such as memristive crossbars are gaining prominence because they enable the acceleration of Deep Neural Networks (DNNs) with high area- and compute-efficiency. However, the intrinsic non-idealities associated with the analog nature of computing in crossbars limit the performance of the deployed DNNs. Furthermore, DNNs have been shown to be vulnerable to adversarial attacks, leading to severe security threats in their large-scale deployment. Thus,
finding adversarially robust DNN architectures for non-ideal crossbars is
critical to the safe and secure deployment of DNNs on the edge. This work
proposes a two-phase algorithm-hardware co-optimization approach called
XploreNAS that searches for hardware-efficient & adversarially robust neural
architectures for non-ideal crossbar platforms. We use the one-shot Neural
Architecture Search (NAS) approach to train a large Supernet with
crossbar-awareness and sample adversarially robust Subnets therefrom while maintaining competitive hardware-efficiency. Our experiments on crossbars with benchmark datasets (SVHN, CIFAR10 & CIFAR100) show up to ~8-16% improvement in the adversarial robustness of the searched Subnets over a baseline ResNet-18
model subjected to crossbar-aware adversarial training. We benchmark our robust
Subnets for Energy-Delay-Area-Products (EDAPs) using the NeuroSim tool and find that, with additional hardware-efficiency-driven optimizations, the Subnets attain ~1.5-1.6x lower EDAPs than the ResNet-18 baseline.
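
To make the two-phase idea concrete, below is a minimal, hypothetical PyTorch sketch of a crossbar-aware one-shot NAS loop. It is not the paper's implementation: the names (CrossbarConv, SuperLayer, Supernet, fgsm) and all hyperparameters are illustrative assumptions. Crossbar non-idealities are approximated as multiplicative Gaussian weight noise, adversarial training uses single-step FGSM, and the Subnet is sampled by keeping the candidate op with the largest learned architecture weight in each layer.

```python
# Hypothetical sketch of crossbar-aware one-shot NAS with adversarial training.
# Names and hyperparameters are illustrative, not taken from XploreNAS itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossbarConv(nn.Conv2d):
    """Convolution with multiplicative Gaussian weight noise during training,
    a common first-order model of analog crossbar conductance variations."""
    def __init__(self, *args, noise_std=0.1, **kwargs):
        super().__init__(*args, **kwargs)
        self.noise_std = noise_std

    def forward(self, x):
        w = self.weight
        if self.training:
            w = w * (1 + self.noise_std * torch.randn_like(w))
        return F.conv2d(x, w, self.bias, self.stride, self.padding)

class SuperLayer(nn.Module):
    """One Supernet layer: a softmax-weighted mixture of candidate ops
    (here, 3x3 vs. 5x5 crossbar-aware convolutions)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.ops = nn.ModuleList([
            CrossbarConv(c_in, c_out, 3, padding=1),
            CrossbarConv(c_in, c_out, 5, padding=2),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # arch weights

    def forward(self, x):
        mix = torch.softmax(self.alpha, dim=0)
        return sum(a * op(x) for a, op in zip(mix, self.ops))

class Supernet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.ModuleList([SuperLayer(3, 16), SuperLayer(16, 32)])
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        for layer in self.features:
            x = F.relu(layer(x))
        return self.head(x.mean(dim=(2, 3)))  # global average pooling

def fgsm(model, x, y, eps=8 / 255):
    """Single-step FGSM adversarial example (crossbar noise active in train mode)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

# Phase 1: crossbar-aware adversarial training of the Supernet (dummy batch).
model = Supernet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 3, 32, 32)          # stand-in for a CIFAR10 batch
y = torch.randint(0, 10, (8,))
for step in range(2):
    x_adv = fgsm(model, x, y)
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: sample a Subnet by keeping the strongest candidate op per layer.
subnet = [layer.alpha.argmax().item() for layer in model.features]
print("Sampled Subnet op indices:", subnet)
```

In the actual XploreNAS flow, the search space, non-ideality model, and hardware-efficiency objectives are substantially richer than this toy example; the sketch only illustrates how Supernet training with weight perturbations and adversarial examples can precede Subnet sampling.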