Accurate deep neural network inference using computational phase-change memory
In-memory computing is a promising non-von Neumann approach for making
energy-efficient deep learning inference hardware. Crossbar arrays of resistive
memory devices can be used to encode the network weights and perform efficient
analog matrix-vector multiplications without intermediate movements of data.
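To make this concrete, here is a minimal, hypothetical sketch (in NumPy, not the authors' code) of how such an analog matrix-vector multiplication can be simulated in software: weights are encoded as device conductances, and Gaussian terms stand in for programming and read noise. The noise model and its magnitudes are illustrative assumptions, not a measured PCM characterization.

```python
import numpy as np

def crossbar_mvm(weights, x, prog_noise_std=0.02, read_noise_std=0.01, rng=None):
    """Simulate y = W @ x on a noisy resistive crossbar.

    weights        : ideal weight matrix (rows = outputs, cols = inputs)
    x              : input activation vector
    prog_noise_std : std. dev. of per-device programming error (assumed)
    read_noise_std : relative std. dev. of per-read output noise (assumed)
    """
    rng = rng or np.random.default_rng()
    # Programming error: each conductance deviates from its target once,
    # at the moment the weight is written to the device.
    programmed = weights + prog_noise_std * rng.standard_normal(weights.shape)
    # The analog accumulation along each column happens in a single step;
    # read noise then perturbs every output current.
    y = programmed @ x
    y += read_noise_std * np.abs(y).mean() * rng.standard_normal(y.shape)
    return y

# Example: a 4x8 layer evaluated with and without device non-idealities.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print("ideal:", W @ x)
print("noisy:", crossbar_mvm(W, x, rng=rng))
```

Injecting noise of this kind into the forward pass during training is one common way to make a network robust to the device non-idealities discussed next.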
However, due to device variability and noise, the network must be trained
in a hardware-aware manner so that transferring the digitally trained weights to the
analog resistive memory devices does not cause a significant loss of
accuracy. Here, we introduce a methodology to train ResNet-type convolutional
neural networks that results in no appreciable accuracy loss when transferring
weights to in-memory computing hardware based on phase-change memory (PCM). We
also propose a compensation technique that exploits the batch normalization
parameters to improve the accuracy retention over time. We achieve a
classification accuracy of 93.7% on the CIFAR-10 dataset and a top-1 accuracy
on the ImageNet benchmark of 71.6% after mapping the trained weights to PCM.
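The batch-normalization compensation is described only at a high level here. One plausible reading, sketched below, assumes that PCM conductance drift shrinks the weights of a layer by roughly a common factor alpha(t), which can then be undone by rescaling the BN statistics instead of reprogramming the devices. The power-law drift exponent `nu` and reference time `t0` are illustrative placeholders, not the authors' calibration procedure.

```python
import numpy as np

def drift_factor(t, t0=1.0, nu=0.05):
    """Assumed multiplicative conductance decay: G(t) = G(t0) * (t / t0)**(-nu)."""
    return (t / t0) ** (-nu)

def compensate_batchnorm(bn_mean, bn_var, alpha):
    """Fold a global weight-decay factor alpha into BN statistics.

    If a layer's pre-BN output drifts from y to alpha * y, then rescaling the
    running mean by alpha and the variance by alpha**2 keeps the normalized
    activation (y - mean) / sqrt(var) invariant to the drift.
    """
    return bn_mean * alpha, bn_var * alpha ** 2

# Example: after one day (86400 s, assuming t0 = 1 s), every conductance has
# decayed by the same factor, and the BN statistics are rescaled to match.
alpha = drift_factor(t=86400.0)
mean, var = np.array([0.3, -0.1]), np.array([1.2, 0.8])
print(compensate_batchnorm(mean, var, alpha))
```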
Our hardware results on CIFAR-10 with ResNet-32 demonstrate an accuracy above
93.5% retained over a one-day period, where each of the 361,722 synaptic
weights of the network is programmed on just two PCM devices organized in a
differential configuration.

Comment: This is a pre-print of an article accepted for publication in Nature Communications.
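As a closing illustration, the differential configuration mentioned above stores each signed weight on a pair of devices as the difference of two non-negative conductances. The sketch below is an assumed encoding (the `g_max` value and scaling rule are hypothetical), not the authors' programming routine.

```python
import numpy as np

def encode_differential(weights, g_max=25e-6):
    """Map signed weights onto a pair of non-negative conductances (siemens)."""
    scale = g_max / np.abs(weights).max()        # use the full conductance range
    g_plus = np.clip(weights, 0, None) * scale   # positive part on one device
    g_minus = np.clip(-weights, 0, None) * scale # negative magnitude on the other
    return g_plus, g_minus, scale

def decode_differential(g_plus, g_minus, scale):
    """Recover the signed weight: w = (g_plus - g_minus) / scale."""
    return (g_plus - g_minus) / scale

w = np.array([0.8, -0.3, 0.0, -1.2])
gp, gm, s = encode_differential(w)
print(decode_differential(gp, gm, s))  # -> [ 0.8 -0.3  0.  -1.2]
```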