Deep artificial neural networks (ANNs) can represent a wide range of complex
functions. Implementing ANNs on von Neumann computing systems, however, incurs a
high energy cost due to the data-transfer bottleneck between the CPU and memory.
Implementation on neuromorphic systems may help to reduce this energy demand.
Conventional ANNs must first be converted into equivalent spiking neural networks
(SNNs) before they can be deployed on neuromorphic chips. This paper presents a
method for performing this translation. We map the ANN weights to SNN synapses
layer by layer by forming a least-square-error approximation problem at each
layer.
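In symbols (our notation; one plausible formalization rather than the paper's
exact statement), the per-layer problem is

$$\min_{W \in \mathcal{C}} \; \sum_{i} \big\| g(W x_i) - y_i \big\|_2^2,$$

where the $x_i$ are sample inputs to the layer, $y_i = f(W_{\mathrm{ANN}} x_i)$ are
the corresponding ANN activations for activation function $f$, $g$ is the response
function of the chosen SNN neuron, and $\mathcal{C}$ encodes the weight
constraints of the target hardware.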
An optimal set of synapse weights may then be found for a given choice of ANN
activation function and SNN neuron. Using an appropriate constrained solver, we
can generate SNNs compatible with digital, analog, or hybrid chip
architectures. We also present an optimal node-pruning method that allows SNN
layer sizes to be set by the designer. To illustrate this process, we convert three
ANNs, including one convolutional network, to SNNs. In all three cases, a
simple linear program solver was used. The experiments show that the resulting
networks closely agree with the original ANNs and achieve excellent performance on
the evaluation tasks. The networks were also reduced in size with little loss
in task performance.
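
As a rough sketch of the per-neuron fitting step, the following Python fragment
fits one neuron's synapse weights to sample ANN activations under box constraints
using scipy.optimize.linprog. Everything here is illustrative: the function name
fit_neuron_weights and the weight bounds are our own, the L1 objective (which a
standard linear program can minimize directly) stands in for the paper's
least-square-error criterion, and the SNN neuron's response is treated as linear,
whereas the paper accounts for the actual neuron model.

import numpy as np
from scipy.optimize import linprog

def fit_neuron_weights(X, y, w_min=-1.0, w_max=1.0):
    # Choose weights w so that X @ w approximates the target activations y,
    # minimizing sum_i |X[i] . w - y[i]| subject to w_min <= w <= w_max.
    # LP variables: [w (d entries), e (n slack entries)], with e_i bounding
    # the absolute residual of sample i from above.
    n, d = X.shape
    c = np.concatenate([np.zeros(d), np.ones(n)])      # minimize sum of slacks
    A_ub = np.block([[X, -np.eye(n)],                  #  X w - e <=  y
                     [-X, -np.eye(n)]])                # -X w - e <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(w_min, w_max)] * d + [(0.0, None)] * n  # weight limits; slacks >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

# Toy usage: approximate one ReLU unit's activations from sample inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # sample inputs to the layer
y = np.maximum(X @ rng.normal(size=8), 0.0)   # ANN (ReLU) activations to match
w_snn = fit_neuron_weights(X, y)
print("mean absolute error:", np.abs(X @ w_snn - y).mean())

Solving one such problem per neuron, layer by layer, would assemble the full set
of SNN synapse weights; tightening or reshaping the constraint set is what lets
the same procedure target digital, analog, or hybrid chips.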