Energy-efficient architectures for brain-inspired computing have become an active area of research, driven by recent advances in neuroscience. Spiking neural networks (SNNs) are a class of artificial neural networks in which information is encoded in discrete spike events, closely resembling the biological brain. The Liquid State Machine (LSM) is a computational model developed in theoretical neuroscience to describe information processing in recurrent neural circuits, and it can be used to model recurrent SNNs. An LSM is composed of input, reservoir, and output layers.
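As a minimal software sketch of this three-layer structure (with hypothetical sizes and parameters, not the hardware design developed in this work), a reservoir of leaky integrate-and-fire neurons could be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 16 input channels, 128 reservoir neurons, 100 time steps.
N_IN, N_RES, T = 16, 128, 100

W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))     # input -> reservoir weights
W_res = rng.normal(0.0, 0.1, (N_RES, N_RES))   # recurrent reservoir weights
W_res *= rng.random((N_RES, N_RES)) < 0.1      # keep ~10% of connections (sparse)

def run_reservoir(in_spikes, v_th=1.0, leak=0.9):
    """Simulate leaky integrate-and-fire reservoir neurons driven by binary
    input spike trains; returns the reservoir spike raster."""
    v = np.zeros(N_RES)
    spikes = np.zeros((T, N_RES))
    for t in range(T):
        # Leak the membrane, then integrate feed-forward and recurrent spikes.
        v = leak * v + W_in @ in_spikes[t] + W_res @ spikes[t - 1]
        spikes[t] = (v >= v_th).astype(float)
        v[spikes[t] > 0] = 0.0                 # reset membrane after a spike
    return spikes

in_spikes = (rng.random((T, N_IN)) < 0.05).astype(float)  # random input raster
res_spikes = run_reservoir(in_spikes)          # features for a trained readout
```

The reservoir's spike raster then serves as the feature representation from which the output (readout) layer is trained.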
A major challenge in SNNs is training the network with discrete spike events, to which traditional loss functions and optimization techniques cannot be applied directly. Spike-Timing-Dependent Plasticity (STDP) is an unsupervised learning algorithm that updates synaptic weights based on the time difference between the spikes of presynaptic and postsynaptic neurons. STDP is a localized learning algorithm and induces self-organizing behavior that results in sparse network structures, making it a suitable choice for low-cost hardware implementation.
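For reference, a widely used form of the STDP weight update is the exponential window below, where a presynaptic spike followed by a postsynaptic spike potentiates the synapse and the reverse order depresses it; the hardware-friendly rules proposed in this work are simplifications of such a window, and the constants $A_{\pm}$, $\tau_{\pm}$ are illustrative:

```latex
\[
\Delta w =
\begin{cases}
A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad \text{(potentiation)} \\
-A_{-}\, e^{\Delta t/\tau_{-}}, & \Delta t < 0 \quad \text{(depression)}
\end{cases}
\qquad \Delta t = t_{\text{post}} - t_{\text{pre}}
\]
```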
SNNs are also hardware friendly, since the presence or absence of a spike can be encoded with a single binary digit. In this research, an SNN processor with an energy-efficient architecture is developed and implemented on the Xilinx Zynq ZC706 FPGA platform.
Hardware-friendly learning rules based on STDP are proposed, and the reservoir and readout layers are trained with these learning algorithms.
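One plausible way to make STDP supervised at the readout is to gate the sign of the weight update with a teacher signal; the sketch below illustrates this general idea under stated assumptions (the function name, parameters, and exact rule are hypothetical, not necessarily the rule proposed here):

```python
import numpy as np

def supervised_stdp_step(W_out, pre_trace, out_spikes, label, lr=0.01, w_max=1.0):
    """One teacher-gated STDP update for the readout layer: synapses into the
    labeled (target) neuron are potentiated in proportion to the recent
    presynaptic activity trace, while synapses into non-target neurons that
    fired are depressed. Illustrative rule only.

    W_out:      (n_classes, n_reservoir) readout weights
    pre_trace:  (n_reservoir,) decaying trace of recent reservoir spikes
    out_spikes: (n_classes,) binary output spikes at this time step
    """
    for j, fired in enumerate(out_spikes):
        if fired:
            sign = 1.0 if j == label else -1.0
            W_out[j] += sign * lr * pre_trace
    np.clip(W_out, 0.0, w_max, out=W_out)   # keep weights in hardware range
    return W_out
```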
To achieve energy efficiency, a sparsification algorithm utilizing the STDP rule is proposed and implemented.
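A minimal sketch of what STDP-driven sparsification could look like, under the assumption that synapses repeatedly depressed toward zero are pruned (the threshold and function below are hypothetical, not the paper's exact algorithm):

```python
import numpy as np

def stdp_sparsify(W, mask, prune_th=0.05):
    """Prune synapses whose weights repeated STDP depression has driven below
    prune_th; `mask` is the boolean connectivity map. Pruned synapses are
    removed permanently, reducing memory footprint and switching activity in
    hardware. Illustrative mechanism only."""
    mask &= np.abs(W) >= prune_th   # drop weak connections from the topology
    W *= mask                       # zero the corresponding weights
    return W, mask
```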
On-chip training and inference are carried out, and it is shown that, with the proposed unsupervised STDP for reservoir training and supervised STDP for readout training, a classification accuracy of 95% is achieved on the TI corpus speech data set. The classification performance, hardware overhead, and power consumption of the processor under the different learning schemes are reported.