Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals
Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show performance above 96%, which
compares favorably with popular statistical feature extraction methods. Our
results provide a novel demonstration of unsupervised feature acquisition in an
SNN.
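The core building block of the architecture above is the LIF neuron. As a minimal sketch (not the paper's implementation; all parameter values here are hypothetical), a single LIF neuron integrates an input current with a leak toward rest and emits a spike on each threshold crossing:

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0, r_m=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    Parameter values are illustrative, not taken from the paper.
    Returns a boolean spike train the same length as input_current.
    """
    v = v_rest
    spikes = np.zeros(len(input_current), dtype=bool)
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and is driven by the input current.
        v += dt / tau * (v_rest - v + r_m * i_in)
        if v >= v_thresh:      # threshold crossing -> spike
            spikes[t] = True
            v = v_reset        # reset after spiking
    return spikes

# A constant suprathreshold current yields a regular spike train.
train = lif_spikes(np.full(1000, 2.0))
print(train.sum())  # number of spikes over 1000 time steps
```

In the paper's convolutional layer, banks of such neurons driven by filtered speech would each produce a spike train; this sketch shows only the single-neuron dynamics.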
A Spiking Neural Network Learning Markov Chain
In this paper, I explore how a spiking neural network (SNN) learns and fixes
in its internal structures a model of external-world dynamics. This
question is important for the implementation of model-based reinforcement
learning (RL), the realistic RL regime in which the decisions made by the SNN
and their evaluation in terms of reward/punishment signals may be separated by
a significant time interval and a sequence of intermediate, evaluation-neutral
world states. In the present work, I formalize world dynamics as a Markov chain
with
unknown a priori state transition probabilities, which should be learnt by the
network. To make this problem formulation more realistic, I solve it in
continuous time, so that the duration of every state in the Markov chain may
vary and is unknown. It is demonstrated how this task can be accomplished
by an SNN with specially designed structure and local synaptic plasticity
rules. As an example, I show how this network motif works in a simple but
non-trivial world in which a ball moves inside a square box and bounces off its
walls with a random new direction and velocity.
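To make the learning target concrete, the quantities the network must acquire are the transition probabilities of the embedded Markov chain plus the (unknown, variable) state durations. A minimal non-spiking sketch of estimating both from an observed continuous-time trajectory (assuming states have already been discretized, e.g. the ball's position binned into grid cells; this is not the paper's SNN mechanism):

```python
from collections import defaultdict

def estimate_markov_chain(trajectory):
    """Estimate embedded transition probabilities and mean state
    durations from a trajectory of (state, duration) pairs.

    Illustrates the learning target only; the paper learns these
    quantities with local synaptic plasticity rules, not counting.
    """
    counts = defaultdict(lambda: defaultdict(int))
    dwell = defaultdict(list)
    for (s, d), (s_next, _) in zip(trajectory, trajectory[1:]):
        counts[s][s_next] += 1   # embedded-chain transition count
        dwell[s].append(d)       # observed duration of state s
    dwell[trajectory[-1][0]].append(trajectory[-1][1])
    probs = {s: {t: n / sum(nxt.values()) for t, n in nxt.items()}
             for s, nxt in counts.items()}
    mean_duration = {s: sum(ds) / len(ds) for s, ds in dwell.items()}
    return probs, mean_duration

# Hypothetical trajectory: states A, B, C with varying durations.
traj = [("A", 0.5), ("B", 1.0), ("A", 0.4), ("C", 2.0), ("A", 0.6)]
p, mu = estimate_markov_chain(traj)
print(p["A"])   # {'B': 0.5, 'C': 0.5}
print(mu["A"])  # 0.5
```

The continuous-time formulation matters because, unlike a discrete-time chain, the dwell times carry information of their own and must be estimated alongside the transition probabilities.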