Theory of spike timing based neural classifiers
We study the computational capacity of a model neuron, the Tempotron, which
classifies sequences of spikes by linear-threshold operations. We use
statistical mechanics and extreme value theory to derive the capacity of the
system in random classification tasks. In contrast to its static analog, the
Perceptron, the Tempotron's solution space consists of a large number of small
clusters of weight vectors. The capacity of the system per synapse is finite in
the large size limit and weakly diverges with the stimulus duration relative to
the membrane and synaptic time constants.

Comment: 4 pages, 4 figures. Accepted to Physical Review Letters on 19th Oct. 201
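The decision rule the first abstract refers to — a neuron that classifies spike sequences by a linear-threshold operation on its peak membrane potential — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the kernel time constants, threshold, and function names are assumptions.

```python
import numpy as np

def psp_kernel(t, tau_m=15.0, tau_s=3.75):
    """Causal double-exponential postsynaptic potential kernel (t in ms),
    normalized so its peak value is 1."""
    t = np.asarray(t, dtype=float)
    eta = tau_m / tau_s
    v0 = eta ** (eta / (eta - 1.0)) / (eta - 1.0)  # peak-normalization factor
    s = np.clip(t, 0.0, None)  # kernel is zero for t <= 0
    return np.where(t > 0, v0 * (np.exp(-s / tau_m) - np.exp(-s / tau_s)), 0.0)

def tempotron_classify(spike_times, weights, threshold=1.0, t_end=500.0, dt=1.0):
    """Binary decision: does the peak membrane potential cross threshold?

    spike_times: one array of input spike times (ms) per afferent.
    """
    grid = np.arange(0.0, t_end, dt)
    v = np.zeros_like(grid)
    for w, times in zip(weights, spike_times):
        for ti in times:
            v += w * psp_kernel(grid - ti)  # weighted PSP from each input spike
    return bool(v.max() >= threshold)
```

Because the decision depends only on the maximum of the voltage trace, the classifier is sensitive to spike timing: shifting input spikes changes how their PSPs superpose and hence whether the peak crosses threshold.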
Pulse shape discrimination based on the Tempotron: a powerful classifier on GPU
This study introduces the Tempotron, a powerful classifier based on a
third-generation neural network model, for pulse shape discrimination. By
eliminating the need for manual feature extraction, the Tempotron model can
process pulse signals directly, generating discrimination results based on
learned prior knowledge. The study performed experiments using GPU
acceleration, resulting in over a 500 times speedup compared to the CPU-based
model, and investigated the impact of noise augmentation on the Tempotron's
performance. Experimental results showed that the Tempotron is a potent
classifier capable of achieving high discrimination accuracy. Furthermore,
analyzing the neural activity of Tempotron during training shed light on its
learning characteristics and aided in selecting the Tempotron's
hyperparameters. The dataset used in this study and the source code of the
GPU-based Tempotron are publicly available on GitHub at
https://github.com/HaoranLiu507/TempotronGPU.

Comment: 14 pages, 7 figures
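The learned prior knowledge mentioned above is acquired by the tempotron's error-driven weight update: on a misclassified pattern, each synapse is adjusted in proportion to its PSP contribution at the time of peak potential. The sketch below illustrates one such step; the learning rate, time constants, and discretization are illustrative assumptions, not values from the study.

```python
import numpy as np

def tempotron_update(weights, spike_times, label, lr=0.01, threshold=1.0,
                     t_end=500.0, dt=1.0, tau_m=15.0, tau_s=3.75):
    """One tempotron learning step on a single pattern (sketch).

    label: 1 if the neuron should fire on this pattern, 0 if it should stay silent.
    Returns updated weights; unchanged when the pattern is already classified correctly.
    """
    eta = tau_m / tau_s
    v0 = eta ** (eta / (eta - 1.0)) / (eta - 1.0)

    def kernel(s):
        s = np.asarray(s, dtype=float)
        sc = np.clip(s, 0.0, None)
        return np.where(s > 0, v0 * (np.exp(-sc / tau_m) - np.exp(-sc / tau_s)), 0.0)

    # Forward pass: membrane potential on a time grid.
    grid = np.arange(0.0, t_end, dt)
    v = np.zeros_like(grid)
    for w, times in zip(weights, spike_times):
        for ti in times:
            v += w * kernel(grid - ti)
    t_peak = grid[np.argmax(v)]
    fired = v.max() >= threshold
    if fired == bool(label):
        return weights  # already correct: no update
    # Error: nudge the peak potential toward (label=1) or away from (label=0) threshold,
    # weighting each afferent by its PSP contribution at the peak time.
    sign = 1.0 if label else -1.0
    grad = np.array([kernel(t_peak - np.asarray(times)).sum() for times in spike_times])
    return weights + sign * lr * grad
```

Since every pattern requires recomputing the full voltage trace, the forward pass dominates training cost — which is why batching it on a GPU, as the study does, yields such large speedups.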
The chronotron: a neuron that learns to fire temporally-precise spike patterns
In many cases, neurons process information carried by the precise timing of spikes. Here we show how neurons can learn to generate specific temporally-precise output spikes in response to input spike patterns, thus processing and memorizing information that is fully temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that is analytically derived and highly efficient, and one that has a high degree of biological plausibility. We show how chronotrons can learn to classify their inputs and we study their memory capacity.
Six networks on a universal neuromorphic computing substrate
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
Revisiting chaos in stimulus-driven spiking networks: signal encoding and discrimination
Highly connected recurrent neural networks often produce chaotic dynamics,
meaning their precise activity is sensitive to small perturbations. What are
the consequences for how such networks encode streams of temporal stimuli? On
the one hand, chaos is a strong source of randomness, suggesting that small
changes in stimuli will be obscured by intrinsically generated variability. On
the other hand, recent work shows that the type of chaos that occurs in spiking
networks can have a surprisingly low-dimensional structure, suggesting that
there may be "room" for fine stimulus features to be precisely resolved. Here
we show that strongly chaotic networks produce patterned spikes that reliably
encode time-dependent stimuli: using a decoder sensitive to spike times on
timescales of tens of milliseconds, one can easily distinguish responses to very similar
inputs. Moreover, recurrence serves to distribute signals throughout chaotic
networks so that small groups of cells can encode substantial information about
signals arriving elsewhere. A conclusion is that the presence of strong chaos
in recurrent networks does not prohibit precise stimulus encoding.

Comment: 8 figures
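The abstract does not specify the decoder, but one standard spike-timing-sensitive choice is a van-Rossum-style nearest-template classifier: filter each spike train with an exponential on the stated timescale, then assign a response to the closest stored template. The sketch below is an illustration under assumed parameters (filter time constant of 10 ms, squared-distance metric), not the paper's method.

```python
import numpy as np

def smooth(spikes, t_end=200.0, dt=1.0, tau=10.0):
    """Convolve a spike train with a causal exponential filter
    (tau on the order of tens of milliseconds)."""
    grid = np.arange(0.0, t_end, dt)
    trace = np.zeros_like(grid)
    for t in spikes:
        trace += np.where(grid >= t, np.exp(-(grid - t) / tau), 0.0)
    return trace

def nearest_template(response, templates, **kw):
    """Classify a spike train by its smallest van-Rossum-style
    squared distance to a set of template trains."""
    r = smooth(response, **kw)
    dists = [np.sum((r - smooth(t, **kw)) ** 2) for t in templates]
    return int(np.argmin(dists))
```

A decoder of this kind distinguishes responses whose spike counts are identical but whose spike times differ by more than the filter timescale, which is exactly the regime the study probes.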
Computing Complex Visual Features with Retinal Spike Times
Neurons in sensory systems can represent information not only by their firing rate, but also by the precise timing of individual spikes. For example, certain retinal ganglion cells, first identified in the salamander, encode the spatial structure of a new image by their first-spike latencies. Here we explore how this temporal code can be used by downstream neural circuits for computing complex features of the image that are not available from the signals of individual ganglion cells. To this end, we feed the experimentally observed spike trains from a population of retinal ganglion cells to an integrate-and-fire model of post-synaptic integration. The synaptic weights of this integration are tuned according to the recently introduced tempotron learning rule. We find that this model neuron can perform complex visual detection tasks in a single synaptic stage that would require multiple stages for neurons operating instead on neural spike counts. Furthermore, the model computes rapidly, using only a single spike per afferent, and can signal its decision in turn by just a single spike. Extending these analyses to large ensembles of simulated retinal signals, we show that the model can detect the orientation of a visual pattern independent of its phase, an operation thought to be one of the primitives in early visual processing. We analyze how these computations work and compare the performance of this model to other schemes for reading out spike-timing information. These results demonstrate that the retina formats spatial information into temporal spike sequences in a way that favors computation in the time domain. Moreover, complex image analysis can be achieved already by a simple integrate-and-fire model neuron, emphasizing the power and plausibility of rapid neural computing with spike times.
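The readout described above — an integrate-and-fire neuron driven by one spike per afferent, signaling its decision with its own first spike — can be sketched as follows. The time constant, threshold, and discretization are illustrative assumptions, not values from the study.

```python
import numpy as np

def lif_first_spike(input_times, weights, tau=20.0, threshold=1.0,
                    t_end=100.0, dt=0.1):
    """Leaky integrate-and-fire readout of a first-spike latency code.

    Each afferent delivers exactly one spike (its first-spike latency, in ms).
    Returns the readout neuron's own first spike time, or None if it stays silent.
    """
    v = 0.0
    for t in np.arange(0.0, t_end, dt):
        v *= np.exp(-dt / tau)  # membrane leak over one time step
        # Add weighted delta-pulse inputs arriving in this time step.
        for w, ti in zip(weights, input_times):
            if t <= ti < t + dt:
                v += w
        if v >= threshold:
            return t  # decision signaled by a single output spike
    return None
```

Because the voltage decays between inputs, whether the threshold is reached depends on the relative latencies of the afferents, not just on which of them fire — which is how a single synaptic stage can compute features that a spike-count readout cannot.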