SIMPEL: Circuit model for photonic spike processing laser neurons
We propose an equivalent circuit model for photonic spike processing laser
neurons with an embedded saturable absorber---a simulation model for photonic
excitable lasers (SIMPEL). We show that by mapping the laser neuron rate
equations into a circuit model, SPICE analysis can be used as an efficient and
accurate engine for numerical calculations, capable of generalization to a
variety of laser neuron types found in the literature. The development of
this model parallels the Hodgkin--Huxley model of neuron biophysics, a circuit
framework which brought efficiency, modularity, and generalizability to the
study of neural dynamics. We employ the model to study various
signal-processing effects such as excitability with excitatory and inhibitory
pulses, binary all-or-nothing response, and bistable dynamics.
Comment: 16 pages, 7 figures
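The laser neuron dynamics behind such models are commonly written as Yamada-type rate equations for gain, saturable absorption, and intensity. As a rough illustration of the excitable behavior the abstract describes (not the authors' SPICE circuit), here is a minimal numerical sketch; the parameter values, the input pulse, and the use of SciPy's ODE solver in place of a circuit simulator are illustrative assumptions.

    # Minimal sketch: Yamada rate equations for a laser with an embedded
    # saturable absorber, integrated directly rather than via SPICE.
    # All parameter values below are illustrative assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    def yamada(t, y, A, B, a, gamma, eps):
        G, Q, I = y                              # gain, absorption, intensity
        pump = A + (2.0 if 500.0 < t < 510.0 else 0.0)  # brief excitatory input
        dG = gamma * (pump - G - G * I)          # gain pumping and depletion
        dQ = gamma * (B - Q - a * Q * I)         # saturable absorber recovery
        dI = (G - Q - 1.0) * I + eps * G**2      # intensity + spontaneous emission
        return [dG, dQ, dI]

    sol = solve_ivp(yamada, (0.0, 2000.0), [0.0, 0.0, 1e-6],
                    args=(4.3, 3.52, 1.8, 0.05, 1e-6), max_step=0.5)
    # The input pulse at t = 500 elicits an all-or-nothing spike in sol.y[2].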
Feature detection using spikes: the greedy approach
A goal of low-level neural processes is to build an efficient code extracting
the relevant information from the sensory input. It is believed that this is
implemented in cortical areas by elementary inferential computations
dynamically extracting the most likely parameters corresponding to the sensory
signal. We explore here a neuro-mimetic feed-forward model of the primary
visual area (V1) solving this problem in the case where the signal may be
described by a robust linear generative model. This model uses an over-complete
dictionary of primitives which provides a distributed probabilistic
representation of input features. Relying on an efficiency criterion, we derive
an algorithm as an approximate solution which uses incremental greedy inference
processes. This algorithm is similar to 'Matching Pursuit' and mimics the
parallel architecture of neural computations. We propose here a simple
implementation using a network of spiking integrate-and-fire neurons which
communicate using lateral interactions. Numerical simulations show that this
Sparse Spike Coding strategy provides an efficient model for representing
visual data from a set of natural images. Even though it is simplistic, this
transformation of spatial data into a spatio-temporal pattern of binary events
provides an accurate description of some complex neural patterns observed in
the spiking activity of biological neural networks.
Comment: This work links Matching Pursuit with Bayesian inference by providing the underlying hypotheses (linear model, uniform prior, Gaussian noise model). A parallel with the parallel, event-based nature of neural computations is explored, and we show an application to modelling the primary visual cortex / image processing.
http://incm.cnrs-mrs.fr/perrinet/dynn/LaurentPerrinet/Publications/Perrinet04tau
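As a rough sketch of the greedy inference loop the abstract compares to Matching Pursuit (not the authors' spiking implementation), the following toy example repeatedly lets the dictionary element most correlated with the residual "fire" and subtracts its contribution; the dictionary and signal are illustrative assumptions.

    # Toy Matching Pursuit over an over-complete dictionary.
    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))           # over-complete dictionary
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    x = D[:, [3, 90]] @ np.array([1.5, -0.8])    # signal built from two atoms

    residual, coeffs = x.copy(), np.zeros(256)
    for _ in range(10):                          # greedy steps ~ successive spikes
        c = D.T @ residual                       # correlations (membrane potentials)
        k = np.argmax(np.abs(c))                 # winning neuron fires
        coeffs[k] += c[k]
        residual -= c[k] * D[:, k]               # lateral "explaining away"

    print(np.flatnonzero(np.round(coeffs, 3)))   # recovers atoms 3 and 90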
Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses
Spiking neural networks (SNNs) are artificial computational models inspired
by the brain's ability to naturally encode and process information in the
time domain. The added temporal dimension is believed to render them more
computationally efficient than conventional artificial neural networks,
though their full computational capabilities are yet to be
explored. Recently, computational memory architectures based on non-volatile
memory crossbar arrays have shown great promise to implement parallel
computations in artificial and spiking neural networks. In this work, we
experimentally demonstrate, for the first time, the feasibility of realizing
high-performance event-driven in-situ supervised learning systems using
nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize
audio signals of spoken letters of the alphabet, encoded using spikes in the
time domain, and to generate spike trains at precise time instants to represent the pixel
intensities of their corresponding images. Moreover, with a statistical model
capturing the experimental behavior of the devices, we investigate
architectural and systems-level solutions for improving the training and
inference performance of our computational memory-based system. Combining the
computational potential of supervised SNNs with the parallel compute power of
computational memory, this work paves the way for the next generation of
efficient brain-inspired systems.
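The device behavior that makes such training non-trivial is the stochastic, saturating conductance response of phase-change synapses. A minimal sketch of that kind of synapse model follows; the statistics are generic assumptions for illustration, not the paper's measured device model.

    # Toy stochastic phase-change synapse: each SET pulse adds a noisy,
    # saturating conductance increment; RESET abruptly amorphizes the cell.
    import numpy as np

    rng = np.random.default_rng(1)

    class PCMSynapse:
        def __init__(self, g_max=25.0, mean_dg=1.0, sigma_dg=0.3):
            self.g, self.g_max = 0.0, g_max
            self.mean_dg, self.sigma_dg = mean_dg, sigma_dg

        def potentiate(self):
            # crystallizing SET pulse: stochastic, saturating increase
            dg = max(0.0, rng.normal(self.mean_dg, self.sigma_dg))
            self.g = min(self.g_max, self.g + dg * (1 - self.g / self.g_max))

        def depress(self):
            self.g = 0.0               # abrupt amorphizing RESET

    syn = PCMSynapse()
    for _ in range(30):
        syn.potentiate()
    print(f"conductance after 30 SET pulses: {syn.g:.1f} (a.u.)")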
The impact of spike timing variability on the signal-encoding performance of neural spiking models
It remains unclear whether the variability of neuronal spike trains in vivo arises due to biological noise sources or represents highly precise encoding of temporally varying synaptic input signals. Determining the variability of spike timing can provide fundamental insights into the nature of strategies used in the brain to represent and transmit information in the form of discrete spike trains. In this study, we employ a signal estimation paradigm to determine how variability in spike timing affects encoding of random time-varying signals. We assess this for two types of spiking models: an integrate-and-fire model with random threshold and a more biophysically realistic stochastic ion channel model. Using the coding fraction and mutual information as information-theoretic measures, we quantify the efficacy of optimal linear decoding of random inputs from the model outputs and study the relationship between efficacy and variability in the output spike train. Our findings suggest that variability does not necessarily hinder signal decoding for the biophysically plausible encoders examined and that the functional role of spiking variability depends intimately on the nature of the encoder and the signal processing task; variability can either enhance or impede decoding performance.
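A minimal sketch of this signal-estimation paradigm: an integrate-and-fire encoder with a random threshold spikes in response to a slowly varying input, a fixed kernel filters the spike train, and an optimally scaled linear readout yields the coding fraction. All parameters are illustrative assumptions, not those of the study.

    # Integrate-and-fire with random threshold + optimal linear decoding.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, n = 1e-3, 20000
    s = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")
    s = (s - s.mean()) / s.std()                 # random time-varying signal

    v, spikes = 0.0, np.zeros(n)
    for i in range(n):
        v += dt / 0.02 * (-v + 5.0 * (1.0 + 0.3 * s[i]))  # leaky integration
        if v > rng.normal(1.0, 0.1):             # random (noisy) threshold
            spikes[i], v = 1.0, 0.0

    kernel = np.exp(-np.arange(200) * dt / 0.05) # fixed decoding kernel
    r = np.convolve(spikes, kernel, mode="full")[:n]
    a = np.cov(s, r)[0, 1] / r.var()             # optimal linear gain
    est = a * (r - r.mean()) + s.mean()
    eps = np.sqrt(np.mean((s - est) ** 2)) / s.std()
    print(f"coding fraction: {1 - eps:.2f}")     # 1 - rms error / signal std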
Fractionally Predictive Spiking Neurons
Recent experimental work has suggested that the neural firing rate can be
interpreted as a fractional derivative, at least when signal variation induces
neural adaptation. Here, we show that the actual neural spike-train itself can
be considered as the fractional derivative, provided that the neural signal is
approximated by a sum of power-law kernels. A simple standard thresholding
spiking neuron suffices to carry out such an approximation, given a suitable
refractory response. Empirically, we find that the online approximation of
signals with a sum of power-law kernels is beneficial for encoding signals with
slowly varying components, like long-memory self-similar signals. For such
signals, the online power-law kernel approximation typically required fewer
than half as many spikes to reach a similar SNR, compared to sums of similar
but exponentially decaying kernels. As power-law kernels can be accurately
approximated using sums or cascades of weighted exponentials, we demonstrate
that the corresponding decoding of spike-trains by a receiving neuron allows
for natural and transparent temporal signal filtering by tuning the weights of
the decoding kernel.
Comment: 13 pages, 5 figures, in Advances in Neural Information Processing 201
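The core trick, approximating a power-law kernel by a weighted sum of exponentials, is easy to check numerically. A minimal sketch follows; the number of components, time constants, and fitting method are illustrative assumptions.

    # Fit a sum of exponentials to a power-law kernel t^(-beta).
    import numpy as np

    beta = 0.5
    t = np.linspace(0.01, 10.0, 1000)
    taus = np.logspace(-2, 1, 8)                # log-spaced time constants
    E = np.exp(-t[:, None] / taus[None, :])     # (time, component) design matrix
    w, *_ = np.linalg.lstsq(E, t ** -beta, rcond=None)
    approx = E @ w                              # exponential-mixture approximation

    rel_err = np.max(np.abs(approx - t ** -beta) / t ** -beta)
    print(f"max relative error of the 8-exponential fit: {rel_err:.3f}")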
Dynamic Adaptive Computation: Tuning network states to task requirements
Neural circuits are able to perform computations under very diverse
conditions and requirements. The required computations impose clear constraints
on their fine-tuning: a rapid and maximally informative response to stimuli in
general requires decorrelated baseline neural activity. Such network dynamics
is known as asynchronous-irregular. In contrast, spatio-temporal integration of
information requires maintenance and transfer of stimulus information over
extended time periods. This can be realized at criticality, a phase transition
where correlations, sensitivity and integration time diverge. Being able to
flexibly switch between, or even combine, these properties in a
task-dependent manner would present a clear functional advantage. We propose that cortex
operates in a "reverberating regime" because it is particularly favorable for
ready adaptation of computational properties to context and task. This
reverberating regime enables cortical networks to interpolate between the
asynchronous-irregular and the critical state by small changes in effective
synaptic strength or excitation-inhibition ratio. These changes directly adapt
computational properties, including sensitivity, amplification, integration
time and correlation length within the local network. We review recent
converging evidence that cortex in vivo operates in the reverberating regime,
and that various cortical areas have adapted their integration times to
processing requirements. In addition, we propose that neuromodulation enables a
fine-tuning of the network, so that local circuits can either decorrelate or
integrate, and quench or maintain their input depending on the task. We argue that
this task-dependent tuning, which we call "dynamic adaptive computation",
presents a central organization principle of cortical networks and discuss
first experimental evidence.
Comment: 6 pages + references, 2 figures
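The reverberating regime can be illustrated with a toy branching process: each event triggers on average m successor events per time step, plus external drive, and the intrinsic timescale tau = -dt / ln(m) diverges as m approaches the critical value 1. A minimal sketch (the time step, drive, and values of m are illustrative assumptions, not cortical measurements):

    # Toy driven branching process illustrating diverging integration time.
    import numpy as np

    rng = np.random.default_rng(3)
    dt, h = 4e-3, 2.0                  # time step (s), external drive per step

    for m in (0.9, 0.98, 0.999):
        a, total = h, 0.0
        for _ in range(100_000):
            a = rng.poisson(m * a + h) # each event triggers ~m successors
            total += a
        tau_ms = -dt / np.log(m) * 1e3
        print(f"m={m}: predicted tau = {tau_ms:7.1f} ms, "
              f"mean activity = {total / 100_000:.1f}")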