SIMPEL: Circuit model for photonic spike processing laser neurons
We propose an equivalent circuit model for photonic spike processing laser
neurons with an embedded saturable absorber: a simulation model for photonic
excitable lasers (SIMPEL). We show that by mapping the laser neuron rate
equations into a circuit model, SPICE analysis can be used as an efficient and
accurate engine for numerical calculations, capable of generalization to a
variety of different laser neuron types found in the literature. The development of
this model parallels the Hodgkin-Huxley model of neuron biophysics, a circuit
framework which brought efficiency, modularity, and generalizability to the
study of neural dynamics. We employ the model to study various
signal-processing effects such as excitability with excitatory and inhibitory
pulses, binary all-or-nothing response, and bistable dynamics. Comment: 16 pages, 7 figures.
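As a rough illustration of the dynamics such a circuit model must reproduce, the sketch below integrates Yamada-type rate equations for a two-section laser with a saturable absorber and perturbs the pump to trigger an excitable pulse. The parameter values and the pump perturbation are illustrative assumptions, not taken from the paper, and SPICE itself is not involved here.

```python
# Minimal sketch of Yamada-type rate equations for a laser with a saturable
# absorber -- the class of dynamics that SIMPEL maps onto a SPICE circuit.
# All parameter values are illustrative assumptions, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

gamma_G, gamma_Q, gamma_I = 0.05, 0.04, 1.0   # assumed gain, absorber, field rates
A, B, a, eps = 4.3, 3.52, 1.8, 1e-6           # assumed pump, absorption, noise floor

def pump_pulse(t):
    # Assumed suprathreshold excitatory perturbation of the gain pump.
    return 4.0 if 200.0 <= t < 220.0 else 0.0

def yamada(t, y):
    G, Q, I = y                                # gain, absorption, intensity
    dG = gamma_G * (A + pump_pulse(t) - G - G * I)
    dQ = gamma_Q * (B - Q - a * Q * I)
    dI = gamma_I * (G - Q - 1.0) * I + eps
    return [dG, dQ, dI]

# Start from the quiescent (below-threshold) state and integrate past the input.
sol = solve_ivp(yamada, (0.0, 600.0), [A, B, eps], max_step=0.5)
print("peak intensity after the perturbation:", sol.y[2].max())
```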
The iso-response method
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments.
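A minimal sketch of the closed-loop idea, assuming a stand-in response function in place of a real recording: bisect along a direction in stimulus space until the measured output matches a predefined target rate. The function and parameter names below are hypothetical, not from any specific study.

```python
# Minimal sketch of a closed-loop iso-response search: bisect along a ray in
# stimulus space until the measured response matches a predefined target.
# `measure_response` is a purely hypothetical stand-in for an experiment.
import numpy as np

def measure_response(stimulus):
    # Assumed model neuron: quadratic integration followed by a threshold.
    drive = np.sum(np.maximum(stimulus, 0.0) ** 2)
    return max(0.0, 50.0 * (drive - 1.0))       # "firing rate" in Hz

def iso_response_point(direction, target_rate, lo=0.0, hi=10.0, tol=0.1):
    """Find an amplitude c with measure_response(c * direction) ~= target_rate."""
    direction = np.asarray(direction, dtype=float)
    for _ in range(50):                          # bisection on the amplitude
        mid = 0.5 * (lo + hi)
        rate = measure_response(mid * direction)
        if abs(rate - target_rate) < tol:
            break
        lo, hi = (mid, hi) if rate < target_rate else (lo, mid)
    return mid * direction

# Probe two directions; both returned stimuli evoke (approximately) the same rate,
# so comparing them reveals the shape of the integration nonlinearity.
s1 = iso_response_point([1.0, 0.0], target_rate=20.0)
s2 = iso_response_point([0.6, 0.8], target_rate=20.0)
print(s1, s2)
```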
Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses
Spiking neural networks (SNN) are artificial computational models that have
been inspired by the brain's ability to naturally encode and process
information in the time domain. The added temporal dimension is believed to
render them more computationally efficient than conventional artificial
neural networks, though their full computational capabilities are yet to be
explored. Recently, computational memory architectures based on non-volatile
memory crossbar arrays have shown great promise to implement parallel
computations in artificial and spiking neural networks. In this work, we
experimentally demonstrate, for the first time, the feasibility of realizing
high-performance event-driven in-situ supervised learning systems using
nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize
audio signals of alphabets encoded using spikes in the time domain and to
generate spike trains at precise time instances to represent the pixel
intensities of their corresponding images. Moreover, with a statistical model
capturing the experimental behavior of the devices, we investigate
architectural and systems-level solutions for improving the training and
inference performance of our computational memory-based system. Combining the
computational potential of supervised SNNs with the parallel compute power of
computational memory, the work paves the way for the next generation of efficient
brain-inspired systems.
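As a rough sketch of the kind of training loop involved, the code below drives a single leaky integrate-and-fire neuron toward desired spike times with a Widrow-Hoff-style spike-train rule and applies the weight changes through a crude stochastic, quantized device model. The neuron, the rule, and the device parameters are simplifying assumptions, not the paper's experimental setup.

```python
# Minimal sketch of spike-time supervised learning with stochastic, quantized
# weight updates, loosely in the spirit of phase-change-memory synapses.
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 200, 30                      # time steps and input synapses (assumed)
tau_mem, v_th, lr = 10.0, 1.0, 0.02    # assumed neuron and learning parameters

pre = (rng.random((T, n_in)) < 0.05).astype(float)   # Poisson-like input spikes
target = np.zeros(T); target[[50, 120, 180]] = 1.0   # desired output spike times
w = rng.normal(0.05, 0.01, n_in)

def run_lif(w):
    v, out, trace, tr = 0.0, np.zeros(T), np.zeros((T, n_in)), np.zeros(n_in)
    for t in range(T):
        tr = tr * np.exp(-1.0 / tau_mem) + pre[t]     # presynaptic trace
        v = v * np.exp(-1.0 / tau_mem) + pre[t] @ w   # leaky membrane potential
        if v > v_th:
            out[t], v = 1.0, 0.0                      # spike and reset
        trace[t] = tr
    return out, trace

for epoch in range(200):
    out, trace = run_lif(w)
    # Widrow-Hoff-style rule: push weights by (desired - actual) * presyn trace.
    dw = lr * ((target - out)[:, None] * trace).sum(axis=0)
    # Crude device model: discrete conductance steps, each programming pulse
    # succeeding only with some probability, bounded conductance range.
    step = 0.005
    applied = np.round(dw / step) * step * (rng.random(n_in) < 0.8)
    w = np.clip(w + applied, 0.0, 0.3)

out, _ = run_lif(w)
print("target spike times:", np.flatnonzero(target), "output:", np.flatnonzero(out))
```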
Demonstrating Advantages of Neuromorphic Computation: A Pilot Study
Neuromorphic devices represent an attempt to mimic aspects of the brain's
architecture and dynamics with the aim of replicating its hallmark functional
capabilities in terms of computational power, robust learning and energy
efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic
system to implement a proof-of-concept demonstration of reward-modulated
spike-timing-dependent plasticity in a spiking network that learns to play the
Pong video game by smooth pursuit. This system combines an electronic
mixed-signal substrate for emulating neuron and synapse dynamics with an
embedded digital processor for on-chip learning, which in this work also serves
to simulate the virtual environment and learning agent. The analog emulation of
neuronal membrane dynamics enables a 1000-fold acceleration with respect to
biological real-time, with the entire chip operating on a power budget of 57 mW.
Compared to an equivalent simulation using state-of-the-art software, the
on-chip emulation is at least one order of magnitude faster and three orders of
magnitude more energy-efficient. We demonstrate how on-chip learning can
mitigate the effects of fixed-pattern noise, which is unavoidable in analog
substrates, while making use of temporal variability for action exploration.
Learning compensates for imperfections of the physical substrate, as manifested in
neuronal parameter variability, by adapting synaptic weights to match the
respective excitability of individual neurons. Comment: Added measurements with noise in the NEST simulation; added notice about
journal publication: Frontiers in Neuromorphic Engineering (2019).
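A minimal sketch of reward-modulated STDP for a single synapse, assuming a simple pair-based rule whose potentiation and depression accumulate in an eligibility trace that is committed to the weight only when scaled by a trial-end reward. The parameters and the random placeholder reward are assumptions, not the BrainScaleS 2 implementation.

```python
# Minimal sketch of reward-modulated STDP: pair-based STDP is accumulated in an
# eligibility trace and committed to the weight only when gated by a reward.
import numpy as np

rng = np.random.default_rng(1)
T, tau_pre, tau_post, tau_e = 100, 20.0, 20.0, 200.0   # assumed time constants
a_plus, a_minus, lr = 0.01, 0.012, 0.5                 # assumed learning parameters

def rstdp_trial(w, pre_spikes, post_spikes, reward):
    """One trial: return the updated weight of a single synapse."""
    x_pre = x_post = elig = 0.0
    for t in range(T):
        x_pre  = x_pre  * np.exp(-1.0 / tau_pre)  + pre_spikes[t]
        x_post = x_post * np.exp(-1.0 / tau_post) + post_spikes[t]
        elig *= np.exp(-1.0 / tau_e)
        if post_spikes[t]:                     # pre-before-post -> potentiation
            elig += a_plus * x_pre
        if pre_spikes[t]:                      # post-before-pre -> depression
            elig -= a_minus * x_post
    return w + lr * reward * elig              # reward gates the plastic change

w = 0.1
for trial in range(500):
    pre  = rng.random(T) < 0.05
    post = rng.random(T) < 0.05
    reward = rng.choice([-1.0, 1.0])           # placeholder for the game reward
    w = rstdp_trial(w, pre, post, reward)
print("final weight:", w)
```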
Feature detection using spikes: the greedy approach
A goal of low-level neural processes is to build an efficient code extracting
the relevant information from the sensory input. It is believed that this is
implemented in cortical areas by elementary inferential computations
dynamically extracting the most likely parameters corresponding to the sensory
signal. We explore here a neuro-mimetic feed-forward model of the primary
visual area (V1) solving this problem in the case where the signal may be
described by a robust linear generative model. This model uses an over-complete
dictionary of primitives which provides a distributed probabilistic
representation of input features. Relying on an efficiency criterion, we derive
an algorithm as an approximate solution which uses incremental greedy inference
processes. This algorithm is similar to 'Matching Pursuit' and mimics the
parallel architecture of neural computations. We propose here a simple
implementation using a network of spiking integrate-and-fire neurons which
communicate using lateral interactions. Numerical simulations show that this
Sparse Spike Coding strategy provides an efficient model for representing
visual data from a set of natural images. Even though it is simplistic, this
transformation of spatial data into a spatio-temporal pattern of binary events
provides an accurate description of some complex neural patterns observed in
the spiking activity of biological neural networks. Comment: This work links Matching Pursuit with Bayesian inference by providing
the underlying hypotheses (linear model, uniform prior, Gaussian noise
model). A parallel is drawn with the parallel, event-based nature of neural
computations, and we show an application to modelling the primary visual
cortex / image processing.
http://incm.cnrs-mrs.fr/perrinet/dynn/LaurentPerrinet/Publications/Perrinet04tau
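For reference, a minimal sketch of the greedy Matching Pursuit step that the spiking network approximates; the random dictionary and signal are stand-ins, and the event list only loosely mirrors how selected atoms would map onto spike times.

```python
# Minimal sketch of Matching Pursuit over an overcomplete dictionary -- the
# greedy inference that Sparse Spike Coding approximates with spiking neurons.
import numpy as np

rng = np.random.default_rng(2)
n, k = 64, 256                                  # signal dimension, dictionary size
D = rng.normal(size=(n, k))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
signal = D[:, [3, 97, 200]] @ np.array([1.5, -0.8, 0.6])   # sparse synthetic input

def matching_pursuit(x, D, n_iter=10):
    residual, coeffs = x.copy(), np.zeros(D.shape[1])
    events = []                                 # (iteration, atom index) ~ spike events
    for it in range(n_iter):
        corr = D.T @ residual                   # project residual onto every atom
        j = np.argmax(np.abs(corr))             # greedy choice: best-matching atom
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]           # explain away the chosen component
        events.append((it, j))
    return coeffs, residual, events

coeffs, residual, events = matching_pursuit(signal, D)
print("selected atoms:", [j for _, j in events[:5]],
      "residual energy:", float(residual @ residual))
```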