Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition
A neuromorphic chip that combines CMOS analog spiking neurons and memristive
synapses offers a promising solution to brain-inspired computing, as it can
provide massive neural network parallelism and density. Previous hybrid analog
CMOS-memristor approaches required extensive CMOS circuitry for training, and
thus eliminated most of the density advantages gained by the adoption of
memristor synapses. Further, they used different waveforms for pre- and
post-synaptic spikes, which added undesirable circuit overhead. Here we describe
a hardware architecture that can feature a large number of memristor synapses
to learn real-world patterns. We present a versatile CMOS neuron that combines
integrate-and-fire behavior, drives passive memristors and implements
competitive learning in a compact circuit module, and enables in-situ
plasticity in the memristor synapses. We demonstrate handwritten-digit
recognition with the proposed architecture through transistor-level circuit
simulations. As the described neuromorphic architecture is homogeneous, it
realizes a fundamental building block for large-scale energy-efficient
brain-inspired silicon chips that could lead to next-generation cognitive
computing.
Comment: This is a preprint of an article accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 5, no. 2, June 201
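The integrate-and-fire behavior the abstract describes can be illustrated with a minimal leaky integrate-and-fire sketch. This is a generic textbook model, not the paper's CMOS circuit; all parameter values are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values are illustrative, not taken from the chip design.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate an input-current sequence; return spike step indices."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (v_rest - v)/tau + i_in
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:          # threshold crossing -> emit spike
            spikes.append(step)
            v = v_reset            # reset membrane potential
    return spikes

spike_times = simulate_lif([80.0] * 100)   # strong constant drive
```

With this constant drive the neuron fires regularly, one spike every 20 steps.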
Beta-rhythm oscillations and synchronization transition in network models of Izhikevich neurons: effect of topology and synaptic type
Despite their significant functional roles, beta-band oscillations are among
the least understood. Synchronization in neuronal networks has attracted much
attention in recent years, with the main focus on the type of transition.
Whether one obtains an explosive or a continuous transition is an important
feature of the
neuronal network which can depend on network structure as well as synaptic
types. In this study we consider the effect of synaptic interaction (electrical
and chemical) as well as structural connectivity on synchronization transition
in network models of Izhikevich neurons which spike regularly with beta
rhythms. We find a wide range of behavior including continuous transition,
explosive transition, as well as a lack of global order. Stronger electrical
synapses are more conducive to synchronization and can even lead to explosive
synchronization. The key network element which determines the order of
transition is found to be the clustering coefficient and not the small world
effect, or the existence of hubs in a network. These results are in contrast to
previous results which use phase oscillator models such as the Kuramoto model.
Furthermore, we show that the patterns of synchronization change when one goes
to the gamma band. We attribute this change to the change in the refractory
period of Izhikevich neurons, which varies significantly with frequency.
Comment: 7 figures, 1 table
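The neuron model used in this study can be sketched with the standard Izhikevich equations and the canonical regular-spiking parameters (a=0.02, b=0.2, c=-65, d=8). The input current and integration step here are illustrative assumptions, not values from the study.

```python
# Standard Izhikevich model with regular-spiking parameters.
# Input current I and time step dt are illustrative assumptions.
def izhikevich(I=10.0, steps=4000, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler-integrate the model and return spike times in ms."""
    v, u = -65.0, b * (-65.0)
    spikes = []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike cutoff
            spikes.append(step * dt)
            v, u = c, u + d        # after-spike reset
    return spikes

times = izhikevich()   # tonic regular spiking under constant drive
```

Varying I shifts the firing rate, which is how one would probe the frequency-dependent refractory effect the abstract mentions.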
The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study
High-level brain function such as memory, classification or reasoning can be
realized by means of recurrent networks of simplified model neurons. Analog
neuromorphic hardware constitutes a fast and energy-efficient substrate for the
implementation of such neural computing architectures in technical applications
and neuroscientific research. The functional performance of neural networks is
often critically dependent on the level of correlations in the neural activity.
In finite networks, correlations are typically inevitable due to shared
presynaptic input. Recent theoretical studies have shown that inhibitory
feedback, abundant in biological neural networks, can actively suppress these
shared-input correlations and thereby enable neurons to fire nearly
independently. For networks of spiking neurons, the decorrelating effect of
inhibitory feedback has so far been explicitly demonstrated only for
homogeneous networks of neurons with linear sub-threshold dynamics. Theory,
however, suggests that the effect is a general phenomenon, present in any
system with sufficient inhibitory feedback, irrespective of the details of the
network structure or the neuronal and synaptic properties. Here, we investigate
the effect of network heterogeneity on correlations in sparse, random networks
of inhibitory neurons with non-linear, conductance-based synapses. Emulations
of these networks on the analog neuromorphic hardware system Spikey allow us to
test the efficiency of decorrelation by inhibitory feedback in the presence of
hardware-specific heterogeneities. The configurability of the hardware
substrate enables us to modulate the extent of heterogeneity in a systematic
manner. We selectively study the effects of shared input and recurrent
connections on correlations in membrane potentials and spike trains. Our
results confirm ...
Comment: 20 pages, 10 figures, supplement
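The shared-input correlations discussed above are commonly quantified as the Pearson correlation of binned spike counts. The following is a generic sketch with invented toy counts, not data from the Spikey emulations.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy binned spike counts for two neurons driven by shared input;
# the numbers are invented for illustration.
counts_a = [3, 1, 4, 1, 5, 2, 6, 2]
counts_b = [2, 1, 5, 0, 4, 3, 5, 1]
r = pearson(counts_a, counts_b)   # near 1 -> strongly correlated
```

Effective decorrelation by inhibitory feedback would push such pairwise coefficients toward zero.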
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic metaphors of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies a wide variety of loads of local
computation and communication. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for bidimensional grids of neural populations, which reflects the modular
organization of the cortex. We explored networks up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with varying
spatial decay constant. For the largest networks the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a 2.40 GHz
clock rate. Network initialization time, memory usage, and execution time
showed good scaling performance from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Comment: 22 pages, 9 figures, 4 tables
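The network sizes quoted in this abstract can be cross-checked with a little arithmetic. The module and neuron counts are taken from the abstract; the per-neuron synapse count is derived here, not stated in the text.

```python
# Scale of the largest DPSNN configuration reported in the abstract.
modules = 192 * 192                  # bidimensional grid of populations
neurons = modules * 1250             # integrate-and-fire neurons in total
total_synapses = 70e9                # "over 70 billion" synapses
synapses_per_neuron = total_synapses / neurons

print(neurons)                       # total neuron count
print(round(synapses_per_neuron))    # roughly 1.5k synapses per neuron
```

So the largest run simulates about 46 million neurons with on the order of 1,500 synapses each.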
Spiking Neural P Systems with Addition/Subtraction Computing on Synapses
Spiking neural P systems (SN P systems, for short) are a class of distributed
and parallel computing models inspired by biological spiking neurons. In this paper,
we introduce a variant called SN P systems with addition/subtraction computing on
synapses (CSSN P systems). CSSN P systems are inspired and motivated by the shunting
inhibition of biological synapses, while incorporating ideas from dynamic graphs and
networks. We consider addition and subtraction operations on synapses, and prove that
CSSN P systems are computationally universal as number generators, under a normal
form (i.e., a simplifying set of restrictions).
Community detection with spiking neural networks for neuromorphic hardware
We present results related to the performance of an algorithm for community
detection which incorporates event-driven computation. We define a mapping
which takes a graph G to a system of spiking neurons. Using a fully connected
spiking neuron system, with both inhibitory and excitatory synaptic
connections, the firing patterns of neurons within the same community can be
distinguished from firing patterns of neurons in different communities. On a
random graph with 128 vertices and known community structure we show that by
using binary decoding and a Hamming-distance based metric, individual
communities can be identified from spike train similarities. Using bipolar
decoding and finite rate thresholding, we verify that inhibitory connections
prevent the spread of spiking patterns.
Comment: Conference paper presented at ORNL Neuromorphic Workshop 2017, 7 pages, 6 figures
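The Hamming-distance comparison of binary spike decodings mentioned above can be sketched as follows. The toy patterns are invented for illustration and do not come from the 128-vertex experiment.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary spike decodings."""
    return sum(x != y for x, y in zip(a, b))

# Invented binary decodings: neurons in the same community should show
# similar firing patterns (small distance), neurons in different
# communities dissimilar ones (large distance).
same_community = hamming([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
diff_community = hamming([1, 0, 1, 1, 0, 1], [0, 1, 0, 0, 1, 0])
```

Thresholding such pairwise distances is one simple way to group vertices into candidate communities.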
Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface
Experiments show that spike-triggered stimulation performed with
Bidirectional Brain-Computer-Interfaces (BBCI) can artificially strengthen
connections between separate neural sites in motor cortex (MC). What are the
neuronal mechanisms responsible for these changes and how does targeted
stimulation by a BBCI shape population-level synaptic connectivity? The present
work describes a recurrent neural network model with probabilistic spiking
mechanisms and plastic synapses capable of capturing both neural and synaptic
activity statistics relevant to BBCI conditioning protocols. When spikes from a
neuron recorded at one MC site trigger stimuli at a second target site after a
fixed delay, the connections between sites are strengthened for spike-stimulus
delays consistent with experimentally derived spike time dependent plasticity
(STDP) rules. However, the relationship between STDP mechanisms at the level
of networks and their modification with neural implants remains poorly
understood. Using our model, we successfully reproduce key experimental
results and use analytical derivations along with novel experimental data. We
then derive optimal operational regimes for BBCIs and formulate predictions
concerning the efficacy of spike-triggered stimulation in different regimes of
cortical activity.
Comment: 35 pages, 9 figures
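The STDP rules invoked above are typically written as an exponential window over the pre/post spike-time difference. This is a generic additive STDP sketch; the amplitudes and time constant are illustrative assumptions, not the study's fitted values.

```python
import math

def stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference delta_t = t_post - t_pre (ms).

    Positive delta_t (pre leads post) potentiates; negative depresses.
    Amplitudes and time constant are illustrative assumptions.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

dw_pot = stdp(10.0)    # pre leads post -> positive weight change
dw_dep = stdp(-10.0)   # post leads pre -> negative weight change
```

In the BBCI setting, the spike-stimulus delay plays the role of delta_t, which is why only certain delays strengthen the connection between sites.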
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Neuromorphic chips embody, in microelectronic devices, computational
principles operating in the nervous system. In this domain it is important to
identify computational primitives that theory and experiments suggest as
generic and reusable cognitive elements. One such element is provided by
attractor dynamics in recurrent networks. Point attractors are equilibrium
states of the dynamics (up to fluctuations), determined by the synaptic
structure of the network; a `basin' of attraction comprises all initial states
leading to a given attractor upon relaxation, hence making attractor dynamics
suitable to implement robust associative memory. The initial network state is
dictated by the stimulus, and relaxation to the attractor state implements the
retrieval of the corresponding memorized prototypical pattern. In a previous
work we demonstrated that a neuromorphic recurrent network of spiking neurons
and suitably chosen, fixed synapses supports attractor dynamics. Here we focus
on learning: activating on-chip synaptic plasticity and using a theory-driven
strategy for choosing network parameters, we show that autonomous learning,
following repeated presentation of simple visual stimuli, shapes a synaptic
connectivity supporting stimulus-selective attractors. Associative memory
develops on chip as the result of the coupled stimulus-driven neural activity
and ensuing synaptic dynamics, with no artificial separation between learning
and retrieval phases.
Comment: submitted to Scientific Reports
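Point-attractor retrieval of the kind described above can be sketched with a tiny Hopfield-style network. This is a standard textbook construction, not the chip's plasticity mechanism; the stored pattern and probe are invented.

```python
def hopfield_retrieve(patterns, probe, steps=5):
    """Recall a stored +/-1 pattern from a corrupted probe.

    Weights follow the Hebbian outer-product rule; updates are
    synchronous sign updates. Toy illustration only.
    """
    n = len(probe)
    # Hebbian weights: w[i][j] = sum over patterns of p[i]*p[j], zero diagonal
    w = [[sum(p[i] * p[j] for p in patterns) if i != j else 0
          for j in range(n)] for i in range(n)]
    state = list(probe)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, -1, -1, 1, -1]
noisy = [1, -1, -1, -1, 1, -1]          # one bit flipped
recalled = hopfield_retrieve([stored], noisy)
```

Relaxation from the corrupted probe back to the stored pattern is exactly the "retrieval of the memorized prototypical pattern" the abstract describes.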