Temporal ordering of input modulates connectivity formation in a developmental neuronal network model of the cortex
Preterm infant brain activity is discontinuous; bursts of activity recorded using EEG (electroencephalography), thought to be driven by subcortical regions, display scale-free properties and exhibit a complex temporal ordering known as long-range temporal correlations (LRTCs). During brain development, activity-dependent mechanisms are essential for synaptic connectivity formation, and abolishing burst activity in animal models leads to weak, disorganised synaptic connectivity. Moreover, synaptic pruning shares similar mechanisms with spike-timing dependent plasticity (STDP), suggesting that the timing of activity may play a critical role in connectivity formation. We investigated, in a computational model of leaky integrate-and-fire neurones, whether the temporal ordering of burst activity within an external driving input could modulate connectivity formation in the network. Connectivity evolved across the course of simulations using an approach analogous to STDP, from networks with initial random connectivity. Small-world connectivity and hub neurones emerged in the network structure, characteristic properties of mature brain networks. Notably, driving the network with an external input which exhibited LRTCs in the temporal ordering of burst activity facilitated the emergence of these network properties, increasing the speed with which they emerged compared with when the network was driven by the same input with the bursts randomly ordered in time. Moreover, the emergence of small-world properties was dependent on the strength of the LRTCs. These results suggest that the temporal ordering of burst activity could play an important role in synaptic connectivity formation and the emergence of small-world topology in the developing brain.
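The connectivity rule analogous to STDP can be illustrated with a standard pair-based exponential kernel. This is a generic sketch, not the paper's model; the amplitudes and time constants below are illustrative values, not those used in the simulations:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (in ms). Pre-before-post (dt > 0) potentiates
    the synapse; post-before-pre (dt <= 0) depresses it. Amplitudes and
    time constants are illustrative, not taken from the paper."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

With a_minus slightly larger than a_plus, uncorrelated spike pairs depress on average, so only consistently ordered activity, such as structured bursts, builds strong connections.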
Storage capacity of phase-coded patterns in sparse neural networks
We study the storage of multiple phase-coded patterns as stable dynamical attractors in recurrent neural networks with sparse connectivity. To determine the synaptic strength of existing connections and store the phase-coded patterns, we introduce a learning rule inspired by spike-timing dependent plasticity (STDP). We find that, after learning, the spontaneous dynamics of the network replays one of the stored dynamical patterns, depending on the network initialization. We study the network capacity as a function of topology, and find that a small-world-like topology may be optimal, as a compromise between the high wiring cost of long-range connections and the capacity increase.
Comment: Accepted for publication in Europhysics Letters
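The idea of storing phase-coded patterns in the coupling matrix can be sketched as follows. Each pattern assigns a firing phase to every neuron, and the weight between two neurons accumulates a kernel of their phase lag. The odd kernel `np.sin` used here is an illustrative stand-in for the paper's asymmetric STDP-derived learning window, not its actual form:

```python
import numpy as np

def store_phase_patterns(phase_list, kernel=np.sin):
    """Build a coupling matrix J from phase-coded patterns.

    phase_list: list of length-N arrays, each giving the firing phase
    phi_i of neuron i in one pattern. J_ij sums kernel(phi_i - phi_j)
    over patterns, so the sign of the coupling encodes which neuron
    should fire first. kernel=np.sin is an illustrative choice only.
    """
    N = len(phase_list[0])
    J = np.zeros((N, N))
    for phi in phase_list:
        dphi = phi[:, None] - phi[None, :]  # pairwise phase lags
        J += kernel(dphi) / N
    np.fill_diagonal(J, 0.0)  # no self-coupling
    return J
```

Because the kernel is odd, the stored couplings are asymmetric, which is what lets the network replay the pattern as a travelling sequence of phases rather than a static attractor.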
Storage of phase-coded patterns via STDP in fully-connected and sparse network: a study of the network capacity
We study the storage and retrieval of phase-coded patterns as stable dynamical attractors in recurrent neural networks, for both an analog and an integrate-and-fire spiking model. The synaptic strength is determined by a learning rule based on spike-timing-dependent plasticity, with an asymmetric time window depending on the relative timing between pre- and post-synaptic activity. We store multiple patterns and study the network capacity.
For the analog model, we find that the network capacity scales linearly with the network size, and that both the capacity and the oscillation frequency of the retrieval state depend on the asymmetry of the learning time window. In addition to fully-connected networks, we study sparse networks, where each neuron is connected only to a small number z << N of other neurons. Connections can be short range, between neighboring neurons placed on a regular lattice, or long range, between randomly chosen pairs of neurons. We find that a small fraction of long-range connections is able to amplify the capacity of the network. This implies that a small-world network topology is optimal, as a compromise between the cost of long-range connections and the capacity increase.
The key result, the storage and retrieval of multiple phase-coded patterns, is also observed in the spiking integrate-and-fire model. We investigate the capacity of the fully-connected spiking network, together with the relation between the oscillation frequency of the retrieval state and the window asymmetry.
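The sparse connectivity described above, mostly short-range lattice links with a small fraction rewired to random long-range targets, can be sketched with a Watts-Strogatz-style construction. This is an illustrative sketch, not the paper's exact wiring procedure:

```python
import numpy as np

def ring_with_shortcuts(N, z, p_long, rng=None):
    """Sparse connectivity matrix: each neuron sends z links, mostly to
    its nearest neighbours on a ring; each link is rewired to a random
    long-range target with probability p_long. Watts-Strogatz-style
    sketch for illustration only."""
    if rng is None:
        rng = np.random.default_rng(0)
    conn = np.zeros((N, N), dtype=bool)
    for i in range(N):
        # z/2 nearest neighbours on each side of the ring
        targets = [(i + d) % N for d in range(1, z // 2 + 1)]
        targets += [(i - d) % N for d in range(1, z // 2 + 1)]
        for j in targets:
            if rng.random() < p_long:
                # rewire to a random long-range target
                j = int(rng.integers(N))
                while j == i or conn[i, j]:
                    j = int(rng.integers(N))
            conn[i, j] = True
    return conn
```

At p_long = 0 the graph is a regular lattice; even a small p_long adds the long-range shortcuts that, per the abstract, amplify the storage capacity.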
Design and Implementation of BCM Rule Based on Spike-Timing Dependent Plasticity
The Bienenstock-Cooper-Munro (BCM) and Spike-Timing-Dependent Plasticity (STDP) rules are two experimentally verified forms of synaptic plasticity in which the alteration of synaptic weight depends upon the rate and the timing, respectively, of pre- and post-synaptic action potentials. Previous studies have reported that under specific conditions, i.e. when random trains of Poissonian-distributed spikes are used as inputs and weight changes occur according to STDP, the BCM rule is an emergent property. Here, the applied STDP rule can be either the classical pair-based STDP rule or the more powerful triplet-based STDP rule. In this paper, we demonstrate the use of two distinct VLSI circuit implementations of STDP to examine whether BCM learning is an emergent property of STDP. These circuits are stimulated with random Poissonian spike trains. The first circuit implements the classical pair-based STDP, while the second circuit realizes a previously described triplet-based STDP rule. Both circuits are simulated using a 0.35 um standard CMOS process model in the HSpice simulator. Simulation results demonstrate that the proposed triplet-based STDP circuit clearly reproduces the threshold-based behaviour of the BCM rule. The results also show similar behaviour for the pair-based STDP circuit in generating the BCM rule.
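The threshold-based behaviour the circuits are tested against can be summarised with the rate-based BCM rule: the weight change flips sign as the post-synaptic rate crosses a sliding modification threshold. This is a minimal software sketch of that reference behaviour, with illustrative parameters (eta, r0), not a model of the circuits themselves:

```python
def bcm_dw(r_pre, r_post, theta, eta=1e-3):
    """Rate-based BCM update: weight change is proportional to the
    pre-synaptic rate times a post-synaptic factor that is negative
    below the modification threshold theta and positive above it."""
    return eta * r_pre * r_post * (r_post - theta)

def sliding_threshold(r_post_history, r0=10.0):
    """Sliding threshold proportional to the recent mean squared
    post-synaptic rate (one common choice; r0 is illustrative)."""
    return sum(r ** 2 for r in r_post_history) / (len(r_post_history) * r0)
```

Under Poissonian spike trains, the expected weight drift of the triplet-based STDP rule takes this BCM-like form, which is the emergent property the simulations probe.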
Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition
A neuromorphic chip that combines CMOS analog spiking neurons and memristive synapses offers a promising solution for brain-inspired computing, as it can provide massive neural network parallelism and density. Previous hybrid analog CMOS-memristor approaches required extensive CMOS circuitry for training, and thus eliminated most of the density advantages gained by the adoption of memristor synapses. Further, they used different waveforms for pre- and post-synaptic spikes, which added undesirable circuit overhead. Here we describe a hardware architecture that can feature a large number of memristor synapses to learn real-world patterns. We present a versatile CMOS neuron that combines integrate-and-fire behavior, drives passive memristors, implements competitive learning in a compact circuit module, and enables in-situ plasticity in the memristor synapses. We demonstrate handwritten-digit recognition with the proposed architecture using transistor-level circuit simulations. As the described neuromorphic architecture is homogeneous, it constitutes a fundamental building block for large-scale, energy-efficient, brain-inspired silicon chips that could lead to next-generation cognitive computing.
Comment: This is a preprint of an article accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 5, no. 2, June 2015
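The integrate-and-fire behaviour of the described CMOS neuron can be sketched behaviourally in software. This is a generic leaky integrate-and-fire model for illustration only; the time constant, threshold, and reset values are assumptions, not the chip's circuit parameters:

```python
def lif_step(v, i_syn, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron with resting
    potential 0. v is the membrane potential, i_syn the synaptic drive.
    Returns (new_v, spiked). Constants are illustrative only."""
    v = v + dt * (-v / tau + i_syn)  # leak toward rest plus input
    if v >= v_th:
        return v_reset, True  # fire and reset
    return v, False
```

In the chip, this integrate-and-fire dynamic is realised in analog CMOS, and the spike waveform itself drives the passive memristor synapses to implement in-situ plasticity.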