Pairwise Analysis Can Account for Network Structures Arising from Spike-Timing Dependent Plasticity
Spike-timing-dependent plasticity (STDP) modifies synaptic strengths based on timing information available locally at each synapse. Despite this locality, it induces global structure within a recurrently connected network. We study such structures both through simulations and by analyzing the effects of STDP on pairwise interactions of neurons. We show how conventional STDP acts as a loop-eliminating mechanism and organizes neurons into in- and out-hubs. Loop elimination increases when depression dominates and turns into loop generation when potentiation dominates. STDP with a temporal window shifted such that coincident spikes cause depression enhances recurrent connections and functions as a strict buffering mechanism that maintains a roughly constant average firing rate. STDP with the opposite temporal shift functions as a loop eliminator at low rates and as a potent loop generator at higher rates. In general, studying pairwise interactions of neurons provides important insights into the structures that STDP can produce in large networks.
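A minimal sketch of the standard pair-based STDP rule with exponential windows makes the loop-elimination argument concrete; the parameter names and values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative pair-based STDP with exponential windows (assumed parameters).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)

def stdp_dw(delta_t):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre (ms). Positive delta_t (pre before post)
    potentiates the synapse; negative delta_t depresses it. A shifted
    window, as discussed above, would replace delta_t by delta_t - d.
    """
    if delta_t > 0:
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * np.exp(delta_t / TAU_MINUS)

# With A_MINUS * TAU_MINUS > A_PLUS * TAU_PLUS, depression dominates:
# reciprocal (loop-forming) connections tend to weaken, while
# unidirectional pre -> post connections are reinforced.
print(stdp_dw(5.0), stdp_dw(-5.0))
```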
STDP in Recurrent Neuronal Networks
Recent results about spike-timing-dependent plasticity (STDP) in recurrently connected neurons are reviewed, with a focus on the relationship between the weight dynamics and the emergence of network structure. In particular, the evolution of synaptic weights is compared and contrasted between two cases: incoming connections to a single neuron, and recurrent connections. The theoretical framework is based upon Poisson neurons with a temporally inhomogeneous firing rate and the asymptotic distribution of weights generated by the learning dynamics. Different network configurations examined in recent studies are discussed, and an overview of the current understanding of STDP in recurrently connected neuronal networks is presented.
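The Poisson-neuron framework mentioned above can be illustrated with the standard thinning algorithm for sampling spikes from a time-varying rate; the rate function in the example is an arbitrary assumption for demonstration.

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, rate_max, t_end, rng=None):
    """Spike times on [0, t_end) for a time-varying rate (Hz), generated
    by Lewis-Shedler thinning. rate_max must bound rate_fn from above."""
    rng = rng or np.random.default_rng()
    spikes, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)          # candidate spike
        if t >= t_end:
            return np.array(spikes)
        if rng.uniform() < rate_fn(t) / rate_max:     # accept w.p. rate/max
            spikes.append(t)

# Example: sinusoidally modulated rate (assumed for illustration).
rate = lambda t: 10.0 + 8.0 * np.sin(2 * np.pi * t)  # Hz, t in seconds
print(inhomogeneous_poisson(rate, rate_max=18.0, t_end=2.0))
```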
Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity
In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening and become functionally specific, to some degree, only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience that would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation-selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable under unspecific background activity. Previous studies have mainly focused on very simple models, where the receptive fields of neurons were essentially determined by feedforward mechanisms, and where the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connectivity and a surplus of bidirectional connections emerge. Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity.
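As a schematic of the kind of Hebbian rule invoked here, a simple rate-based covariance rule can be written as follows (an illustrative textbook form; the paper's actual plasticity rules are spike-based and population-specific):

```latex
\Delta w_{ij} \;=\; \eta\,\bigl(r_i - \bar{r}\bigr)\bigl(r_j - \bar{r}\bigr),
\qquad 0 \le w_{ij} \le w_{\max},
```

so that pairs of neurons whose rates $r_i$, $r_j$ are jointly above the mean rate $\bar{r}$ (i.e., similarly tuned, correlated neurons) strengthen their mutual connections, which is one way a surplus of bidirectional connections can emerge.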
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network with those of systems where the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully explained by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations; in excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
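A small sketch of how pairwise spike-train correlations of this kind are typically quantified, via Pearson correlation of binned spike counts (the bin size and the data are illustrative assumptions):

```python
import numpy as np

def count_correlation(spikes_a, spikes_b, t_end, bin_ms=2.0):
    """Pearson correlation of binned spike counts for two spike trains
    (spike times in ms). The bin size is an assumed parameter."""
    edges = np.arange(0.0, t_end + bin_ms, bin_ms)
    na, _ = np.histogram(spikes_a, edges)
    nb, _ = np.histogram(spikes_b, edges)
    return np.corrcoef(na, nb)[0, 1]

# Illustrative check: two independent Poisson-like trains at ~10 Hz
# should show a correlation close to zero.
rng = np.random.default_rng(0)
t_end = 10_000.0  # ms
a = np.sort(rng.uniform(0, t_end, 100))
b = np.sort(rng.uniform(0, t_end, 100))
print(count_correlation(a, b, t_end))
```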
Statistics of spike trains in conductance-based neural networks: Rigorous results
We consider a conductance-based neural network inspired by the generalized integrate-and-fire model introduced by Rudolph and Destexhe. We show the existence and uniqueness of a Gibbs distribution characterizing spike-train statistics, and we compute the corresponding Gibbs potential explicitly. These results hold in the presence of a time-dependent stimulus and therefore apply to non-stationary dynamics.
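Schematically, a Gibbs distribution for spike trains ties the probability of a spiking pattern at one time to a potential function of the past; the generic conditional form below is standard background, not the explicit potential computed in the paper:

```latex
\mu\bigl[\omega_0 \mid \omega_{-\infty}^{-1}\bigr]
  \;=\; \frac{e^{\phi(\omega)}}{Z\bigl(\omega_{-\infty}^{-1}\bigr)},
```

where $\omega$ denotes a spike-train configuration, $\phi$ the Gibbs potential, and $Z$ a normalization over the possible spiking patterns at time $0$ given the past.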
Topological exploration of artificial neuronal network dynamics
One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics.

We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike-train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
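A compact sketch of this kind of pipeline, using the van Rossum spike-train distance and the ripser package for persistent homology (the distance choice, the feature definition, and all parameters are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np
from ripser import ripser  # pip install ripser

def van_rossum_traces(trains, t_end, tau=10.0, dt=1.0):
    """Convolve each spike train (times in ms) with a causal exponential
    kernel; the L2 distance between traces is the van Rossum distance."""
    t = np.arange(0.0, t_end, dt)
    traces = np.zeros((len(trains), t.size))
    for i, train in enumerate(trains):
        for s in train:
            traces[i] += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
    return traces

def persistence_features(trains, t_end):
    traces = van_rossum_traces(trains, t_end)
    # Pairwise van Rossum distance matrix between neurons.
    diffs = traces[:, None, :] - traces[None, :, :]
    D = np.sqrt((diffs ** 2).sum(axis=-1))
    dgms = ripser(D, distance_matrix=True, maxdim=1)['dgms']
    # Simple topological summary (assumed feature): total persistence
    # of the finite bars in homology dimensions 0 and 1.
    return [np.sum(d[np.isfinite(d[:, 1]), 1] - d[np.isfinite(d[:, 1]), 0])
            for d in dgms]

# Illustrative input: random spike trains for 20 neurons over 1 s.
rng = np.random.default_rng(1)
trains = [np.sort(rng.uniform(0, 1000.0, rng.integers(20, 60)))
          for _ in range(20)]
print(persistence_features(trains, t_end=1000.0))
```

Feature vectors of this form, computed per distance measure, could then be fed to any standard classifier, mirroring the multi-distance strategy described above.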
How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation
This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of sequences of action potentials ("spike trains") produced by neuronal networks, and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane-potential trajectories and, in important explicit examples (the so-called gIF models), actually constitute a symbolic coding. On this basis, we use the thermodynamic formalism from ergodic theory to show that Gibbs distributions are the natural probability measures to describe the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is considerably longer than the characteristic time for neuron dynamics.
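The link between prescribed empirical averages and Gibbs distributions is the familiar maximum-entropy construction; the generic form below is standard background, not the paper's specific result:

```latex
P(\omega) \;=\; \frac{1}{Z}\,
  \exp\!\Bigl(\sum_{k} \lambda_k\, \phi_k(\omega)\Bigr),
\qquad
Z \;=\; \sum_{\omega} \exp\!\Bigl(\sum_{k} \lambda_k\, \phi_k(\omega)\Bigr),
```

where the observables $\phi_k$ are the prescribed quantities (e.g., firing rates or pairwise coincidences) and the Lagrange multipliers $\lambda_k$ are chosen so that the model expectations $\mathbb{E}_P[\phi_k]$ match the empirical averages.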