Homeostatic plasticity and external input shape neural network dynamics
In vitro and in vivo spiking activity clearly differ. Whereas networks in
vitro develop strong bursts separated by periods of very little spiking
activity, in vivo cortical networks show continuous activity. This is puzzling
considering that both networks presumably share similar single-neuron dynamics
and plasticity rules. We propose that the defining difference between in vitro
and in vivo dynamics is the strength of external input. In vitro, networks are
virtually isolated, whereas in vivo every brain area receives continuous input.
We analyze a model of spiking neurons in which the input strength, mediated by
spike rate homeostasis, determines the characteristics of the dynamical state.
In more detail, our analytical and numerical results on various network
topologies show consistently that under increasing input, homeostatic
plasticity generates distinct dynamic states, from bursting to
close-to-critical, reverberating, and irregular states. This implies that the
dynamic state of a neural network is not fixed but can readily adapt to the
input strength. Indeed, our results match experimental spike recordings in
vitro and in vivo: the in vitro bursting behavior is consistent with a state
generated by very low network input (< 0.1%), whereas in vivo activity suggests
that on the order of 1% of recorded spikes are input-driven, resulting in
reverberating dynamics. Importantly, this predicts that one can abolish the
ubiquitous bursts of in vitro preparations, and instead impose dynamics
comparable to in vivo activity by exposing the system to weak long-term
stimulation, thereby opening new paths to establish an in vivo-like assay in
vitro for basic as well as neurological studies.
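To make the proposed mechanism concrete, below is a minimal sketch (not the authors' model; all parameter values are assumed for illustration) of homeostatic rate regulation in a stochastic spiking network. Each neuron fires with a probability set by recurrent drive plus external input h, and a homeostatic rule scales the recurrent coupling w toward a target rate. At steady state w approaches 1 - h/r_target, so very weak input pushes the network toward the near-critical, bursty regime, while stronger input yields lower coupling and more irregular dynamics:

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 100, 50_000
    r_target = 0.01    # target firing probability per neuron per step (assumed)
    eta = 1e-2         # homeostatic learning rate (assumed)

    for h in (1e-5, 1e-3):   # weak ("in vitro"-like) vs. strong ("in vivo"-like) input
        w = 0.5              # recurrent coupling, regulated toward the target rate
        s = np.zeros(N)
        rates, ws = np.empty(T), np.empty(T)
        for t in range(T):
            p = min(w * s.mean() + h, 1.0)                 # spike probability per neuron
            s = (rng.random(N) < p).astype(float)
            rates[t] = s.mean()
            w = max(w + eta * (r_target - rates[t]), 0.0)  # homeostatic scaling
            ws[t] = w
        half = T // 2
        fano = rates[half:].var() / max(rates[half:].mean(), 1e-12)
        print(f"h={h:g}: mean coupling ~{ws[half:].mean():.3f}, "
              f"population-rate Fano factor ~{fano:.2f}")

With weak input the coupling hovers near 1 and the population rate is dominated by bursts (large Fano factor); with strong input the coupling settles well below 1 and activity is comparatively steady, mirroring the in vitro / in vivo contrast described above.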
Impact of network structure and cellular response on spike time correlations
Novel experimental techniques reveal the simultaneous activity of larger and
larger numbers of neurons. As a result, there is increasing interest in the
structure of cooperative -- or correlated -- activity in neural populations,
and in the possible impact of such correlations on the neural code. A
fundamental theoretical challenge is to understand how the architecture of
network connectivity along with the dynamical properties of single cells shape
the magnitude and timescale of correlations. We provide a general approach to
this problem by extending prior techniques based on linear response theory. We
consider networks of general integrate-and-fire cells with arbitrary
architecture, and provide explicit expressions for the approximate
cross-correlation between constituent cells. These correlations depend strongly
on the operating point (input mean and variance) of the neurons, even when
connectivity is fixed. Moreover, the approximations admit an expansion in
powers of the matrices that describe the network architecture. This expansion
can be readily interpreted in terms of paths between different cells. We apply
our results to large excitatory-inhibitory networks, and demonstrate first how
precise balance -- or lack thereof -- between the strengths and timescales of
excitatory and inhibitory synapses is reflected in the overall correlation
structure of the network. We then derive explicit expressions for the average
correlation structure in randomly connected networks. These expressions help to
identify the important factors that shape coordinated neural activity in such
networks.
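The expansion mentioned above can be made concrete with a few lines of linear algebra. In this framework the cross-spectral matrix at each frequency takes the form C = (I - K)^{-1} C0 (I - K)^{-T}, where K collects the linear-response interaction kernels and C0 holds the uncoupled spectra; expanding (I - K)^{-1} = sum_n K^n organizes correlations by path length between cells. A small numerical check (random K at a single frequency, real-valued for simplicity; all values assumed):

    import numpy as np

    rng = np.random.default_rng(1)
    N = 50

    # Random connectivity; K_ij is cell i's linear response to input from cell j,
    # evaluated at a single frequency.
    A = (rng.random((N, N)) < 0.1).astype(float)
    np.fill_diagonal(A, 0.0)
    K = 0.6 * A / np.abs(np.linalg.eigvals(A)).max()   # spectral radius 0.6 < 1

    C0 = np.diag(rng.uniform(0.5, 1.5, N))             # uncoupled power spectra

    I = np.eye(N)
    G = np.linalg.inv(I - K)
    C_full = G @ C0 @ G.T                              # full linear-response result

    # Truncating (I - K)^{-1} = sum_n K^n keeps only contributions from
    # paths of bounded length between cells.
    for order in (1, 2, 4, 8):
        S = sum(np.linalg.matrix_power(K, n) for n in range(order + 1))
        err = np.abs(S @ C0 @ S.T - C_full).max()
        print(f"paths up to length {order}: max truncation error {err:.2e}")

The error shrinks geometrically with the maximum path length, which is what makes the path interpretation of the expansion quantitative.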
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic models of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies a wide variety of loads of local
computation and communication. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for bidimensional grids of neural populations, which reflect the modular
organization of the cortex. We explored networks of up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with varying
spatial decay constant. For the largest networks the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a 2.40 GHz
clock rate. Network initialization time, memory usage, and execution time
showed good scaling performance from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Training deep neural density estimators to identify mechanistic models of neural dynamics
Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine-learning tool that uses deep neural density estimators -- trained using model simulations -- to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
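As a flavor of the approach (a heavily simplified stand-in, not the paper's estimator), the sketch below trains a small network to output a Gaussian approximation of p(theta | x) from simulated pairs, using a toy one-parameter simulator. The paper's method uses richer conditional density estimators and real mechanistic models, but the amortized logic -- train once on simulations, then infer from new data with a single forward pass -- is the same:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy simulator standing in for a mechanistic model: x = theta + noise.
    def simulate(theta):
        return theta + 0.1 * torch.randn_like(theta)

    # Conditional density estimator q(theta | x): a tiny network that outputs
    # the mean and log-std of a Gaussian posterior over the parameter.
    net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    for step in range(2000):
        theta = torch.rand(256, 1) * 2 - 1       # sample parameters from the prior
        x = simulate(theta)                      # run the simulator
        mu, log_std = net(x).chunk(2, dim=1)
        # negative log-likelihood of a Gaussian: trains an amortized posterior
        loss = (0.5 * ((theta - mu) / log_std.exp()) ** 2 + log_std).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # Inference on new data is now a single forward pass:
    mu, log_std = net(torch.tensor([[0.3]])).chunk(2, dim=1)
    print(f"posterior ~ N({mu.item():.3f}, {log_std.exp().item():.3f}^2)")

Here the recovered posterior mean tracks the observation and its width tracks the simulator noise, illustrating how the trained estimator "retrieves the space of parameters compatible with the data" without further simulation.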
Motif Statistics and Spike Correlations in Neuronal Networks
Motifs are patterns of subgraphs of complex networks. We studied the impact
of such patterns of connectivity on the level of correlated, or synchronized,
spiking activity among pairs of cells in a recurrent network model of
integrate-and-fire neurons. For a range of network architectures, we find that the
pairwise correlation coefficients, averaged across the network, can be closely
approximated using only three statistics of network connectivity. These are the
overall network connection probability and the frequencies of two second-order
motifs: diverging motifs, in which one cell provides input to two others, and
chain motifs, in which two cells are connected via a third intermediary cell.
Specifically, the prevalence of diverging and chain motifs tends to increase
correlation. Our method is based on linear response theory, which enables us to
express spiking statistics using linear algebra, and a resumming technique,
which extrapolates from second order motifs to predict the overall effect of
coupling on network correlation. Our motif-based results seek to isolate the
effect of network architecture perturbatively from a known network state.
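The three connectivity statistics can be read directly off an adjacency matrix. With the convention A_ij = 1 when cell j projects to cell i, the diverging- and chain-motif frequencies in excess of chance are second moments of A; the sketch below (illustrative networks with assumed parameters) shows that an Erdos-Renyi graph has no motif excess, while a broad out-degree distribution produces an excess of diverging motifs but not of chains:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 400

    def motif_stats(M):
        """Connection probability and excess diverging/chain motif frequencies."""
        n = len(M)
        p = M.sum() / (n * (n - 1))
        D, E = M @ M.T, M @ M
        norm = n * (n - 1) * (n - 2)
        q_div = (D.sum() - np.trace(D)) / norm - p**2   # diverging: k -> i and k -> j
        q_ch = (E.sum() - np.trace(E)) / norm - p**2    # chain: j -> k -> i
        return p, q_div, q_ch

    # Erdos-Renyi graph: all second-order motifs occur at chance level.
    A = (rng.random((N, N)) < 0.05).astype(float)
    np.fill_diagonal(A, 0.0)

    # Heterogeneous out-degrees (column-wise connection probabilities) create
    # an excess of diverging motifs: some cells provide common input to many pairs.
    p_out = rng.uniform(0.0, 0.1, N)
    B = (rng.random((N, N)) < p_out[None, :]).astype(float)
    np.fill_diagonal(B, 0.0)

    for name, M in (("Erdos-Renyi", A), ("broad out-degree", B)):
        p, q_div, q_ch = motif_stats(M)
        print(f"{name}: p={p:.4f}  q_div={q_div:.5f}  q_ch={q_ch:.5f}")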
Efficient Transmission of Subthreshold Signals in Complex Networks of Spiking Neurons
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances -- which naturally balance excitation and inhibition in the network -- and considering short-term synaptic plasticity affecting such conductances, we find different dynamic phases in the system. These include a memory phase in which a population of neurons remains synchronized, an oscillatory phase in which transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we find efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We show that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity, and complex network topologies, makes it very likely that it also occurs in actual neural systems, as recent psychophysical experiments suggest.
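The core effect -- noise-assisted transmission of a subthreshold signal -- can be illustrated with a single leaky integrate-and-fire neuron (a much-reduced sketch, not the paper's network model; all parameters assumed). The periodic drive alone never reaches threshold, yet the signal-spike correlation is largest at an intermediate noise level and degrades when the noise is too weak or too strong:

    import numpy as np

    rng = np.random.default_rng(3)

    def signal_transmission(noise_std, T=200_000, dt=0.1):
        tau, v_th, v_reset = 10.0, 1.0, 0.0
        t = np.arange(T) * dt
        signal = 0.8 * np.sin(2 * np.pi * t / 100.0)   # subthreshold: peak drive < v_th
        v, spikes = 0.0, np.zeros(T)
        for i in range(T):
            v += dt / tau * (signal[i] - v) + noise_std * np.sqrt(dt) * rng.standard_normal()
            if v >= v_th:
                spikes[i], v = 1.0, v_reset            # spike and reset
        if spikes.sum() == 0:
            return 0.0                                  # no spikes, nothing transmitted
        return np.corrcoef(signal, spikes)[0, 1]

    for sigma in (0.05, 0.2, 0.5, 2.0):
        print(f"noise std {sigma}: signal-spike correlation "
              f"{signal_transmission(sigma):.3f}")

This is the single-cell caricature of the network-level result: with too little noise the signal stays subthreshold, with too much it is drowned out, and in between the spikes faithfully track the stimulus.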
Robust short-term memory without synaptic learning
Short-term memory in the brain cannot in general be explained the way
long-term memory can -- as a gradual modification of synaptic weights -- since
it takes place too quickly. Theories based on some form of cellular
bistability, however, do not seem able to account for the fact that noisy
neurons can collectively store information in a robust manner. We show how a
sufficiently clustered network of simple model neurons can be instantly induced
into metastable states capable of retaining information for a short time (a few
seconds). The mechanism is robust to different network topologies and kinds of
neural model. This could constitute a viable means available to the brain for
sensory and/or short-term memory with no need of synaptic learning. Relevant
phenomena described by neurobiology and psychology, such as local
synchronization of synaptic inputs and power-law statistics of forgetting
avalanches, emerge naturally from this mechanism, and we suggest possible
experiments to test its viability in more biological settings.
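A toy version of the proposed mechanism (a minimal sketch, not the paper's model; connectivity and gain parameters are assumed): fixed clustered connectivity plus activity-dependent global inhibition yields metastable states, so briefly cueing one cluster leaves it selectively active with no synaptic change at all:

    import numpy as np

    rng = np.random.default_rng(4)
    n_clusters, size = 10, 50
    N = n_clusters * size

    # Fixed clustered connectivity: dense within clusters, sparse between.
    labels = np.repeat(np.arange(n_clusters), size)
    same = labels[:, None] == labels[None, :]
    W = (rng.random((N, N)) < np.where(same, 0.30, 0.02)) * 0.1
    np.fill_diagonal(W, 0.0)

    theta, beta, g_inh = 0.5, 8.0, 5.0   # threshold, gain, global inhibition (assumed)
    s = np.zeros(N)
    s[labels == 3] = 1.0                  # transient cue: activate cluster 3 once

    for t in range(200):                  # then let the network run freely
        h = W @ s - g_inh * s.mean()      # recurrent drive minus global inhibition
        prob = 1.0 / (1.0 + np.exp(-beta * (h - theta)))
        s = (rng.random(N) < prob).astype(float)

    print("activity per cluster after 200 steps:",
          np.round(s.reshape(n_clusters, size).mean(axis=1), 2))
    # The cued cluster remains active: a metastable memory held without any
    # modification of the weights W.

In this caricature the cued cluster sustains itself through its dense internal connections while global inhibition keeps the other clusters silent; in a finite, noisy network such a state eventually decays, which is what limits retention to the short timescales the abstract describes.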