Response variability in balanced cortical networks
We study the spike statistics of neurons in a network with dynamically
balanced excitation and inhibition. Our model, intended to represent a generic
cortical column, comprises randomly connected excitatory and inhibitory leaky
integrate-and-fire neurons, driven by excitatory input from an external
population. The high connectivity permits a mean-field description in which
synaptic currents can be treated as Gaussian noise, the mean and
autocorrelation function of which are calculated self-consistently from the
firing statistics of single model neurons. Within this description, we find
that the irregularity of spike trains is controlled mainly by the strength of
the synapses relative to the difference between the firing threshold and the
post-firing reset level of the membrane potential. For moderately strong
synapses we find spike statistics very similar to those observed in primary
visual cortex.
Comment: 22 pages, 7 figures, submitted to Neural Computation
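The diffusion picture in this abstract can be illustrated with a minimal single-neuron sketch: a leaky integrate-and-fire neuron driven by Gaussian current noise, with the irregularity of its output quantified by the coefficient of variation (CV) of the inter-spike intervals. All parameter values below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau_m = 0.1, 20.0            # time step and membrane time constant (ms)
V_th, V_reset = 1.0, 0.0         # firing threshold and post-firing reset level
mu, sigma = 0.8, 0.5             # illustrative mean and std of the Gaussian input
n_steps = 200_000                # 20 s of simulated time

V, spike_times = 0.0, []
for step in range(n_steps):
    # Euler-Maruyama step of the membrane-potential diffusion
    V += dt / tau_m * (mu - V) + sigma * np.sqrt(2 * dt / tau_m) * rng.standard_normal()
    if V >= V_th:
        spike_times.append(step * dt)
        V = V_reset

isi = np.diff(spike_times)
cv = isi.std() / isi.mean()      # CV of inter-spike intervals: ~1 means irregular
```

With a subthreshold mean drive and strong noise, firing is fluctuation-driven and the CV is near one; shrinking sigma relative to the threshold-reset distance (V_th - V_reset) regularizes the output, mirroring the abstract's claim about synaptic strength.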
Tracking dynamic interactions between structural and functional connectivity: a TMS/EEG-dMRI study
Transcranial magnetic stimulation (TMS) combined with neuroimaging techniques makes it possible to measure the effects of a direct perturbation of the brain. When coupled with high-density electroencephalography (TMS/hd-EEG), TMS pulses have revealed electrophysiological signatures of different cortical modules in health and disease. However, the neural underpinnings of these signatures remain unclear. Here, by jointly analyzing cortical responses to TMS and diffusion magnetic resonance imaging (dMRI) tractography, we investigated the relationship between functional and structural features of different cortical modules in a cohort of awake healthy volunteers. For each subject, we computed directed functional connectivity between cortical areas from the source-reconstructed TMS/hd-EEG recordings and correlated it with the corresponding structural connectivity matrix extracted from dMRI tractography, in three frequency bands (alpha, beta, gamma) and for two stimulation sites (left precuneus and left premotor cortex). Each stimulated area responded to TMS mainly in a specific frequency band: beta for the precuneus and gamma for the premotor cortex. We also observed a temporary decrease in the whole-brain correlation between directed functional connectivity and structural connectivity after TMS in all frequency bands. Notably, when focusing on the stimulated areas only, we found that the structure-function correlation significantly increased over time in the premotor area contralateral to the TMS site. Our study highlights the major role played by different cortical oscillations in the mechanisms for integration and segregation of information in the human brain.
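The central quantity in such analyses is the correlation between the off-diagonal entries of a functional connectivity (FC) matrix and the corresponding structural connectivity (SC) matrix. A minimal sketch with purely synthetic matrices (all values hypothetical; a real pipeline would use dMRI streamline counts for SC and source-reconstructed TMS/hd-EEG estimates for FC):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 20   # number of cortical areas (illustrative)

# hypothetical structural connectivity, symmetric with empty diagonal
SC = np.abs(rng.standard_normal((N, N)))
SC = (SC + SC.T) / 2
np.fill_diagonal(SC, 0.0)

# hypothetical directed functional connectivity, partly shaped by structure
FC = 0.5 * SC + 0.5 * np.abs(rng.standard_normal((N, N)))

mask = ~np.eye(N, dtype=bool)                    # off-diagonal entries only
r = np.corrcoef(SC[mask], FC[mask])[0, 1]        # structure-function correlation
```

Tracking how r evolves in time windows after the TMS pulse, per frequency band, gives the kind of dynamic structure-function coupling the study reports.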
Ising Models for Inferring Network Structure From Spike Data
Now that spike trains from many neurons can be recorded simultaneously, there
is a need for methods to decode these data to learn about the networks that
these neurons are part of. One approach to this problem is to adjust the
parameters of a simple model network to make its spike trains resemble the data
as much as possible. The connections in the model network can then give us an
idea of how the real neurons that generated the data are connected and how they
influence each other. In this chapter we describe how to do this for the
simplest kind of model: an Ising network. We derive algorithms for finding the
best model connection strengths for fitting a given data set, as well as faster
approximate algorithms based on mean field theory. We test the performance of
these algorithms on data from model networks and experiments.
Comment: To appear in "Principles of Neural Coding", edited by Stefano Panzeri and Rodrigo Quian Quiroga
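For the equilibrium case, the simplest mean-field inversion of the kind the chapter derives reduces to reading couplings off the inverse covariance matrix: J_ij ≈ -(C^-1)_ij for i ≠ j. A self-contained sketch, with illustrative parameters (weak couplings, Glauber sampling) rather than any specific data set:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 50_000
# hypothetical weak, symmetric ground-truth couplings
J_true = 0.1 * rng.standard_normal((N, N))
J_true = (J_true + J_true.T) / 2
np.fill_diagonal(J_true, 0.0)

# Glauber sampling from the equilibrium Ising model, spins in {-1, +1}
s = rng.choice([-1, 1], size=N).astype(float)
samples = np.empty((T, N))
for t in range(T):
    i = rng.integers(N)
    h = J_true[i] @ s                                        # local field on spin i
    s[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * h)) else -1.0
    samples[t] = s

# naive mean-field inversion: couplings from the inverse covariance matrix
C = np.cov(samples.T)
J_nmf = -np.linalg.inv(C)
np.fill_diagonal(J_nmf, 0.0)
```

For weak couplings the recovered J_nmf correlates well with J_true; the exact maximum-likelihood algorithms the chapter describes improve on this at stronger coupling.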
Locking of correlated neural activity to ongoing oscillations
Population-wide oscillations are ubiquitously observed in mesoscopic signals
of cortical activity. In these network states a global oscillatory cycle
modulates the propensity of neurons to fire. Synchronous activation of neurons
has been hypothesized to constitute a separate channel of information
processing in the brain. A salient question is therefore whether and how
oscillations interact with spike synchrony, and to what extent these channels
can be considered separate.
Experiments indeed showed that correlated spiking co-modulates with the static
firing rate and is also tightly locked to the phase of beta-oscillations. While
the dependence of correlations on the mean rate is well understood in
feed-forward networks, it remains unclear why and by which mechanisms
correlations tightly lock to an oscillatory cycle. We here demonstrate that
such correlated activation of pairs of neurons is qualitatively explained by
periodically-driven random networks. We identify the mechanisms by which
covariances depend on a driving periodic stimulus. Mean-field theory combined
with linear response theory yields closed-form expressions for the
cyclostationary mean activities and pairwise zero-time-lag covariances of
binary recurrent random networks. Two distinct mechanisms cause time-dependent
covariances: the modulation of the susceptibility of single neurons (via the
external input and network feedback) and the time-varying variances of single
unit activities. For some parameters, the effectively inhibitory recurrent
feedback leads to resonant covariances even if mean activities show
non-resonant behavior. Our analytical results open the way to a quantitative
analysis of time-modulated synchronous activity.
Comment: 57 pages, 12 figures, published version
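A minimal simulation, with illustrative parameters rather than those of the paper, shows the two ingredients the abstract describes: an effectively inhibitory binary network under a periodic drive, whose trial-averaged (cyclostationary) mean activity and zero-time-lag covariances both lock to the driving cycle:

```python
import numpy as np

rng = np.random.default_rng(2)
N, steps, trials, period = 50, 200, 200, 50
J = -0.5 / N * np.ones((N, N))             # effectively inhibitory recurrent feedback
np.fill_diagonal(J, 0.0)
h_ext = 0.3 * np.sin(2 * np.pi * np.arange(steps) / period)   # periodic drive

acts = np.zeros((trials, steps, N))
for tr in range(trials):
    s = rng.integers(0, 2, N).astype(float)
    for t in range(steps):
        # Glauber-type update probability under recurrent feedback plus drive
        p = 1 / (1 + np.exp(-4 * (J @ s + h_ext[t])))
        s = (rng.random(N) < p).astype(float)
        acts[tr, t] = s

m_t = acts.mean(axis=(0, 2))               # cyclostationary mean activity
# zero-time-lag covariance of one neuron pair, resolved over the cycle
cov_t = np.array([np.cov(acts[:, t, 0], acts[:, t, 1])[0, 1] for t in range(steps)])
```

Both m_t and cov_t oscillate with the drive, the simulated analogue of covariances locking to the oscillatory cycle; the paper's mean-field plus linear response theory gives these quantities in closed form.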
VIOLA - A multi-purpose and web-based visualization tool for neuronal-network simulation output
Neuronal network models and corresponding computer simulations are invaluable
tools to aid the interpretation of the relationship between neuron properties,
connectivity and measured activity in cortical tissue. Spatiotemporal patterns
of activity propagating across the cortical surface as observed experimentally
can for example be described by neuronal network models with layered geometry
and distance-dependent connectivity. The interpretation of the resulting stream
of multi-modal and multi-dimensional simulation data calls for integrating
interactive visualization steps into existing simulation-analysis workflows.
Here, we present a set of interactive visualization concepts called views for
the visual analysis of activity data in topological network models, and a
corresponding reference implementation VIOLA (VIsualization Of Layer Activity).
The software is a lightweight, open-source, web-based and platform-independent
application combining and adapting modern interactive visualization paradigms,
such as coordinated multiple views, for massively parallel neurophysiological
data. For a use-case demonstration we consider spiking activity data of a
two-population, layered point-neuron network model subject to a spatially
confined excitation originating from an external population. With the multiple
coordinated views, an explorative and qualitative assessment of the
spatiotemporal features of neuronal activity can be performed upfront of a
detailed quantitative data analysis of specific aspects of the data.
Furthermore, ongoing efforts including the European Human Brain Project aim at
providing online user portals for integrated model development, simulation,
analysis and provenance tracking, wherein interactive visual analysis tools are
one component. Browser-compatible, web-technology based solutions are therefore
required. Within this scope, VIOLA provides a first prototype.
Comment: 38 pages, 10 figures, 3 tables
The Effect of Nonstationarity on Models Inferred from Neural Data
Neurons subject to a common non-stationary input may exhibit a correlated
firing behavior. Correlations in the statistics of neural spike trains also
arise as the effect of interaction between neurons. Here we show that these two
situations can be distinguished, with machine learning techniques, provided the
data are rich enough. In order to do this, we study the problem of inferring a
kinetic Ising model, stationary or nonstationary, from the available data. We
apply the inference procedure to two data sets: one from salamander retinal
ganglion cells and the other from a realistic computational cortical network
model. We show that many aspects of the concerted activity of the salamander
retinal neurons can be traced simply to the external input. A model of
non-interacting neurons subject to a non-stationary external field outperforms
a model with stationary input with couplings between neurons, even accounting
for the differences in the number of model parameters. When couplings are added
to the non-stationary model, for the retinal data, little is gained: the
inferred couplings are generally not significant. Likewise, the distribution of
the sizes of sets of neurons that spike simultaneously and the frequency of
spike patterns as function of their rank (Zipf plots) are well-explained by an
independent-neuron model with time-dependent external input, and adding
connections to such a model does not offer significant improvement. For the
cortical model data, robust couplings, well correlated with the real
connections, can be inferred using the non-stationary model. Adding connections
to this model slightly improves the agreement with the data for the probability
of synchronous spikes but hardly affects the Zipf plot.
Comment: version in press in J Stat Mech
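The abstract's main point, that correlations can arise from a common nonstationary input rather than from couplings, can be sketched with independent binary neurons sharing a slowly varying field; ranking the frequencies of population spike patterns then gives the data for a Zipf plot. All parameters are illustrative:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
N, T = 8, 20_000
# independent neurons sharing a slowly varying external field (no couplings)
h_t = 0.8 * np.sin(2 * np.pi * np.arange(T) / 1000) - 1.0
p_t = 1 / (1 + np.exp(-h_t))                       # time-dependent spike probability
spikes = (rng.random((T, N)) < p_t[:, None]).astype(int)

# the common input induces positive pairwise covariances despite zero couplings
C = np.cov(spikes.T)
mean_offdiag_cov = C[~np.eye(N, dtype=bool)].mean()

# Zipf plot data: frequency of each population spike pattern versus its rank
freqs = sorted(Counter(map(tuple, spikes)).values(), reverse=True)
```

An inference procedure that ignores the nonstationary field would misattribute these covariances to couplings, which is exactly the confound the paper's model comparison is designed to resolve.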
State Dependence of Stimulus-Induced Variability Tuning in Macaque MT
Behavioral states marked by varying levels of arousal and attention modulate
some properties of cortical responses (e.g. average firing rates or pairwise
correlations), yet it is not fully understood what drives these response
changes and how they might affect downstream stimulus decoding. Here we show
that changes in state modulate the tuning of response variance-to-mean ratios
(Fano factors) in a fashion that is predicted neither by a Poisson spiking
model nor by changes in the mean firing rate, with a substantial effect on
stimulus discriminability. We recorded motion-sensitive neurons in middle
temporal cortex (MT) in two states: alert fixation and light, opioid
anesthesia. Anesthesia tended to lower average spike counts, without decreasing
trial-to-trial variability compared to the alert state. Under anesthesia,
within-trial fluctuations in excitability were correlated over longer time
scales compared to the alert state, creating supra-Poisson Fano factors. In
contrast, alert-state MT neurons have higher mean firing rates and largely
sub-Poisson variability that is stimulus-dependent and cannot be explained by
firing rate differences alone. The absence of such stimulus-induced variability
tuning in the anesthetized state suggests different sources of variability
between states. A simple model explains state-dependent shifts in the
distribution of observed Fano factors via a suppression in the variance of gain
fluctuations in the alert state. A population model with stimulus-induced
variability tuning and behaviorally constrained information-limiting
correlations explores the potential enhancement in stimulus discriminability by
the cortical population in the alert state.
Comment: 36 pages, 18 figures
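The gain-fluctuation mechanism can be sketched as a doubly stochastic model: Poisson spiking whose rate is scaled by a trial-to-trial gain yields supra-Poisson Fano factors, while suppressing the gain variance (as proposed for the alert state) restores near-Poisson counts. Rates, durations, and the gain spread below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
trials, rate, dur = 2000, 20.0, 1.0      # trials, firing rate (spikes/s), window (s)

def fano(counts):
    """Variance-to-mean ratio of trial spike counts."""
    return counts.var() / counts.mean()

# "anesthetized-like": Poisson counts with trial-to-trial gain fluctuations
gains = np.clip(rng.normal(1.0, 0.3, trials), 0.0, None)
counts_fluct = rng.poisson(gains * rate * dur)

# "alert-like": suppressed gain variance -> near-Poisson counts
counts_fixed = rng.poisson(rate * dur, size=trials)

F_fluct, F_fixed = fano(counts_fluct), fano(counts_fixed)
```

For a pure Poisson process the Fano factor is 1 regardless of rate; gain fluctuations add a variance term that grows with the mean count, producing the supra-Poisson values the abstract attributes to the anesthetized state.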
Self-organization of network dynamics into local quantized states
Self-organization and pattern formation in network-organized systems emerges
from the collective activation and interaction of many interconnected units. A
striking feature of these non-equilibrium structures is that they are often
localized and robust: only a small subset of the nodes, or cell assembly, is
activated. Understanding the role of cell assemblies as basic functional units
in neural networks and socio-technical systems emerges as a fundamental
challenge in network theory. A key open question is how these elementary
building blocks emerge, and how they operate, linking structure and function in
complex networks. Here we show that a network analogue of the Swift-Hohenberg
continuum model---a minimal-ingredients model of nodal activation and
interaction within a complex network---is able to produce a complex suite of
localized patterns. Hence, the spontaneous formation of robust operational cell
assemblies in complex networks can be explained as the result of
self-organization, even in the absence of synaptic reinforcements. Our results
show that these self-organized, local structures can provide robust functional
units to understand natural and socio-technical network-organized processes.
Comment: 11 pages, 4 figures
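A hedged sketch of a network Swift-Hohenberg dynamics: replacing the continuum Laplacian by minus the graph Laplacian turns du/dt = [eps - (q^2 + nabla^2)^2]u - u^3 into a nodal dynamics on an arbitrary network. The random graph, the choice of q^2, and all parameters below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 60
# random (Erdos-Renyi-like) network and its graph Laplacian
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
Lap = np.diag(A.sum(axis=1)) - A

# network Swift-Hohenberg: du/dt = eps*u - (q2*I - Lap)^2 u - u^3,
# obtained from the continuum equation by the substitution nabla^2 -> -Lap
eps, dt = 0.5, 0.01
q2 = np.linalg.eigvalsh(Lap)[N // 2]   # tune q^2 so a band of Laplacian modes is unstable
Op = eps * np.eye(N) - (q2 * np.eye(N) - Lap) @ (q2 * np.eye(N) - Lap)

u = 0.01 * rng.standard_normal(N)      # small random initial activation
for _ in range(5000):
    u = u + dt * (Op @ u - u**3)       # explicit Euler integration to saturation
```

The cubic term saturates the linearly unstable modes, and the resulting steady state is supported unevenly across nodes, a simple analogue of the robust, localized activation patterns the abstract describes.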