
    Signatures of Synchrony in Pairwise Count Correlations

    Concerted neural activity can reflect specific features of sensory stimuli or behavioral tasks. Correlation coefficients and count correlations are frequently used to measure correlations between neurons, design synthetic spike trains and build population models. But are correlation coefficients always a reliable measure of input correlations? Here, we consider a stochastic model for the generation of correlated spike sequences that replicates neuronal pairwise correlations in many important respects. We investigate under which conditions the correlation coefficients reflect the degree of input synchrony and when they can be used to build population models. We find that correlation coefficients can be a poor indicator of input synchrony for some types of input correlations. In particular, count correlations computed for large time bins can vanish despite the presence of input correlations. These findings suggest that network models and potential coding schemes of neural population activity need to incorporate the temporal properties of correlated inputs and take into consideration the regimes of firing rates and correlation strengths, to ensure that their building blocks are unambiguous measures of synchrony.
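    A minimal sketch of how such pairwise count correlations are computed, using a simple thinning scheme (spikes copied from a common "mother" process with probability p — an illustrative assumption, not the authors' specific model). For exactly coincident copies the count correlation stays near p across bin sizes; the temporally structured inputs discussed in the abstract are what can break this bin-size invariance.

```python
import numpy as np

def count_correlation(spikes_a, spikes_b, t_max, bin_size):
    """Pearson correlation of spike counts binned at bin_size."""
    n_bins = int(round(t_max / bin_size))
    ca, _ = np.histogram(spikes_a, bins=n_bins, range=(0.0, t_max))
    cb, _ = np.histogram(spikes_b, bins=n_bins, range=(0.0, t_max))
    return np.corrcoef(ca, cb)[0, 1]

rng = np.random.default_rng(0)
t_max, rate, p = 1000.0, 20.0, 0.5  # duration (s), target rate (Hz), copy prob.
# Mother process at rate/p so each thinned train has the target rate.
mother = np.sort(rng.uniform(0.0, t_max, rng.poisson(rate / p * t_max)))
train_a = mother[rng.random(mother.size) < p]
train_b = mother[rng.random(mother.size) < p]

# Count correlation at several bin sizes (seconds); here it stays near p.
rhos = {bs: count_correlation(train_a, train_b, t_max, bs)
        for bs in (0.01, 0.1, 1.0)}
```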

    Correlations and Synchrony in Threshold Neuron Models

    We study how threshold model neurons transfer temporal and interneuronal input correlations to correlations of spikes. We find that the low-common-input regime is governed by firing-rate-dependent spike correlations that are sensitive to the detailed structure of input correlation functions. In the high-common-input regime, the spike correlations are insensitive to the firing rate and exhibit a universal peak shape independent of input correlations. Rate-heterogeneous pairs driven by common inputs in general exhibit asymmetric spike correlations. All predictions are confirmed in in vitro experiments with cortical neurons driven by synthesized fluctuating input currents. (Comment: 5 pages, 10 figures)
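    The correlation-transfer setup can be illustrated with the simplest static threshold model: two units receive Gaussian inputs sharing a common fraction c and "spike" whenever the input exceeds a threshold theta (a toy sketch; c, theta and the static-threshold simplification are assumptions, not the authors' model). Thresholding attenuates the input correlation, so the output spike correlation lies below c.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, theta = 200_000, 0.3, 1.0  # samples, common-input fraction, threshold
shared = rng.standard_normal(n)
x_a = np.sqrt(c) * shared + np.sqrt(1.0 - c) * rng.standard_normal(n)
x_b = np.sqrt(c) * shared + np.sqrt(1.0 - c) * rng.standard_normal(n)
spikes_a = (x_a > theta).astype(float)  # binary "spike" per time step
spikes_b = (x_b > theta).astype(float)
rho_out = np.corrcoef(spikes_a, spikes_b)[0, 1]
# rho_out is positive but smaller than the input correlation c
```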

    Modulation of working memory duration by synaptic and astrocytic mechanisms

    Short-term synaptic plasticity and modulations of the presynaptic vesicle release rate are key components of many working memory (WM) models. At the same time, an increasing number of studies suggest a potential role of astrocytes in modulating higher cognitive functions such as WM through their influence on synaptic transmission. What influence astrocytic signaling could have on the stability and duration of WM representations, however, is still unclear. Here, we introduce a slow, activity-dependent astrocytic regulation of the presynaptic release probability in a synaptic attractor model of WM. We compare and analyze simulations of a simple WM protocol in firing-rate and spiking networks with and without astrocytic regulation, and underpin our observations with analyses of the phase-space dynamics in the rate network. We find that the duration and stability of working memory representations are altered by astrocytic signaling and by noise. We show that astrocytic signaling modulates the mean duration of WM representations. Moreover, if the astrocytic regulation is strong, a slow presynaptic timescale introduces a ‘window of vulnerability’ during which WM representations are easily disrupted by noise before being stabilized. We identify two mechanisms through which noise from different sources in the network can either stabilize or destabilize WM representations. Our findings suggest (i) that astrocytic regulation can act as a crucial determinant of the duration of WM representations in synaptic attractor models of WM, and (ii) that astrocytic signaling could facilitate different mechanisms for volitional top-down control of WM representations and their duration.
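    The core idea — a slow, activity-dependent variable that modulates release probability and thereby sets memory duration — can be sketched in a minimal bistable rate model. All parameters (the gain function, recurrent weight w, baseline release probability u0, astrocytic gain k, and timescales) are illustrative assumptions, not the authors' fitted model: a stimulus switches the population into a persistent high-rate state, and the slow astrocytic variable gradually lowers the release probability until the memory state is terminated.

```python
import numpy as np

def f(x):
    """Sigmoidal population gain function (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-(x - 3.0)))

dt = 0.5                      # time step (ms)
tau_r, tau_a = 10.0, 2000.0   # fast rate vs. slow astrocytic timescale (ms)
w, u0, k = 14.0, 0.5, 0.6     # recurrent weight, baseline release prob., astro gain
r, a = 0.0, 0.0
rates = []
for step in range(int(4000 / dt)):
    t = step * dt
    i_ext = 5.0 if 500 <= t < 700 else 0.0  # brief memory-loading stimulus
    u = u0 * (1.0 - k * a)                  # astrocyte lowers release probability
    r += dt / tau_r * (-r + f(w * u * r + i_ext))
    a += dt / tau_a * (-a + r)              # slow, activity-dependent astro signal
    rates.append(r)
# The high-rate state persists after stimulus offset and is terminated once
# the slow astrocytic variable has reduced u below the bistable range.
```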

    Linking spontaneous and stimulated spine dynamics

    Our brains continuously acquire and store memories through synaptic plasticity. However, spontaneous synaptic changes can also occur and pose a challenge for maintaining stable memories. Despite fluctuations in synapse size, recent studies have shown that key population-level synaptic properties remain stable over time. This raises the question of how local synaptic plasticity affects the global population-level synaptic size distribution, and whether individual synapses undergoing plasticity escape the stable distribution to encode specific memories. To address this question, we (i) studied spontaneously evolving spines and (ii) induced synaptic potentiation at selected sites while observing the spine distribution pre- and post-stimulation. We designed a stochastic model to describe how the current size of a synapse affects its future size under baseline and stimulation conditions, and how these local effects give rise to population-level synaptic shifts. Our study offers insights into how seemingly spontaneous synaptic fluctuations and local plasticity both contribute to population-level synaptic dynamics.
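    A common way to model how a synapse's current size shapes its future size is a Kesten-type process: the next size is a multiplicative factor times the current size plus an additive term. This is a generic sketch of that model class, not the authors' fitted model, and the parameter values below are illustrative assumptions. Despite ongoing per-spine fluctuations, the population settles into a stable, right-skewed size distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
n_spines, steps = 20_000, 3_000
s = np.full(n_spines, 1.0)  # initial spine sizes (arbitrary units)
for _ in range(steps):
    eps = rng.normal(0.9, 0.1, n_spines)   # multiplicative size fluctuation
    eta = rng.normal(0.1, 0.02, n_spines)  # additive growth component
    s = np.maximum(eps * s + eta, 0.0)     # sizes stay non-negative

# Population statistics of the stationary size distribution
mean_size = s.mean()
skew = ((s - mean_size) ** 3).mean() / s.std() ** 3  # positive: right-skewed
```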

    How to incorporate biological insights into network models and why it matters

    Due to the staggering complexity of the brain and its neural circuitry, neuroscientists rely on the analysis of mathematical models to elucidate its function. From Hodgkin and Huxley's detailed description of the action potential in 1952 to today, new theories and increasing computational power have opened up novel avenues to study how neural circuits implement the computations that underlie behaviour. Computational neuroscientists have developed many models of neural circuits that differ in complexity, biological realism or emergent network properties. With recent advances in experimental techniques for detailed anatomical reconstructions or large-scale activity recordings, rich biological data have become increasingly available. The challenge when building network models is to reflect experimental results, either through a high level of detail or by finding an appropriate level of abstraction. Meanwhile, machine learning has facilitated the development of artificial neural networks, which are trained to perform specific tasks. While they have proven successful at achieving task-oriented behaviour, they are often abstract constructs that differ in many features from the physiology of brain circuits. Thus, it is unclear whether the mechanisms underlying computation in biological circuits can be investigated by analysing artificial networks that accomplish the same function but differ in their mechanisms. Here, we argue that building biologically realistic network models is crucial to establishing causal relationships between neurons, synapses, circuits and behaviour. More specifically, we advocate for network models that consider the connectivity structure and the recorded activity dynamics while evaluating task performance.

    In vivo extracellular recordings of thalamic and cortical visual responses reveal V1 connectivity rules

    The brain’s connectome provides the scaffold for canonical neural computations. However, a comparison of connectivity studies in the mouse primary visual cortex (V1) reveals that the average number and strength of connections between specific neuron types can vary. Can variability in V1 connectivity measurements coexist with canonical neural computations? We developed a theory-driven approach to deduce V1 network connectivity from visual responses in mouse V1 and visual thalamus (dLGN). Our method revealed that the same recorded visual responses were captured by multiple connectivity configurations. Remarkably, the magnitude and selectivity of connectivity weights followed a specific order across most of the inferred connectivity configurations. We argue that this order stems from the specific shapes of the recorded contrast response functions and the contrast invariance of orientation tuning. Notably, despite variability across connectivity studies, connectivity weights computed from individual published connectivity reports followed the order we identified with our method, suggesting that the relations between the weights, rather than their magnitudes, represent a connectivity motif supporting canonical V1 computations.