
    A study of dependency features of spike trains through copulas

    Simultaneous recordings from many neurons hide important information, and despite advances in statistical and machine learning techniques, the connections characterizing the network generally remain undiscovered. Discerning the presence of direct links between neurons from data is still an incompletely solved problem. To enlarge the set of tools for detecting the underlying network structure, we propose here the use of copulas, pursuing a research direction we started in [1]. Here, we adapt their use to distinguish different types of connections in a very simple network. Our proposal consists of choosing suitable random intervals in pairs of spike trains that determine the shapes of their copulas. We show that this approach allows different types of dependencies to be detected. We illustrate the features of the proposed method on synthetic data from suitably connected networks of two or three formal neurons, either directly connected or influenced by the surrounding network. We show how a smart choice of pairs of random times, together with the use of empirical copulas, makes it possible to discern between direct and indirect interactions.
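
    A minimal sketch of this empirical-copula idea, assuming details the abstract does not give: the toy echo construction, the use of forward waiting times from random sample times, and all parameters are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000.0  # observation window (s)

# Toy pair with a direct dependency: train B echoes half of train A's spikes
# with a small synaptic-like delay, plus independent background spikes.
spikes_a = np.sort(rng.uniform(0.0, T, 5000))
echo = rng.random(spikes_a.size) < 0.5
spikes_b = np.sort(np.concatenate([
    spikes_a[echo] + rng.exponential(0.02, echo.sum()),
    rng.uniform(0.0, T, 2500),
]))

def next_spike_wait(t, spikes):
    """Forward waiting time from each time in t to the next spike."""
    idx = np.searchsorted(spikes, t)
    safe = np.minimum(idx, spikes.size - 1)
    return np.where(idx < spikes.size, spikes[safe] - t, np.inf)

t0 = rng.uniform(0.0, T - 1.0, 2000)  # random sampling times
wa = next_spike_wait(t0, spikes_a)
wb = next_spike_wait(t0, spikes_b)
keep = np.isfinite(wa) & np.isfinite(wb)
wa, wb = wa[keep], wb[keep]

# Empirical copula: rank-transform each margin to (0, 1).
u = wa.argsort().argsort() / (wa.size + 1.0)
v = wb.argsort().argsort() / (wb.size + 1.0)

# Independent trains give a flat copula; mass along the diagonal (rank
# correlation well above zero) signals dependence between the trains.
print(f"rank correlation of waiting times: {np.corrcoef(u, v)[0, 1]:.3f}")
```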

    A generative spike train model with time-structured higher order correlations

    Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
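
    A minimal sketch of the thinning-and-shift construction, not the paper's full GTaS parameterization: a mother Poisson process is marked, each spike is copied into a random subset of child trains, and each copy is shifted by a subset-specific delay. The uniform marking distribution and the delay values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, mother_rate, n_children = 100.0, 20.0, 3

# Mother Poisson process on [0, T].
mother = np.sort(rng.uniform(0.0, T, rng.poisson(mother_rate * T)))

# Marking: each mother spike is copied into one non-empty subset of child
# trains, drawn from an assumed (here uniform) distribution over subsets
# encoded as bitmasks.
subsets = np.arange(1, 2 ** n_children)
marks = rng.choice(subsets, size=mother.size)

# Assumed subset-specific delays: jointly marked spikes appear in the
# children with fixed relative lags, creating time-structured correlations.
delays = rng.uniform(0.0, 0.01, size=(2 ** n_children, n_children))

children = [[] for _ in range(n_children)]
for t, m in zip(mother, marks):
    for c in range(n_children):
        if m & (1 << c):  # child c belongs to the marked subset
            children[c].append(t + delays[m, c])
children = [np.sort(np.asarray(ch)) for ch in children]

# Each child is a thinned, shifted copy of a Poisson process and hence
# remains marginally Poisson; the correlations live in the joint structure.
print("child rates (Hz):", [round(len(ch) / T, 1) for ch in children])
```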

    Generation of theta activity (RSA) in the cingulate cortex of the rat

    Unit activity recorded from the cingulate cortex during theta rhythm shows periodic trains of spikes that are phase-locked to the local theta field potential waves. These cortical theta units were also shown to be correlated with hippocampal theta units. These findings, along with the fact that theta field potentials show a phase reversal within the cingulate cortex, lead to the conclusion that this cortical area is a source of theta activity.

    Evidence for Information Processing in the Brain

    Many cognitive scientists and neuroscientists attempt to assign biological functions to brain structures. To achieve this end, scientists perform experiments that relate the physical properties of brain structures to organism-level abilities, behaviors, and environmental stimuli. Researchers make use of various measuring instruments and methodological techniques to obtain this kind of relational evidence, ranging from single-unit electrophysiology and optogenetics to whole-brain functional MRI. Each experiment is intended to identify brain function. However, seemingly independent of experimental evidence, many cognitive scientists, neuroscientists, and philosophers of science take it as scientific fact that the brain processes information. In this work we analyze categories of relational evidence and find that although physical features of specific brain areas selectively covary with external stimuli and abilities, and although the brain shows reliable causal organization, there is no direct evidence supporting the claim that information processing is a natural function of the brain. We conclude that the belief in brain information processing adds little to cognitive science and functions primarily as a metaphor for the efficient communication of neuroscientific data.

    Which spike train distance is most suitable for distinguishing rate and temporal coding?

    Background: It is commonly assumed in neuronal coding that repeated presentations of a stimulus to a coding neuron elicit similar responses. One common way to assess similarity is through spike train distances. These can be divided into spike-resolved, such as the Victor-Purpura and the van Rossum distance, and time-resolved, e.g. the ISI-, the SPIKE- and the RI-SPIKE-distance. New Method: We use independent steady-rate Poisson processes as surrogates for spike trains with fixed rate and no timing information to address two basic questions: How does the sensitivity of the different spike train distances to temporal coding depend on the rates of the two processes, and how do the distances deal with very low rates? Results: Spike-resolved distances always contain rate information, even for parameters indicating time coding. This is an issue for reasonably high rates but beneficial for very low rates. In contrast, the operational range for detecting time coding of time-resolved distances is superior at normal rates, but these measures produce artefacts at very low rates. The RI-SPIKE-distance is the only measure that is sensitive to timing information alone. Comparison with Existing Methods: While our results on rate-dependent expectation values for the spike-resolved distances agree with Chicharro et al. (2011), we here go one step further and specifically investigate applicability at very low rates. Conclusions: The most appropriate measure depends on the rates of the data being analysed. Accordingly, we summarize our results in one table that allows an easy selection of the preferred measure for any kind of data.
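
    As a concrete illustration of one of the spike-resolved measures named above, here is a rough sketch of the van Rossum distance: each train is convolved with a causal exponential kernel and the squared difference of the filtered traces is integrated. The discretization, the time constant tau, and the Poisson surrogates are illustrative choices, not the study's exact setup.

```python
import numpy as np

def van_rossum(train_a, train_b, tau=0.01, dt=0.001, T=1.0):
    """Discretized van Rossum distance with a causal exponential kernel."""
    n_bins = int(T / dt)
    kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
    def filtered(train):
        x = np.zeros(n_bins)
        idx = np.minimum((train[train < T] / dt).astype(int), n_bins - 1)
        np.add.at(x, idx, 1.0)  # coincident spikes share a bin
        return np.convolve(x, kernel)[:n_bins]
    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

rng = np.random.default_rng(2)
# Independent steady-rate Poisson surrogates, as in the study above: no
# timing information, so the distance reflects rate and chance alone.
a = np.sort(rng.uniform(0.0, 1.0, rng.poisson(50)))
b = np.sort(rng.uniform(0.0, 1.0, rng.poisson(50)))
print(f"van Rossum distance: {van_rossum(a, b):.3f}")
```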

    The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study

    High-level brain function such as memory, classification or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ...
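
    A small sketch of the kind of correlation measurement this line of work relies on: bin each spike train into counts and average the pairwise Pearson correlation coefficients. The bin width and the toy shared-input ensemble are assumptions for demonstration, not the study's hardware data.

```python
import numpy as np

def mean_pairwise_correlation(spike_trains, T, bin_width=0.01):
    """Average Pearson correlation of binned spike counts over all pairs."""
    bins = np.arange(0.0, T + bin_width, bin_width)
    counts = np.array([np.histogram(tr, bins)[0] for tr in spike_trains])
    c = np.corrcoef(counts)
    return c[np.triu_indices(len(spike_trains), k=1)].mean()

rng = np.random.default_rng(3)
T = 10.0
# Toy ensemble with shared presynaptic drive: every train contains a common
# spike set plus private spikes, mimicking shared-input correlations.
shared = np.sort(rng.uniform(0.0, T, 200))
trains = [np.sort(np.concatenate([shared, rng.uniform(0.0, T, 300)]))
          for _ in range(10)]
print(f"mean pairwise count correlation: "
      f"{mean_pairwise_correlation(trains, T):.3f}")
```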

    Consequences of converting graded to action potentials upon neural information coding and energy efficiency

    Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce information rates in generator potentials by ~50%, to ~3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.
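
    A back-of-the-envelope sketch of the efficiency comparison above, where energy efficiency is the information rate divided by the energy consumption rate. The numbers are purely illustrative placeholders chosen to match the abstract's qualitative ratios, not the paper's measurements.

```python
# All numbers below are illustrative placeholders, not the paper's data.
spike_info, spike_energy = 100.0, 10.0  # bits/s and energy units/s
generator_info = 3.0 * spike_info       # ~3x the spike-train information rate
graded_info = 2.0 * generator_info      # generator loses ~50% relative to graded
analog_energy = spike_energy / 10.0     # ~an order of magnitude cheaper

for name, info, energy in [("spike train", spike_info, spike_energy),
                           ("generator potential", generator_info, analog_energy),
                           ("graded potential", graded_info, analog_energy)]:
    # efficiency = information rate / energy consumption rate
    print(f"{name:>20}: {info / energy:6.1f} bits per energy unit")
```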

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network with those of systems in which the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully captured by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations; in excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
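
    A toy linear-rate sketch of the mechanism described above, not the paper's leaky integrate-and-fire simulations: N units share a common input fluctuation, and an inhibitory feedback term that subtracts the scaled population mean suppresses the shared-input correlations. The dynamics and all parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
N, steps, dt, tau = 50, 50000, 1e-4, 10e-3

def simulate(g):
    """Linear rate dynamics; g is the gain of the inhibitory feedback."""
    r = np.zeros(N)
    trace = np.empty((steps, N))
    for k in range(steps):
        shared = rng.normal()             # common input to all units
        private = rng.normal(size=N)      # independent input per unit
        drive = shared + private - g * r.mean()  # feedback tracks and
        r += dt / tau * (-r + drive)             # cancels common fluctuations
        trace[k] = r
    c = np.corrcoef(trace.T)
    return c[np.triu_indices(N, k=1)].mean()

print(f"no feedback (g=0):        mean pairwise corr = {simulate(0.0):.3f}")
print(f"inhibitory feedback g=20: mean pairwise corr = {simulate(20.0):.3f}")
```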