
    Reading the Neural Code: What do Spikes Mean for Behavior?

    The present study reveals an intrinsic spatial code within neuronal spikes that predicts behavior. As rats learned a T-maze procedural task, simultaneous changes in the temporal occurrence of spikes and in spike directivity were evident in “expert” neurons. While the number of spikes between tone delivery and the beginning of the turn phase decreased with learning, the spikes generated between these two events acquired behavioral meaning of high value for action selection. Spike directivity is thus a hidden feature that reveals the semantics of each spike and, in the current experiment, predicts the correct turn that the animal will subsequently make to obtain reward. A semantic representation of behavior can then be read out as modulations in spike directivity over time. This predictability of observed behavior from subtle changes in spike directivity represents an important step towards reading and understanding the underlying neural code.
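    The abstract does not describe the decoding procedure itself. As a purely hypothetical illustration of predicting an upcoming turn from per-trial spike features such as directivity, a cross-validated classifier could look like the sketch below; the feature matrix, labels, and dimensions are placeholders, not the study's data.

```python
# Hypothetical illustration: predict the upcoming turn from per-trial spike
# features (e.g. a spike-directivity summary per "expert" neuron).
# X and y are random placeholders, not the study's actual recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 120, 8
X = rng.normal(size=(n_trials, n_neurons))   # stand-in directivity features
y = rng.integers(0, 2, size=n_trials)        # 0 = left turn, 1 = right turn

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```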

    Emergence of Physiological Oscillation Frequencies in a Computer Model of Neocortex

    Coordination of neocortical oscillations has been hypothesized to underlie the “binding” essential to cognitive function. However, the mechanisms that generate neocortical oscillations in physiological frequency bands remain unknown. We hypothesized that interlaminar relations in neocortex would provide multiple intermediate loops that play particular roles in generating oscillations, adding different dynamics to the network. We simulated networks from sensory neocortex using nine columns of event-driven rule-based neurons wired according to anatomical data and driven with random white-noise synaptic inputs. We tuned the network to achieve realistic cell firing rates and to avoid population spikes. A physiological frequency spectrum appeared as an emergent property, displaying dominant frequencies that were not present in the inputs or in the intrinsic or activated frequencies of any of the cell groups. Using minimal dynamical perturbation as a methodology, we monitored spectral changes while gradually introducing hubs into individual layers. We found that hubs in layer 2/3 excitatory cells had the greatest influence on overall network activity, suggesting that this subpopulation was a primary generator of theta/beta strength in the network. Similarly, layer 2/3 interneurons appeared largely responsible for gamma activation through preferential attenuation of the rest of the spectrum. The network showed evidence of frequency homeostasis: increased activation of supragranular layers increased firing rates in the network without altering the spectral profile, and alteration of synaptic delays did not significantly shift spectral peaks. Direct comparison of the power spectra with experimentally recorded local field potentials from prefrontal cortex of awake rat showed substantial similarities, including comparable patterns of cross-frequency coupling.
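    As a rough illustration of the spectral-analysis step described above (estimating dominant frequencies from a simulated population signal), the following sketch computes a Welch power spectral density on a synthetic stand-in signal; the sampling rate, signal composition, and peak threshold are assumptions, not values from the model.

```python
# Minimal sketch: estimate a power spectral density from a simulated
# population signal and report its dominant frequencies.
# The signal below is a synthetic placeholder, not the model's output.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# placeholder signal: theta, beta and gamma components buried in noise
lfp = (np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
       + 0.4 * np.sin(2 * np.pi * 40 * t) + rng.normal(size=t.size))

freqs, psd = welch(lfp, fs=fs, nperseg=2048)
peaks, _ = find_peaks(psd, height=0.05 * psd.max())
print("dominant frequencies (Hz):", np.round(freqs[peaks], 1))
```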

    Model-free reconstruction of neuronal network connectivity from calcium imaging signals

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically unfeasible even in dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct approximations to network structural connectivities from network activity monitored through calcium fluorescence imaging. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the effective network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (e.g., bursting or non-bursting). We thus demonstrate how conditioning with respect to the global mean activity improves the performance of our method. [...] Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is markedly more accurate. In particular, it provides a good reconstruction of the network clustering coefficient, allowing one to discriminate between weakly and strongly clustered topologies, whereas an approach based on cross-correlations invariably detects artificially high levels of clustering. Finally, we present the applicability of our method to real recordings of in vitro cortical cultures. We demonstrate that these networks are characterized by an elevated (although not extreme) level of clustering compared to a random graph and by a markedly non-local connectivity.
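    For readers unfamiliar with Transfer Entropy, the sketch below computes a pairwise, history-length-1 transfer entropy between two discretized traces. It is a generic textbook formulation, not the authors' improved, state-conditioned estimator; the function name, bin count, and toy signals are illustrative assumptions.

```python
# Generic pairwise transfer entropy with one-step histories on discretized
# signals (bits). Conditioning on the global network state, as in the paper,
# would be layered on top (e.g. by restricting to low-activity frames).
import numpy as np

def transfer_entropy(x, y, n_bins=3):
    """TE(x -> y) in bits for equal-occupancy discretization of x and y."""
    edges = np.linspace(0, 1, n_bins + 1)[1:-1]
    xd = np.digitize(x, np.quantile(x, edges))
    yd = np.digitize(y, np.quantile(y, edges))
    y_next, y_past, x_past = yd[1:], yd[:-1], xd[:-1]

    def joint_prob(*idx):
        counts = np.zeros((n_bins,) * len(idx))
        np.add.at(counts, tuple(idx), 1)
        return counts / counts.sum()

    p_xyz = joint_prob(y_next, y_past, x_past)   # p(y', y, x)
    p_yz = p_xyz.sum(axis=0)                     # p(y, x)
    p_xy = p_xyz.sum(axis=2)                     # p(y', y)
    p_y = p_xy.sum(axis=0)                       # p(y)

    te = 0.0
    for i, j, k in np.ndindex(p_xyz.shape):
        if p_xyz[i, j, k] > 0:
            te += p_xyz[i, j, k] * np.log2(
                p_xyz[i, j, k] * p_y[j] / (p_yz[j, k] * p_xy[i, j]))
    return te

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)  # y driven by the past of x
print("TE(x->y):", round(transfer_entropy(x, y), 3),
      "  TE(y->x):", round(transfer_entropy(y, x), 3))
```

    The directional asymmetry (TE from x to y exceeding TE from y to x) is the signature such estimators exploit when ranking putative connections.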

    Spatiotemporal dynamics in spiking recurrent neural networks using modified-full-FORCE on EEG signals

    Methods for modelling the human brain as a complex system have increased remarkably in the literature as researchers seek to understand the foundations underlying cognition, behaviour, and perception. Computational methods, and Graph Theory-based methods in particular, have recently contributed significantly to understanding the wiring connectivity of the brain, modelling it as a set of nodes connected by edges. The brain's spatiotemporal dynamics can therefore be studied holistically by considering a network of many neurons, each represented by a node. Various models have been proposed for modelling such neurons. A recently proposed method for training such networks, called full-FORCE, produces networks that perform tasks with fewer neurons and greater noise robustness than previous least-squares approaches (i.e. the FORCE method). In this paper, the first direct applicability of a variant of the full-FORCE method to biologically motivated Spiking RNNs (SRNNs) is demonstrated. The SRNN is a graph consisting of modules, each modelled as a Small-World Network (SWN), a biologically plausible type of graph. Thus, the first direct applicability of a variant of the full-FORCE method to modular SWNs is demonstrated and evaluated through regression and information-theoretic metrics. For the first time, the aforementioned method is applied to spiking neuron models and trained on various real-life Electroencephalography (EEG) signals. To the best of the authors' knowledge, all the contributions of this paper are novel. Results show that trained SRNNs match EEG signals almost perfectly, while the network dynamics can mimic the target dynamics. This demonstrates that the holistic setup of the network model and the neuron model, both more biologically plausible than in previous work, can be tuned to reproduce real biological signal dynamics.
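    The modified full-FORCE procedure is not spelled out in the abstract. As a simplified stand-in, the sketch below shows the classic FORCE-style recursive-least-squares readout update on a small rate network with output feedback, trained toward a sinusoidal placeholder for an EEG segment; network size, time constants, and the target are assumptions, and the spiking, modular small-world structure used in the paper is not modelled here.

```python
# Simplified FORCE-style recursive-least-squares (RLS) readout training on a
# rate network with feedback -- a stand-in for the modified full-FORCE training
# of spiking RNNs described above, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(2)
N, dt, tau, g = 300, 1e-3, 1e-2, 1.5
J = g * rng.normal(size=(N, N)) / np.sqrt(N)   # random recurrent weights
w_fb = rng.uniform(-1, 1, N)                   # fixed feedback weights
w_out = np.zeros(N)                            # readout, trained by RLS
P = np.eye(N)                                  # running inverse correlation

T = 5000
t = np.arange(T) * dt
target = np.sin(2 * np.pi * 10 * t)            # placeholder for an EEG segment

x = 0.1 * rng.normal(size=N)
for k in range(T):
    r = np.tanh(x)                             # firing-rate nonlinearity
    z = w_out @ r                              # network output
    x += dt / tau * (-x + J @ r + w_fb * z)    # rate-network dynamics
    if k % 2 == 0:                             # RLS update every other step
        Pr = P @ r
        gain = Pr / (1.0 + r @ Pr)
        P -= np.outer(gain, Pr)
        w_out -= (z - target[k]) * gain

print("final output vs target:", w_out @ np.tanh(x), target[-1])
```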

    Detecting multineuronal temporal patterns in parallel spike trains

    We present a non-parametric and computationally efficient method that detects spatiotemporal firing patterns and pattern sequences in parallel spike trains and tests whether the observed numbers of repeating patterns and sequences on a given timescale differ significantly from those expected by chance. The method is generally applicable and uncovers coordinated activity with arbitrary precision by comparing it to appropriate surrogate data. The analysis of coherent patterns of spatially and temporally distributed spiking activity on various timescales enables the immediate tracking of diverse qualities of coordinated firing related to neuronal state changes and information processing. We apply the method to simulated data and multineuronal recordings from rat visual cortex and show that it reliably discriminates between data sets with random pattern occurrences and data sets with additional exactly repeating spatiotemporal patterns and pattern sequences. Multineuronal cortical spiking activity appears to be precisely coordinated and exhibits a sequential organization beyond the cell assembly concept.
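    To make the surrogate-testing idea concrete, the sketch below counts exactly repeating spatiotemporal windows in binned parallel spike trains and compares the count against circularly shifted surrogates; the bin size, window length, and surrogate scheme are placeholder choices and do not reproduce the paper's method.

```python
# Illustrative surrogate test: count exactly repeating spatiotemporal windows
# in binned parallel spike trains and compare with shift surrogates.
# All parameter choices below are placeholders.
import numpy as np

rng = np.random.default_rng(3)

def count_repeating_patterns(binned, window=3):
    """Count windows whose exact spatiotemporal pattern occurs more than once."""
    counts = {}
    for start in range(binned.shape[1] - window + 1):
        win = binned[:, start:start + window]
        if not win.any():                      # ignore empty windows
            continue
        key = win.tobytes()
        counts[key] = counts.get(key, 0) + 1
    return sum(c for c in counts.values() if c > 1)

def surrogate(binned, max_shift=5):
    """Circularly shift each unit independently, destroying cross-unit timing."""
    return np.stack([np.roll(row, rng.integers(-max_shift, max_shift + 1))
                     for row in binned])

binned = (rng.random((20, 2000)) < 0.02).astype(np.uint8)  # 20 units, 2000 bins
observed = count_repeating_patterns(binned)
null = [count_repeating_patterns(surrogate(binned)) for _ in range(100)]
p_value = np.mean([n >= observed for n in null])
print(f"observed repeats: {observed}, surrogate p-value: {p_value:.2f}")
```

    The shift surrogates preserve each unit's firing statistics while destroying cross-unit timing, which is the general property surrogate data must have in this kind of significance test.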

    NetPyNE, a tool for data-driven multiscale modeling of brain circuits

    Biophysical modeling of neuronal networks helps to integrate and interpret rapidly growing and disparate experimental datasets at multiple scales. The NetPyNE tool (www.netpyne.org) provides both programmatic and graphical interfaces for developing data-driven multiscale network models in NEURON. NetPyNE cleanly separates model parameters from implementation code. Users provide high-level specifications via a standardized declarative language, for example connectivity rules that create millions of cell-to-cell connections. NetPyNE then enables users to generate the NEURON network, run efficiently parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis, including connectivity matrices, voltage traces, spike raster plots, local field potentials, and information-theoretic measures. NetPyNE also facilitates model sharing by exporting and importing standardized formats (NeuroML and SONATA). NetPyNE is already being used to teach computational neuroscience and by modelers to investigate a range of brain regions and phenomena.
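    A minimal usage sketch, modeled on the public NetPyNE tutorials, is shown below; it assumes NEURON and NetPyNE are installed, and all parameter values (cell geometry, synapse kinetics, connection probabilities) are illustrative rather than taken from any published model.

```python
# Minimal NetPyNE sketch in the style of the online tutorials: one excitatory
# population with Hodgkin-Huxley somas, background drive, and random
# recurrent connectivity. Parameter values are illustrative placeholders.
from netpyne import specs, sim

netParams = specs.NetParams()

netParams.popParams['E'] = {'cellType': 'pyr', 'numCells': 40}
netParams.cellParams['pyr'] = {
    'secs': {'soma': {
        'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
        'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036,
                         'gl': 0.003, 'el': -70}}}}}

netParams.synMechParams['exc'] = {'mod': 'Exp2Syn',
                                  'tau1': 0.1, 'tau2': 5.0, 'e': 0}

netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {
    'source': 'bkg', 'conds': {'pop': 'E'},
    'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

netParams.connParams['E->E'] = {
    'preConds': {'pop': 'E'}, 'postConds': {'pop': 'E'},
    'probability': 0.1, 'weight': 0.005, 'delay': 5, 'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000          # ms
simConfig.dt = 0.025               # ms
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
simConfig.analysis['plotRaster'] = {'saveFig': True}
simConfig.analysis['plotTraces'] = {'include': [0], 'saveFig': True}

# build the NEURON network, run the simulation, and produce the plots
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```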