
    Untangling cross-frequency coupling in neuroscience

    Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can arise from spectral correlations due to common non-stationarities, in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis. Comment: 47 pages, 12 figures, including supplementary material.
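
    As a concrete illustration of the kind of CFC measure whose pitfalls the paper discusses, the sketch below computes a mean-vector-length phase-amplitude coupling estimate with a simple time-shift surrogate null. The frequency bands, surrogate scheme, and function names are illustrative assumptions, not the authors' recommendations; as the abstract stresses, such surrogates alone do not rule out apparent CFC caused by common non-stationarities.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase band-pass filter between lo and hi Hz.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mvl_pac(x, fs, phase_band=(4, 8), amp_band=(30, 80), n_surr=200, seed=0):
    """Mean-vector-length phase-amplitude coupling with a time-shift null."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    observed = np.abs(np.mean(amp * np.exp(1j * phase)))
    # Circularly shifting the amplitude envelope against the phase destroys
    # their alignment while preserving each component's spectrum.
    rng = np.random.default_rng(seed)
    shifts = rng.integers(int(fs), x.size - int(fs), size=n_surr)
    null = np.array([np.abs(np.mean(np.roll(amp, s) * np.exp(1j * phase)))
                     for s in shifts])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_surr)
    return observed, p_value
```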

    TRENTOOL: A Matlab open source toolbox to analyse information flow in time series data with transfer entropy

    Background: Transfer entropy (TE) is a measure for the detection of directed interactions. Transfer entropy is an information-theoretic implementation of Wiener's principle of observational causality. It offers an approach to the detection of neuronal interactions that is free of an explicit model of the interactions, and hence the power to analyze linear and nonlinear interactions alike. This allows, for example, the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL, which allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametric statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans), where a neuronal one-way connection is likely present. Results: In simulated data, TE reliably detected information flow in the simulated direction, with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages. No false-positive interactions were detected in the reverse direction. Conclusions: TRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in the application of this information-theoretic measure. TRENTOOL is implemented as a MATLAB toolbox and available under an open-source license (GPL v3). For use with neural data, TRENTOOL seamlessly integrates with the popular FieldTrip toolbox.
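
    The sketch below is not TRENTOOL's estimator (the toolbox uses Kraskov-type nearest-neighbour estimation with embedding optimization and permutation statistics); it is a minimal binned plug-in estimate of TE(X -> Y) with history length 1, included only to illustrate the quantity being measured. The bin count and variable names are assumptions.

```python
import numpy as np

def transfer_entropy_binned(x, y, n_bins=8):
    """Plug-in estimate of TE(X -> Y) in bits, using history length 1."""
    # Discretize both signals into equally spaced amplitude bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
    y_next, y_past, x_past = yd[1:], yd[:-1], xd[:-1]
    # Joint histogram over (y_next, y_past, x_past) and its marginals.
    joint = np.zeros((n_bins, n_bins, n_bins))
    np.add.at(joint, (y_next, y_past, x_past), 1)
    joint /= joint.sum()
    p_yp_xp = joint.sum(axis=0)       # p(y_past, x_past)
    p_yn_yp = joint.sum(axis=2)       # p(y_next, y_past)
    p_yp = joint.sum(axis=(0, 2))     # p(y_past)
    # TE = sum p(yn,yp,xp) * log[ p(yn|yp,xp) / p(yn|yp) ]
    te = 0.0
    for i, j, k in zip(*np.nonzero(joint)):
        num = joint[i, j, k] * p_yp[j]
        den = p_yn_yp[i, j] * p_yp_xp[j, k]
        te += joint[i, j, k] * np.log2(num / den)
    return te
```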

    Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process

    Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein–Uhlenbeck process in particular is popular for describing the stochastic fluctuations in the membrane potential of a neuron, but other models, such as the square-root model or models with a non-linear drift, are also applied. Data that can be described by such models must be stationary, so these simple models can only be applied over short time windows. However, experimental data show varying time constants, state-dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state-dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein–Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity to which the neuron is exposed can reasonably be estimated as a thresholded version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
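
    A minimal forward simulation of the model class described here (square-root diffusion, jumps, time-inhomogeneous drive, and a firing intensity that grows exponentially with the membrane potential), using an Euler–Maruyama scheme. All parameter values and the shape of the input drive are illustrative assumptions, not the estimates obtained in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_jump_sqrt_diffusion(T=2.0, dt=1e-4, v0=-65.0, v_rest=-65.0,
                                 v_min=-80.0, tau=0.02, sigma=0.8,
                                 jump_rate=20.0, jump_mean=1.5,
                                 lam0=5.0, v_half=-50.0, delta=4.0,
                                 drive=lambda t: 100.0 * np.sin(2 * np.pi * t)):
    """Euler-Maruyama simulation; all parameters are illustrative assumptions."""
    n = int(T / dt)
    v = np.empty(n); v[0] = v0
    spikes = []
    for k in range(1, n):
        t = k * dt
        # Square-root (CIR-like) diffusion with a time-varying input drive(t).
        drift = -(v[k - 1] - v_rest) / tau + max(drive(t), 0.0)
        noise = sigma * np.sqrt(max(v[k - 1] - v_min, 0.0)) * rng.normal() * np.sqrt(dt)
        jump = rng.exponential(jump_mean) if rng.random() < jump_rate * dt else 0.0
        v[k] = v[k - 1] + drift * dt + noise + jump
        # Soft threshold: spike with an intensity exponential in the potential.
        lam = lam0 * np.exp((v[k] - v_half) / delta)
        if rng.random() < lam * dt:
            spikes.append(t)
            v[k] = v_rest  # reset after a spike
    return v, np.array(spikes)
```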

    Novel Use of Matched Filtering for Synaptic Event Detection and Extraction

    Efficient and dependable methods for detection and measurement of synaptic events are important for studies of synaptic physiology and neuronal circuit connectivity. Published detection algorithms based on amplitude thresholding and fixed or scaled template comparisons are of limited utility for signals with variable amplitudes and superimposed events with complex waveforms, and are therefore not well suited to detecting evoked synaptic events in photostimulation and similar experimental situations. Here we report on a novel technique that combines a bank of approximate matched filters with detection and estimation theory to automatically detect and extract photostimulation-evoked excitatory postsynaptic currents (EPSCs) from individually recorded neurons in cortical circuit mapping experiments. The sensitivity and specificity of the method were evaluated on both simulated and experimental data, with performance comparable to that of visual event detection performed by human operators. This new technique was applied to quantify and compare the EPSCs obtained from excitatory pyramidal cells and fast-spiking interneurons. In addition, our technique has been further applied to the detection and analysis of inhibitory postsynaptic current (IPSC) responses. Given the general purpose of our matched filtering and signal recognition algorithms, we expect that our technique can be appropriately modified and applied to detect and extract other types of electrophysiological and optical imaging signals.
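
    A single-template matched-filtering sketch, included to illustrate the basic operation: correlate the current trace with an EPSC-shaped kernel and threshold the filter output. The double-exponential template, its time constants, and the robust threshold are assumptions; the paper's bank of approximate matched filters and its detection/estimation framework are not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

def epsc_template(fs, tau_rise=0.5e-3, tau_decay=5e-3, length=30e-3):
    # Difference-of-exponentials EPSC kernel, inward (negative), unit energy.
    t = np.arange(0, length, 1 / fs)
    kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    return -kernel / np.linalg.norm(kernel)

def detect_events(trace, fs, threshold_sd=4.0):
    """Return sample indices of candidate EPSCs and the filter output."""
    template = epsc_template(fs)
    # Matched filtering = correlation of the baseline-corrected trace
    # with the template; downward deflections yield positive peaks.
    score = np.correlate(trace - np.median(trace), template, mode="same")
    # Threshold relative to a robust (MAD-based) noise estimate.
    noise = 1.4826 * np.median(np.abs(score - np.median(score)))
    peaks, _ = find_peaks(score, height=threshold_sd * noise,
                          distance=max(1, int(2e-3 * fs)))
    return peaks, score
```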

    Retooling Computational Techniques for EEG-Based Neurocognitive Modeling of Children's Data, Validity and Prospects for Learning and Education

    This paper describes continuing research on the building of neurocognitive models of the internal mental and brain processes of children using a novel adapted combination of existing computational approaches and tools, and using electro-encephalographic (EEG) data to validate the models. The guiding working model, pragmatically selected for investigation, was the established and widely used Adaptive Control of Thought-Rational (ACT-R) modeling architecture from cognitive science. The anatomo-functional circuitry covered by ACT-R is validated by MRI-based neuroscience research. The experimental data were obtained from a cognitive neuropsychology study involving preschool children (aged 4–6), which measured their visual selective attention and word comprehension behaviors. The collection and analysis of Event-Related Potentials (ERPs) from the EEG data allowed for the identification of sources of electrical activity known as dipoles within the cortex, using a combination of computational tools (Independent Component Analysis, FASTICA; EEG-Lab DIPFIT). The results were then used to build neurocognitive models based on Python ACT-R such that the patterns and the timings of the measured EEG could be reproduced as simplified symbolic representations of spikes, built through simplified electric-field simulations. The simulated models ultimately accounted for more than three-quarters of the spatial and temporal variation in all electrical potential measurements (fit of model to dipole data expressed as R² ranged between 0.75 and 0.98; P < 0.0001). Implications for practical uses of the present work are discussed for learning and educational applications in non-clinical and special-needs children's populations, and for the possible use of non-experts (teachers and parents).
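
    The ICA step mentioned in the abstract might look like the sketch below, which unmixes multichannel EEG epochs into independent components using scikit-learn's FastICA as a stand-in for the FASTICA tool; dipole localization (EEGLAB DIPFIT) and the Python ACT-R modeling are not shown. Array shapes, component counts, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def unmix_components(eeg, n_components=None, seed=0):
    """Decompose (n_samples, n_channels) EEG into independent components."""
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    sources = ica.fit_transform(eeg)   # (n_samples, n_components) time courses
    mixing = ica.mixing_               # scalp projection of each component
    return sources, mixing

# Synthetic data standing in for real ERP-epoch recordings.
rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal((2000, 32))
sources, mixing = unmix_components(fake_eeg, n_components=16)
```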

    A Comparative Analysis of Purkinje Cells Across Species Combining Modelling, Machine Learning and Information Theory

    There have been a number of computational modelling studies that aim to replicate the cerebellar Purkinje cell, though these typically use the morphology of rodent cells. While many species, including rodents, display intricate dendritic branching, it is not a universal feature among Purkinje cells. This study uses morphological reconstructions of 24 Purkinje cells from seven species to explore the changes that occur to the cell through evolution and to examine whether these have an effect on the processing capacity of the cell. This is achieved by combining several modes of study to gain a comprehensive overview of the variations between the cells in both morphology and behaviour. Passive and active computational models of the cells were created, using the same electrophysiological parameters and ion channels for all models, to characterise the voltage attenuation and electrophysiological behaviour of the cells. These results and several measures of branching and size were then used to look for clusters in the data set using machine learning techniques, and to visualise the differences within each species group. Information theory methods were also employed to compare the estimated information transfer from input to output across the cells. Along with a literature review of what is known about Purkinje cells and the cerebellum across the phylogenetic tree, these results show that while there are some obvious differences in morphology, the variation in electrophysiological behaviour within species groups is often as high as that between them. This suggests that morphological changes may occur in order to conserve behaviour in the face of other changes to the cerebellum.
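
    A sketch of the kind of clustering step described: standardize per-cell morphological and electrophysiological summary features, then look for groups with k-means. The feature set, the choice of k, and the synthetic placeholder data are assumptions and do not reproduce the study's analysis pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_cells(features, k=3, seed=0):
    """Cluster cells on z-scored summary features; return labels and quality."""
    z = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=k, n_init=20, random_state=seed).fit_predict(z)
    return labels, silhouette_score(z, labels)

# Placeholder: 24 cells x 6 summary measures (e.g. dendritic length,
# branch-point count, surface area, attenuation and firing statistics).
rng = np.random.default_rng(0)
fake_features = rng.standard_normal((24, 6))
labels, quality = cluster_cells(fake_features)
```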

    Fast and robust learning by reinforcement signals: explorations in the insect brain

    We propose a model for pattern recognition in the insect brain. Starting from a well-known body of knowledge about the insect brain, we investigate which of the potentially present features may be useful for learning input patterns rapidly and in a stable manner. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate the stimulus with a proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable for both fast learning of new stimuli and reasonable performance in stationary conditions. Furthermore, it is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows testing the effect of hypothesized mechanisms rather than speculating on their benefit for system performance or confidence in its responses.
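
    A minimal mushroom-body-style classifier in the spirit of this architecture: a fixed sparse random expansion (a "Kenyon cell" layer), winner-take-most sparsening, and an error-driven readout whose plasticity is gated by a reinforcement-like signal. Layer sizes, sparseness, and learning rate are assumptions, not the paper's model parameters.

```python
import numpy as np

class MushroomBodyClassifier:
    def __init__(self, n_in=784, n_kc=5000, n_out=10,
                 sparsity=0.05, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed, sparse random projection-neuron -> Kenyon-cell wiring.
        self.proj = (rng.random((n_in, n_kc)) < 0.1).astype(float)
        self.w_out = np.zeros((n_kc, n_out))
        self.k = int(sparsity * n_kc)
        self.lr = lr

    def _kc(self, x):
        drive = x @ self.proj
        kc = np.zeros_like(drive)
        kc[np.argsort(drive)[-self.k:]] = 1.0  # winner-take-most sparsening
        return kc

    def update(self, x, label):
        kc = self._kc(x)
        out = kc @ self.w_out
        target = np.zeros(out.shape); target[label] = 1.0
        # Reinforcement-like error signal gates KC -> output plasticity.
        self.w_out += self.lr * np.outer(kc, target - out)

    def predict(self, x):
        return int(np.argmax(self._kc(x) @ self.w_out))
```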

    A Graph Algorithmic Approach to Separate Direct from Indirect Neural Interactions

    Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; in neuroscience especially, network graphs are increasingly used to represent and analyze functional interactions between neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking their multivariate nature: it is neglected that investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and that combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable due to the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm flags potentially spurious edges, which may then be pruned from the network. This produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation to test its performance. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios. Our approach is a tractable and data-efficient way of reconstructing approximative networks of multivariate interactions. It is preferable if available data are limited or if fully multivariate approaches are computationally infeasible. Comment: 24 pages, 8 figures, published in PLOS ONE.
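
    A simplified sketch in the spirit of post-hoc flagging, not the paper's algorithm: given a directed network whose edges carry reconstructed interaction delays, an edge is flagged as potentially spurious when an alternative directed path between the same nodes reproduces its delay within a tolerance. The delay-matching rule, the path-length cutoff, and the toy graph are assumptions.

```python
import networkx as nx

def flag_potentially_spurious(graph, tol=1, cutoff=4):
    """Flag edges whose delay is matched by an alternative directed path."""
    flagged = []
    for u, v, data in graph.edges(data=True):
        direct_delay = data["delay"]
        rest = graph.copy()
        rest.remove_edge(u, v)  # look for explanations that bypass this edge
        for path in nx.all_simple_paths(rest, u, v, cutoff=cutoff):
            path_delay = sum(rest[a][b]["delay"] for a, b in zip(path, path[1:]))
            if abs(path_delay - direct_delay) <= tol:
                flagged.append((u, v))
                break
    return flagged

# Toy example: the A->C edge could be explained by the indirect A->B->C route.
g = nx.DiGraph()
g.add_weighted_edges_from([("A", "B", 2), ("B", "C", 3), ("A", "C", 5)],
                          weight="delay")
print(flag_potentially_spurious(g))   # [('A', 'C')]
```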