
    Bootstrap testing for cross-correlation under low firing activity

    A new cross-correlation synchrony index for neural activity is proposed. The index is based on the integration of the kernel estimation of the cross-correlation function. It is used to test for the dynamic synchronization levels of spontaneous neural activity under two induced brain states: sleep-like and awake-like. Two bootstrap resampling plans are proposed to approximate the distribution of the test statistics. The results of the first bootstrap method indicate that it is useful to discern significant differences in the synchronization dynamics of brain states characterized by neural activity with a low firing rate. The second bootstrap method is useful to unveil subtle differences in the synchronization levels of the awake-like state, depending on the activation pathway.
    Comment: 22 pages, 7 figures
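    As an illustration of the general approach, the sketch below smooths all pairwise spike-time differences with a Gaussian kernel, integrates the resulting cross-correlation estimate around lag zero to form a synchrony index, and calibrates it against jittered surrogates. The kernel, bandwidth, integration window, and the jitter-based surrogate are illustrative assumptions; they are not the paper's exact estimator or its two bootstrap resampling plans.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_cross_correlation(spikes_a, spikes_b, lags, bandwidth):
    """Gaussian-kernel smoothing of all pairwise spike-time differences,
    evaluated at the requested lags (an illustrative estimator)."""
    diffs = spikes_b[None, :] - spikes_a[:, None]              # pairwise lags
    z = (lags[:, None, None] - diffs[None, :, :]) / bandwidth
    weights = np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * bandwidth)
    return weights.sum(axis=(1, 2)) / len(spikes_a)

def synchrony_index(spikes_a, spikes_b, window=0.05, bandwidth=0.005):
    """Integrate the smoothed cross-correlation over a window around lag 0
    (simple Riemann sum on a fixed lag grid)."""
    lags = np.linspace(-window, window, 101)
    cc = kernel_cross_correlation(spikes_a, spikes_b, lags, bandwidth)
    return cc.sum() * (lags[1] - lags[0])

def surrogate_null(spikes_a, spikes_b, n_boot=200, jitter=0.02):
    """Null distribution from jittered copies of the second train: jittering
    destroys fine-timescale synchrony while roughly preserving firing rate."""
    return np.array([
        synchrony_index(spikes_a,
                        np.sort(spikes_b + rng.uniform(-jitter, jitter, spikes_b.size)))
        for _ in range(n_boot)
    ])

# Toy low-firing-rate example: ~2 spikes/s over one minute, partially synchronous.
T = 60.0
spikes_a = np.sort(rng.uniform(0, T, 120))
spikes_b = np.sort(np.concatenate([spikes_a[::2] + rng.normal(0, 0.003, 60),
                                   rng.uniform(0, T, 60)]))

obs = synchrony_index(spikes_a, spikes_b)
null = surrogate_null(spikes_a, spikes_b)
print(f"synchrony index = {obs:.3f}, surrogate p-value = {np.mean(null >= obs):.3f}")
```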

    Functional two-way analysis of variance and bootstrap methods for neural synchrony analysis

    Background: Pairwise association between neurons is a key feature in understanding neural coding. Statistical neuroscience provides tools to estimate and assess these associations. In the mammalian brain, activating ascending pathways arise from neuronal nuclei located in the brainstem and the basal forebrain that regulate the transition between sleep and awake neuronal firing modes in extensive regions of the cerebral cortex, including the primary visual cortex, where neurons are known to be selective for the orientation of a given stimulus. In this paper, the estimation of neural synchrony as a function of time is studied in data obtained from anesthetized cats. A functional data analysis of variance model is proposed. Bootstrap statistical tests are introduced in this context; they are useful tools for the study of differences in synchrony strength regarding 1) the transition between different states (anesthesia and awake), and 2) the affinity given by orientation selectivity. Results: An analysis of variance model for functional data is proposed for neural synchrony curves, estimated with a cross-correlation-based method. Dependence arising from the experimental setting needs to be accounted for. Bootstrap tests allow the identification of differences between experimental conditions (modes of activity) and between pairs of neurons formed by cells with different affinities given by their preferred orientations. In our test case, interactions between experimental conditions and preferred orientations are not statistically significant. Conclusions: The results reflect the effect of different experimental conditions, as well as the affinity regarding orientation selectivity, on neural synchrony and, therefore, on neural coding. A cross-correlation-based method is proposed that works well under low firing activity. Functional data statistical tools produce results that are useful in this context. Accounting for dependence is shown to be necessary, and bootstrap tests are an appropriate method with which to do so.
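    A minimal sketch of the bootstrap comparison of mean synchrony curves between two conditions is given below, using an integrated squared difference statistic. The statistic, the pooled resampling of whole curves, and the toy data are assumptions made for illustration; in particular, the naive i.i.d. resampling ignores the dependence structure that the paper emphasizes must be accounted for.

```python
import numpy as np

rng = np.random.default_rng(1)

def l2_statistic(curves_a, curves_b, times):
    """Integrated squared difference between the two group mean curves."""
    diff = curves_a.mean(axis=0) - curves_b.mean(axis=0)
    return np.sum(diff ** 2) * (times[1] - times[0])

def bootstrap_curve_test(curves_a, curves_b, times, n_boot=1000):
    """Resample whole synchrony curves from the pooled sample to approximate
    the null distribution of the statistic (naive i.i.d. resampling; the
    dependence induced by the experimental setting is ignored here)."""
    obs = l2_statistic(curves_a, curves_b, times)
    pooled = np.vstack([curves_a, curves_b])
    n_a = curves_a.shape[0]
    null = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, pooled.shape[0], pooled.shape[0])
        null[b] = l2_statistic(pooled[idx][:n_a], pooled[idx][n_a:], times)
    return obs, float(np.mean(null >= obs))

# Toy synchrony curves on a common time grid: 20 neuron pairs per condition.
times = np.linspace(0.0, 10.0, 200)
anesthesia = 0.20 + 0.05 * rng.standard_normal((20, times.size))
awake = 0.20 + 0.03 * np.sin(times) + 0.05 * rng.standard_normal((20, times.size))

stat, p_value = bootstrap_curve_test(anesthesia, awake, times)
print(f"L2 statistic = {stat:.4f}, bootstrap p-value = {p_value:.3f}")
```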

    Cross nearest-spike interval based method to measure synchrony dynamics

    A new synchrony index for neural activity is defined in this paper. The method is able to measure synchrony dynamics in low firing rate scenarios. It is based on the computation of the time intervals between nearest spikes of two given spike trains. Generalized additive models are proposed for the synchrony profiles obtained by this method. Two hypothesis tests are proposed to assess for differences in the level of synchronization in a real data example. Bootstrap methods are used to calibrate the distribution of the tests. Also, the expected synchrony due to chance is computed analytically and by simulation to assess for actual synchronization.
    Funding: Ministerio de Economía e Innovación (MTM2008-00166, MTM2011-22392, BES-2009-017772); Galicia. Consellería de Economía e Industria (INCITE09 137 272 P)
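    The sketch below illustrates the basic ingredient of such a method, the interval from each spike in one train to the nearest spike in the other, and compares an illustrative synchrony index built from those intervals with the analytical chance level for independent Poisson trains. The specific index and the threshold `tau` are assumptions, not the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(2)

def cross_nearest_spike_intervals(spikes_a, spikes_b):
    """Time from each spike in train A to the nearest spike in train B
    (trains given as sorted arrays of spike times in seconds)."""
    idx = np.clip(np.searchsorted(spikes_b, spikes_a), 1, len(spikes_b) - 1)
    left = np.abs(spikes_a - spikes_b[idx - 1])
    right = np.abs(spikes_b[idx] - spikes_a)
    return np.minimum(left, right)

def interval_synchrony_index(spikes_a, spikes_b, tau=0.01):
    """Illustrative index: fraction of spikes whose nearest spike in the other
    train lies within tau seconds, symmetrized over the two trains."""
    return 0.5 * (np.mean(cross_nearest_spike_intervals(spikes_a, spikes_b) <= tau)
                  + np.mean(cross_nearest_spike_intervals(spikes_b, spikes_a) <= tau))

# Two independent homogeneous Poisson trains with a low firing rate.
rate, T, tau = 2.0, 120.0, 0.01
a = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))
b = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))

# Expected chance-level synchrony for independent Poisson trains:
# P(nearest spike within tau) = 1 - exp(-2 * rate * tau).
print("observed index :", round(interval_synchrony_index(a, b, tau), 3))
print("chance level   :", round(1 - np.exp(-2 * rate * tau), 3))
```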

    A Semiparametric Bayesian Model for Detecting Synchrony Among Multiple Neurons

    We propose a scalable semiparametric Bayesian model to capture dependencies among multiple neurons by detecting their co-firing (possibly with some lag time) patterns over time. After discretizing time so there is at most one spike in each interval, the resulting sequence of 1's (spike) and 0's (silence) for each neuron is modeled using the logistic function of a continuous latent variable with a Gaussian process prior. For multiple neurons, the corresponding marginal distributions are coupled to their joint probability distribution using a parametric copula model. The advantages of our approach are as follows: the nonparametric component (i.e., the Gaussian process model) provides a flexible framework for modeling the underlying firing rates; the parametric component (i.e., the copula model) allows us to make inference regarding both contemporaneous and lagged relationships among neurons; using the copula model, we construct multivariate probabilistic models by separating the modeling of univariate marginal distributions from the modeling of dependence structure among variables; our method is easy to implement using a computationally efficient sampling algorithm that can be easily extended to high-dimensional problems. Using simulated data, we show that our approach can correctly capture temporal dependencies in firing rates and identify synchronous neurons. We also apply our model to spike train data obtained from prefrontal cortical areas in the rat's brain.
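    The generative side of such a model can be sketched as follows: each neuron's binary spike sequence is Bernoulli with a probability given by the logistic transform of a latent Gaussian process, and a Gaussian copula with correlation `rho` couples the marginals. The kernel, the Gaussian copula family, and all parameter values below are assumptions for illustration, and the posterior sampling algorithm is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def rbf_kernel(t, length_scale=0.2, variance=1.0):
    """Squared-exponential covariance for the latent Gaussian processes."""
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

# Discretized time: at most one spike per bin.
t = np.linspace(0.0, 1.0, 200)
L = np.linalg.cholesky(rbf_kernel(t) + 1e-6 * np.eye(t.size))

# Latent log-odds of firing for two neurons, drawn from GP priors.
u1 = L @ rng.standard_normal(t.size) - 2.0
u2 = L @ rng.standard_normal(t.size) - 2.0
p1, p2 = 1.0 / (1.0 + np.exp(-u1)), 1.0 / (1.0 + np.exp(-u2))

# A Gaussian copula with correlation rho couples the two Bernoulli marginals,
# inducing co-firing beyond what the individual firing rates would imply.
rho = 0.6
z = rng.multivariate_normal(np.zeros(2), [[1.0, rho], [rho, 1.0]], size=t.size)
u = norm.cdf(z)                               # correlated, uniform marginals
spikes1 = (u[:, 0] < p1).astype(int)          # 1 = spike, 0 = silence
spikes2 = (u[:, 1] < p2).astype(int)

print("marginal firing probs:", spikes1.mean(), spikes2.mean())
print("observed co-firing   :", np.mean(spikes1 & spikes2))
print("rate-predicted       :", np.mean(p1 * p2))
```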

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying sources of the apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the same experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience, yet the nature of such trial-to-trial dynamics remains largely elusive to current data analysis approaches. A range of strategies have been proposed in modalities such as electroencephalography, but gaining a fundamental insight into the latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic, the trajectory coherence, based on the behaviour of trajectories in classification settings, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation). This canonical system plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in an empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets that have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of the rat's frontal cortex ensemble dynamics during a decision-making task. Results suggest that, in line with recent hypotheses, it is the deterministic trend rather than internal noise that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed in this study potentially allows us to infer the source of variability in in-vivo neural recordings.
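    A small sketch of how such benchmark data can be generated and embedded: a forced Duffing oscillator whose forcing amplitude drifts slowly in time (a non-autonomous change standing in for trial-to-trial drift), followed by a delay-coordinate reconstruction of the state space. The drift schedule and embedding parameters are arbitrary choices, and the trajectory-coherence statistic itself is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, state, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
    """Forced double-well Duffing oscillator; the forcing amplitude drifts
    slowly with time, making the system non-autonomous."""
    x, v = state
    drive = gamma * (1.0 + 0.5 * t / 200.0) * np.cos(omega * t)
    return [v, -delta * v - alpha * x - beta * x ** 3 + drive]

sol = solve_ivp(duffing, (0.0, 200.0), [0.1, 0.0],
                t_eval=np.linspace(0.0, 200.0, 4000), rtol=1e-8)
x = sol.y[0]                                  # observed scalar trajectory

def delay_embed(signal, dim=3, lag=10):
    """Reconstruct a state space from delayed copies of a scalar observable."""
    n = signal.size - (dim - 1) * lag
    return np.column_stack([signal[i * lag:i * lag + n] for i in range(dim)])

states = delay_embed(x)
print("embedded trajectory:", states.shape)   # (3980, 3)
```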

    Attracting dynamics of frontal cortex ensembles during memory-guided decision-making.

    A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has so far been sparse, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task that were not previously accessible. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in the ACC that were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.
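    In the same spirit, a delay expansion of population firing-rate vectors can be fed to a generic kernel classifier to check whether cognitive epochs correspond to predictable ensemble states. The toy data, the delay parameters, and the RBF support vector machine below are placeholders standing in for the kernel methods used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Toy multiple single-unit activity: firing rates of 8 units over 600 time bins,
# with two cognitive "epochs" that differ in their mean population pattern.
n_units, n_bins = 8, 600
epoch = (np.arange(n_bins) // 150) % 2               # alternating epoch labels
rates = rng.gamma(2.0, 1.0, size=(n_bins, n_units))
rates[epoch == 1] += rng.normal(0.5, 0.1, size=n_units)

def delay_expand(X, dim=3, lag=5):
    """Stack delayed copies of the rate vectors to build a higher-dimensional
    state space (delay-coordinate expansion of the population activity)."""
    n = X.shape[0] - (dim - 1) * lag
    return np.hstack([X[i * lag:i * lag + n] for i in range(dim)])

X = delay_expand(rates)
y = epoch[(3 - 1) * 5:]                              # labels aligned to the latest delay

# An RBF-kernel classifier as a generic kernel method: high cross-validated
# accuracy indicates epoch-specific, predictable ensemble states.
scores = cross_val_score(SVC(kernel="rbf", C=1.0, gamma="scale"), X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```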