2 research outputs found

    A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts

    Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required to estimate higher-order correlations and the resulting information-theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that makes it possible to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data recorded with multielectrode arrays from the primary visual cortex of an anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information-theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly.
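
    The procedure lends itself to a compact sketch for a single pair of neurons: fit a maximum-entropy reference distribution constrained only by the empirical marginals and the product moment E[xy] (which, given the marginals, fixes the Pearson correlation), then run a Monte Carlo goodness-of-fit test with plug-in mutual information as the divergence measure. The code below is an illustrative reconstruction under these assumptions, not the authors' implementation; the function names (e.g. fit_maxent_pair, maxent_test), the use of L-BFGS on the convex dual, the plug-in estimator, and the one-sided p-value are all choices made for the sketch.

import numpy as np
from scipy.optimize import minimize


def joint_hist(x, y, k):
    """Empirical joint distribution of two spike-count variables on {0..k}^2."""
    h = np.zeros((k + 1, k + 1))
    for xi, yi in zip(x, y):
        h[xi, yi] += 1.0
    return h / h.sum()


def mutual_information(p):
    """Plug-in mutual information (in bits) of a joint distribution p."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))


def fit_maxent_pair(p_emp, k):
    """Maximum-entropy joint on {0..k}^2 matching the marginals of p_emp and
    its product moment E[xy]; fitted by minimizing the convex dual."""
    counts = np.arange(k + 1)
    xv, yv = np.meshgrid(counts, counts, indexing="ij")
    px, py = p_emp.sum(axis=1), p_emp.sum(axis=0)
    exy = float(np.sum(p_emp * xv * yv))

    def neg_dual(theta):
        a, b, lam = theta[:k + 1], theta[k + 1:2 * (k + 1)], theta[-1]
        logits = a[:, None] + b[None, :] + lam * xv * yv
        m = logits.max()
        logz = m + np.log(np.exp(logits - m).sum())
        return logz - (a @ px + b @ py + lam * exy)

    res = minimize(neg_dual, np.zeros(2 * (k + 1) + 1), method="L-BFGS-B")
    a, b, lam = res.x[:k + 1], res.x[k + 1:2 * (k + 1)], res.x[-1]
    logits = a[:, None] + b[None, :] + lam * xv * yv
    p = np.exp(logits - logits.max())
    return p / p.sum()


def maxent_test(x, y, n_surrogates=1000, seed=0):
    """Monte Carlo p-value for rejecting the maximum-entropy null, using
    plug-in mutual information as the divergence measure (one-sided)."""
    x, y = np.asarray(x), np.asarray(y)
    k = int(max(x.max(), y.max()))  # support taken from the data; no smoothing of empty bins
    p_emp = joint_hist(x, y, k)
    p_ref = fit_maxent_pair(p_emp, k)
    mi_obs = mutual_information(p_emp)

    # Draw surrogate data sets of the same size from the reference model and
    # count how often their plug-in MI reaches the observed value.
    rng = np.random.default_rng(seed)
    flat = p_ref.ravel()
    exceed = 0
    for _ in range(n_surrogates):
        idx = rng.choice(flat.size, size=len(x), p=flat)
        xs, ys = np.unravel_index(idx, p_ref.shape)
        if mutual_information(joint_hist(xs, ys, k)) >= mi_obs:
            exceed += 1
    return (exceed + 1) / (n_surrogates + 1)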

    Stability of spontaneous, correlated activity in mouse auditory cortex

    Neural systems can be modeled as networks of functionally connected neural elements. The resulting network can be analyzed using mathematical tools from network science and graph theory to quantify the system's topological organization and to better understand its function. While the network-based approach is common in the analysis of large-scale neural systems probed by non-invasive neuroimaging, few studies have used network science to study the organization of networks reconstructed at the cellular level, and thus many basic and fundamental questions remain unanswered. Here, we used two-photon calcium imaging to record spontaneous activity from the same set of cells in mouse auditory cortex over the course of several weeks. We reconstruct functional networks in which cells are linked to one another by edges weighted according to the correlation of their fluorescence traces. We show that the networks exhibit modular structure across multiple topological scales and that these multi-scale modules unfold as part of a hierarchy. We also show that, on average, network architecture becomes increasingly dissimilar over time, with similarity decaying monotonically with the distance (in time) between sessions. Finally, we show that a small fraction of cells maintain strongly correlated activity over multiple days, forming a stable temporal core surrounded by a fluctuating and variable periphery. Our work provides a careful methodological blueprint for future studies of spontaneous activity measured by two-photon calcium imaging using cutting-edge computational methods and machine learning algorithms informed by explicit graphical models from network science. The methods are easily extended to additional datasets, opening the possibility of studying cellular level network organization of neural systems and how that organization is modulated by stimuli or altered in models of disease.
    Comment: 15 pages, 3 figures
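
    Two of the steps described above translate directly into a short sketch: reconstructing a weighted functional network per imaging session by correlating fluorescence traces, and comparing sessions by correlating their edge weights so that similarity can be examined as a function of the time between recordings. The snippet below is an assumed minimal pipeline, not the authors' code; the array shapes, the use of full-trace Pearson correlations as edge weights, the edge-weight correlation as the similarity measure, and the synthetic example data are all assumptions made for illustration.

import numpy as np


def functional_network(traces):
    """Weighted functional network from an (n_cells, n_timepoints) array of
    fluorescence traces: pairwise Pearson correlations with a zeroed diagonal."""
    w = np.corrcoef(traces)
    np.fill_diagonal(w, 0.0)
    return w


def network_similarity(w1, w2):
    """Similarity of two networks over the same cells: Pearson correlation of
    their upper-triangular edge weights."""
    iu = np.triu_indices_from(w1, k=1)
    return float(np.corrcoef(w1[iu], w2[iu])[0, 1])


# Synthetic stand-in data: one (n_cells, n_timepoints) recording per session.
# Real use would substitute calcium traces of the same cells tracked across weeks.
rng = np.random.default_rng(0)
sessions = [rng.standard_normal((50, 2000)) for _ in range(4)]
nets = [functional_network(s) for s in sessions]

# Similarity as a function of the number of sessions separating two recordings.
# With random data there is no structure; the loop only shows how the lag-wise
# comparison reported in the paper (similarity decaying with time) is computed.
for lag in range(1, len(nets)):
    vals = [network_similarity(nets[i], nets[i + lag])
            for i in range(len(nets) - lag)]
    print(f"lag {lag}: mean similarity {np.mean(vals):.3f}")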