
    Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function

    A recent publication provides the network graph for a neocortical microcircuit comprising 8 million connections between 31,000 neurons (H. Markram et al., Reconstruction and simulation of neocortical microcircuitry, Cell 163 (2015), no. 2, 456-492). Since traditional graph-theoretical methods may not be sufficient to understand the immense complexity of such a biological network, we explored whether methods from algebraic topology could provide a new perspective on its structural and functional organization. Structural topological analysis revealed that directed graphs representing connectivity among neurons in the microcircuit deviated significantly from several varieties of randomized graphs. In particular, the directed graphs contained on the order of 10^7 simplices, i.e., groups of neurons with all-to-all directed connectivity. Some of these simplices contained up to 8 neurons, making them the most extreme neuronal clustering motif ever reported. Functional topological analysis of simulated neuronal activity in the microcircuit revealed novel spatio-temporal metrics that provide an effective classification of functional responses to qualitatively different stimuli. This study represents the first algebraic topological analysis of structural connectomics and connectomics-based spatio-temporal activity in a biologically realistic neural microcircuit. The methods used in the study show promise for more general applications in network science.
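    To make the central structural object concrete, here is a minimal sketch of counting directed simplices on a toy graph: a directed k-simplex is an ordered set of k+1 neurons in which every earlier neuron connects to every later one. The adjacency structure below is a hypothetical example for illustration, not the Markram et al. microcircuit or the authors' pipeline.

```python
# Minimal sketch: count directed simplices (all-to-all, consistently
# ordered cliques) in a toy directed graph. adj[u] is the set of targets
# of node u; this graph is a hypothetical 4-node transitive tournament.
adj = {
    0: {1, 2, 3},
    1: {2, 3},
    2: {3},
    3: set(),
}

def count_directed_simplices(adj, k):
    """Count directed k-simplices: tuples (v0, ..., vk) of distinct nodes
    such that vi -> vj is an edge for every i < j."""
    count = 0

    def extend(chain):
        nonlocal count
        if len(chain) == k + 1:
            count += 1
            return
        # A valid extension must receive an edge from every node in the chain.
        for v in adj[chain[-1]]:
            if all(v in adj[u] for u in chain):
                extend(chain + [v])

    for v in adj:
        extend([v])
    return count

for k in range(4):
    print(f"{k}-simplices:", count_directed_simplices(adj, k))
# On this toy graph: 4, 6, 4, 1
```

    This brute-force enumeration is exponential in the worst case; counting ~10^7 simplices in a 31,000-neuron graph requires the specialized tooling the study relied on, but the definition being counted is the same.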

    “Brainland” vs. “flatland”: How many dimensions do we need in brain dynamics?: Comment on the paper “The unreasonable effectiveness of small neural ensembles in high-dimensional brain” by Alexander N. Gorban et al.

    In their review article (this issue) [1], Gorban, Makarov and Tyukin make a successful effort to show, across biological, physical and mathematical problems, how the high-dimensional brain can organise reliable and fast learning in the high-dimensional world of data using dimensionality-reduction tools. In fact, this paper, and several recent studies, focus on the crucial problem of how the brain manages the information it receives, how that information is organized, and how mathematics can learn from this and apply dimension-related techniques in other fields. Moreover, the opposite problem is also relevant: how we can recover high-dimensional information from low-dimensional representations, the problem of embedding dimensions (the other side of dimensionality reduction). The human brain is a genuinely open problem and a great challenge for human knowledge. How memory is encoded is a fundamental problem in Neuroscience. As the authors mention, the ideas of the blessing of dimensionality (and the opposite curse of dimensionality) are becoming more and more relevant in machine learning.
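    As a toy numerical illustration of the "blessing of dimensionality" the comment refers to (a sketch for intuition, not taken from the paper): independent random vectors concentrate near orthogonality as dimension grows, one of the effects that makes simple linear separation tractable in high dimension.

```python
import numpy as np

# Toy sketch (an illustrative assumption, not from Gorban et al.):
# i.i.d. Gaussian vectors become nearly orthogonal as dimension grows.
rng = np.random.default_rng(0)
for d in (3, 30, 300, 3000):
    x, y = rng.standard_normal((2, d))
    cosine = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    print(f"d={d:4d}  |cos(angle)| = {abs(cosine):.3f}")
```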

    Topological exploration of artificial neuronal network dynamics

    One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike-train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
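    A minimal sketch of this kind of pipeline, under stated assumptions: the spike trains below are synthetic placeholders, the crude binned-count distance stands in for the paper's spike-train metrics, and persistent homology is computed with the third-party ripser package rather than whatever toolchain the authors used.

```python
import numpy as np
from ripser import ripser  # pip install ripser; an assumed dependency

# Hypothetical spike trains: sorted spike times (seconds) for 30 units.
rng = np.random.default_rng(1)
trains = [np.sort(rng.uniform(0, 1, rng.integers(20, 40))) for _ in range(30)]

# Crude spike-train dissimilarity: Euclidean distance between binned counts
# (a stand-in for proper spike-train metrics such as van Rossum distance).
bins = np.linspace(0, 1, 51)
counts = np.array([np.histogram(t, bins)[0] for t in trains], dtype=float)
D = np.linalg.norm(counts[:, None, :] - counts[None, :, :], axis=-1)

# Persistent homology of the space of spike trains under this metric;
# barcode statistics (e.g., total persistence) become classifier features.
dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
lifetimes = [d[:, 1] - d[:, 0] for d in dgms]
features = [float(np.sum(l[np.isfinite(l)])) for l in lifetimes]
print(features)  # one total-persistence feature per homology dimension
```

    In the study's setting, such feature vectors, computed from several spike-train distances, are what the downstream machine learning classifier is trained on.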

    Evaluating performance of neural codes in neural communication networks

    Information needs to be appropriately encoded to be reliably transmitted over a physical medium. Similarly, neurons have their own codes to convey information in the brain. Even though it is well known that neurons exchange information using a pool of several protocols of spatio-temporal encoding, the suitability of each code and its performance as a function of network parameters and external stimuli remains one of the great mysteries in Neuroscience. This paper sheds light on this problem by considering small networks of chemically and electrically coupled Hindmarsh-Rose spiking neurons. We focus on the fundamental mathematical aspects of a class of temporal and firing-rate codes that result from the neurons' action potentials and phases, and quantify their performance by measuring the Mutual Information Rate, i.e., the rate of information exchange. A particularly interesting result concerns the performance of the codes with respect to the way neurons are connected. We show that pairs of neurons with the largest rate of information exchange under the interspike-interval and firing-rate codes are not adjacent in the network, whereas the spiking-time and phase codes promote large rates of information exchange between adjacent neurons. This result, if it extends to larger neural networks, would suggest that small microcircuits of fully connected neurons, also known as cliques, would preferentially exchange information using temporal codes (spiking-time and phase codes), whereas on the macroscopic scale, where there will typically be pairs of neurons that are not directly connected due to the brain's sparsity, the most efficient codes would be the firing-rate and interspike-interval codes, with the latter being closely related to the firing-rate code.
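    As a rough sketch of how such an information-rate comparison could be set up (a plug-in mutual-information estimator on binarised toy sequences; the Hindmarsh-Rose simulation and the paper's exact MIR estimator are not reproduced here, and the bin width is a hypothetical parameter):

```python
import numpy as np

def plugin_mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for two discrete symbol sequences."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    mi = 0.0
    for (a, b), c in joint.items():
        pab = c / n
        mi += pab * np.log2(pab / (px[a] * py[b]))
    return mi

# Toy example (an assumption, not the paper's setup): binarise two spike
# trains into 0/1 symbols per time bin; the mutual information rate is then
# roughly the per-bin mutual information divided by the bin width.
rng = np.random.default_rng(2)
a = rng.integers(0, 2, 10_000)
b = np.where(rng.random(10_000) < 0.8, a, 1 - a)  # noisy copy of a
dt = 0.001  # hypothetical bin width in seconds
print(plugin_mutual_information(a, b) / dt, "bits/s")
```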
