
    A roadmap to integrate astrocytes into Systems Neuroscience.

    Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations on a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to study neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
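    As a concrete illustration of the kind of analysis the abstract calls for, the sketch below applies dimensionality reduction followed by a linear decoder to simulated astrocytic Ca2+ traces. It is a minimal sketch under assumed conditions, not the authors' pipeline: the trace model, trial counts, the binary behavioral label, and the choice of PCA plus logistic regression are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' pipeline): dimensionality reduction and
# decoding applied to simulated astrocytic Ca2+ traces.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_astrocytes, n_timepoints = 200, 50, 100
labels = rng.integers(0, 2, size=n_trials)           # hypothetical binary behavioral state

# Simulate slow Ca2+ transients whose amplitude weakly depends on the state.
t = np.linspace(0, 10, n_timepoints)                  # seconds
kernel = np.exp(-t / 2.0)                             # seconds-long decay, astrocyte-like timescale
drive = rng.poisson(0.05 + 0.05 * labels[:, None, None],
                    size=(n_trials, n_astrocytes, n_timepoints)).astype(float)
calcium = np.apply_along_axis(lambda x: np.convolve(x, kernel)[:n_timepoints], -1, drive)
calcium += 0.1 * rng.standard_normal(calcium.shape)   # imaging noise

# Trial-wise feature vector: mean simulated fluorescence per astrocyte.
X = calcium.mean(axis=2)

# Dimensionality reduction followed by a linear decoder, as a minimal example
# of the population-level analyses discussed in the abstract.
X_low = PCA(n_components=10).fit_transform(X)
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X_low, labels, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```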

    An Efficient Coding Theory for a Dynamic Trajectory Predicts non-Uniform Allocation of Grid Cells to Modules in the Entorhinal Cortex

    Grid cells in the entorhinal cortex encode the position of an animal in its environment using spatially periodic tuning curves of varying periodicity. Recent experiments established that these cells are functionally organized in discrete modules with uniform grid spacing. Here we develop a theory for efficient coding of position, which takes into account the temporal statistics of the animal's motion. The theory predicts a sharp decrease of module population sizes with grid spacing, in agreement with the trends seen in the experimental data. We identify a simple scheme for readout of the grid cell code by neural circuitry that can match in accuracy the optimal Bayesian decoder of the spikes. This readout scheme requires persistence over varying timescales, ranging from ~1 ms to ~1 s, depending on the grid cell module. Our results suggest that the brain employs an efficient representation of position which takes advantage of the spatiotemporal statistics of the encoded variable, similar to the principles that govern early sensory coding.

    Comment: 23 pages, 5 figures. Supplemental Information available from the authors on request. A previous version of this work appeared in abstract form (Program No. 727.02, 2015 Neuroscience Meeting Planner, Chicago, IL: Society for Neuroscience, 2015. Online.)
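    To make the coding scheme concrete, the following sketch builds one-dimensional periodic tuning curves organized into modules of decreasing population size and decodes position from Poisson spike counts with a maximum-likelihood (flat-prior Bayesian) decoder. This is a hedged illustration, not the paper's model: the spacings, cell counts, cosine tuning shape, and firing rates are assumptions chosen for the example.

```python
# Minimal sketch, not the paper's model: 1-D grid-like periodic tuning curves
# grouped into modules, decoded with a Poisson maximum-likelihood decoder.
import numpy as np

rng = np.random.default_rng(1)

L = 10.0                                    # length of the environment (m), assumed
positions = np.linspace(0, L, 500)          # candidate positions for decoding
spacings = [0.5, 1.0, 2.0, 4.0]             # module grid spacings (m), illustrative
cells_per_module = [40, 20, 10, 5]          # population size decreasing with spacing
peak_rate, dt = 20.0, 0.1                   # peak rate (Hz) and decoding window (s)

# Periodic tuning curve: a cosine bump repeated with the module's spacing.
def tuning(pos, spacing, phase):
    return peak_rate * 0.5 * (1 + np.cos(2 * np.pi * (pos - phase) / spacing))

# One (spacing, phase) pair per cell; phases drawn uniformly within a period.
params = [(s, rng.uniform(0, s)) for s, n in zip(spacings, cells_per_module)
          for _ in range(n)]
rates = np.stack([tuning(positions, s, ph) for s, ph in params])  # (cells, positions)

# Simulate spike counts at a true position and decode by maximum likelihood,
# which for a flat prior over position coincides with the Bayesian MAP estimate.
true_pos = 3.7
counts = rng.poisson(np.array([tuning(true_pos, s, ph) for s, ph in params]) * dt)
log_like = counts @ np.log(rates * dt + 1e-12) - (rates * dt).sum(axis=0)
decoded = positions[np.argmax(log_like)]
print(f"true position {true_pos:.2f} m, decoded {decoded:.2f} m")
```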

    Seven properties of self-organization in the human brain

    The principle of self-organization has acquired a fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains in science and philosophy, including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purpose may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: 1) modular connectivity, 2) unsupervised learning, 3) adaptive ability, 4) functional resiliency, 5) functional plasticity, 6) from-local-to-global functional organization, and 7) dynamic system growth. These are defined here in light of insights from neurobiology, cognitive neuroscience, Adaptive Resonance Theory (ART), and physics to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward.
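    Two of the listed properties, unsupervised learning and from-local-to-global functional organization, can be illustrated with a classic toy model. The sketch below trains a small one-dimensional Kohonen self-organizing map; it is not the model discussed in the article, and the unit count, learning-rate schedule, and neighborhood schedule are illustrative assumptions.

```python
# Toy sketch of one self-organization mechanism: a 1-D Kohonen self-organizing
# map, where purely local updates yield a globally ordered (topographic) map.
import numpy as np

rng = np.random.default_rng(2)

n_units, n_steps = 20, 5000
weights = rng.uniform(0, 1, size=(n_units, 2))       # each unit's preferred 2-D input

for step in range(n_steps):
    x = rng.uniform(0, 1, size=2)                     # random input sample (unsupervised)
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Neighborhood width and learning rate shrink over time, so early updates
    # organize the map coarsely and late updates refine it locally.
    sigma = 3.0 * np.exp(-step / 2000)
    lr = 0.5 * np.exp(-step / 2500)
    dist = np.abs(np.arange(n_units) - winner)
    neighborhood = np.exp(-dist**2 / (2 * sigma**2 + 1e-12))
    weights += lr * neighborhood[:, None] * (x - weights)

# Adjacent units end up with similar preferences: a smooth, topographic map.
print(np.round(weights[:5], 2))
```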