A roadmap to integrate astrocytes into Systems Neuroscience.
Systems neuroscience remains a mainly neuronal field, despite abundant evidence that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations on a timescale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. Applying machine-learning-based statistical analyses, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to studying neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration, such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
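The dimensionality-reduction-plus-decoding analysis the abstract advocates can be sketched in a few lines. This is a generic illustration on synthetic data, not the authors' pipeline: the "Ca2+ amplitudes", the binary brain-state variable, and all parameter names are made up for the example.

```python
# Sketch: decoding a binary brain-state variable from simulated astrocyte
# Ca2+ transients via dimensionality reduction followed by a linear decoder.
# All data here are synthetic and illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_astrocytes = 200, 50

# Hypothetical brain state (0/1), weakly encoded in a low-dimensional
# pattern of Ca2+ event amplitudes, plus per-cell noise.
state = rng.integers(0, 2, n_trials)
pattern = rng.normal(size=n_astrocytes)
ca = state[:, None] * pattern[None, :] + rng.normal(size=(n_trials, n_astrocytes))

# Reduce to a few components, then decode the state with cross-validation.
z = PCA(n_components=5, random_state=0).fit_transform(ca)
acc = cross_val_score(LogisticRegression(), z, state, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```

Above-chance accuracy on held-out trials is the kind of evidence that would indicate a variable is actually encoded in the astrocytic population signal.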
Visualizing probabilistic models: Intensive Principal Component Analysis
Unsupervised learning makes manifest the underlying structure of data without
curated training and specific problem definitions. However, the inference of
relationships between data points is frustrated by the `curse of
dimensionality' in high-dimensions. Inspired by replica theory from statistical
mechanics, we consider replicas of the system to tune the dimensionality and
take the limit as the number of replicas goes to zero. The result is the
intensive embedding, which is not only isometric (preserving local distances)
but allows global structure to be more transparently visualized. We develop the
Intensive Principal Component Analysis (InPCA) and demonstrate clear
improvements in visualizations of the Ising model of magnetic spins, a neural
network, and the dark energy cold dark matter ({\Lambda}CDM) model as applied
to the Cosmic Microwave Background.
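One common reading of this construction is classical MDS applied to pairwise Bhattacharyya-type distances between model distributions, with negative eigenvalues permitted (the intensive embedding lives in a Minkowski-like space). The sketch below follows that reading on a one-parameter Gaussian family; the distance convention and constants are illustrative, not the paper's exact definitions.

```python
# Sketch of an InPCA-style embedding: pairwise Bhattacharyya distances
# between probability distributions in a model family, followed by
# MDS-style double centering and eigendecomposition.
import numpy as np

x = np.linspace(-10, 10, 2001)
thetas = np.linspace(-2, 2, 9)                      # model parameters
P = np.array([np.exp(-(x - t) ** 2 / 2) for t in thetas])
P /= P.sum(axis=1, keepdims=True)                   # normalize on the grid

# Bhattacharyya distance: d^2_ij = -ln sum_x sqrt(p_i p_j)
S = np.sqrt(P) @ np.sqrt(P).T
D2 = -np.log(S)

# Double centering as in classical MDS; negative eigenvalues are kept,
# reflecting the non-Euclidean character of the intensive embedding.
n = len(thetas)
J = np.eye(n) - np.ones((n, n)) / n
W = -0.5 * J @ D2 @ J
evals, evecs = np.linalg.eigh(W)
coords = evecs * np.sqrt(np.abs(evals))             # embedding coordinates
print(np.round(np.sort(evals)[::-1], 4))
```

The leading coordinates of `coords` give the low-dimensional visualization; the eigenvalue spectrum indicates how much structure each dimension carries.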
Neural manifold analysis of brain circuit dynamics in health and disease
Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than for those applicable to single-cell experiments. One approach that has gained recent popularity is neural manifold learning. This approach exploits the fact that, although neural datasets may be very high-dimensional, the dynamics of neural activity often traverse a much lower-dimensional space. The topological structures formed by these low-dimensional neural subspaces are referred to as "neural manifolds", and may potentially provide insight linking neural circuit dynamics with cognitive function and behavioral performance. In this paper we review a number of linear and non-linear approaches to neural manifold learning, including principal component analysis (PCA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), Laplacian eigenmaps (LEM), t-SNE, and uniform manifold approximation and projection (UMAP). We outline these methods under a common mathematical nomenclature, and compare their advantages and disadvantages with respect to their use for neural data analysis. We apply them to a number of datasets from the published literature, comparing the manifolds that result from their application to hippocampal place cells, motor cortical neurons during a reaching task, and prefrontal cortical neurons during a multi-behavior task. We find that in many circumstances linear algorithms produce similar results to non-linear methods, although in particular cases where the behavioral complexity is greater, non-linear methods tend to find lower-dimensional manifolds, at the possible expense of interpretability.
We demonstrate that these methods are applicable to the study of neurological disorders through simulation of a mouse model of Alzheimer's Disease, and speculate that neural manifold analysis may help us to understand the circuit-level consequences of molecular and cellular neuropathology.
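The linear-versus-non-linear comparison described above can be illustrated on synthetic data with a known low-dimensional latent variable. The example below uses cosine-tuned "neurons" whose activity traces a 1-D ring in a 20-D rate space; it is a toy stand-in, not the hippocampal, motor, or prefrontal recordings analyzed in the paper.

```python
# Minimal sketch comparing a linear (PCA) and a non-linear (Isomap)
# manifold method on simulated population activity with a known
# one-dimensional latent variable.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 300))      # latent "position"

# 20 neurons with cosine tuning: the activity lies on a 1-D ring
# embedded (non-linearly) in 20-dimensional firing-rate space.
prefs = np.linspace(0, 2 * np.pi, 20, endpoint=False)
rates = (np.cos(theta[:, None] - prefs[None, :])
         + 0.05 * rng.normal(size=(300, 20)))

pca_2d = PCA(n_components=2).fit(rates)
iso_1d = Isomap(n_components=1, n_neighbors=10).fit_transform(rates)

# A ring needs two linear dimensions, but has only one intrinsic one:
# PCA reports a 2-D subspace, Isomap can unroll it into a single axis.
print("variance explained by 2 PCs:",
      pca_2d.explained_variance_ratio_.sum().round(3))
```

This mirrors the paper's observation: the linear method recovers a low-dimensional subspace, while the non-linear method can find a still lower intrinsic dimensionality, at some cost in interpretability.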
VIOLA - A multi-purpose and web-based visualization tool for neuronal-network simulation output
Neuronal network models and corresponding computer simulations are invaluable
tools to aid the interpretation of the relationship between neuron properties,
connectivity and measured activity in cortical tissue. Spatiotemporal patterns
of activity propagating across the cortical surface as observed experimentally
can for example be described by neuronal network models with layered geometry
and distance-dependent connectivity. The interpretation of the resulting stream
of multi-modal and multi-dimensional simulation data calls for integrating
interactive visualization steps into existing simulation-analysis workflows.
Here, we present a set of interactive visualization concepts called views for
the visual analysis of activity data in topological network models, and a
corresponding reference implementation VIOLA (VIsualization Of Layer Activity).
The software is a lightweight, open-source, web-based and platform-independent
application combining and adapting modern interactive visualization paradigms,
such as coordinated multiple views, for massively parallel neurophysiological
data. For a use-case demonstration we consider spiking activity data of a
two-population, layered point-neuron network model subject to a spatially
confined excitation originating from an external population. With the multiple
coordinated views, an explorative and qualitative assessment of the
spatiotemporal features of neuronal activity can be performed ahead of a
detailed quantitative analysis of specific aspects of the data.
Furthermore, ongoing efforts including the European Human Brain Project aim at
providing online user portals for integrated model development, simulation,
analysis and provenance tracking, wherein interactive visual analysis tools are
one component. Browser-compatible, web-technology based solutions are therefore
required. Within this scope, with VIOLA we provide a first prototype.
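The kind of per-bin layer activity such views render reduces, at its core, to aggregating spike events into a (time, x, y) array of counts. The sketch below is a generic illustration of that binning step, not VIOLA's actual data pipeline; all values are synthetic.

```python
# Sketch: binning spike data from a layered network into a
# (time, x, y) array of counts, the raw material for a spatially
# resolved layer-activity view. Data and bin widths are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_spikes, extent, t_max = 5000, 4.0, 1000.0    # layer size (mm), duration (ms)

# Synthetic spike events: (time, x position, y position) per spike.
times = rng.uniform(0, t_max, n_spikes)
xs = rng.uniform(0, extent, n_spikes)
ys = rng.uniform(0, extent, n_spikes)

dt, dx = 10.0, 0.5                             # temporal and spatial bin widths
frames, _ = np.histogramdd(
    np.column_stack([times, xs, ys]),
    bins=(int(t_max / dt), int(extent / dx), int(extent / dx)),
    range=((0, t_max), (0, extent), (0, extent)),
)
print(frames.shape)                            # one spatial frame per time bin
```

Each slice `frames[t]` is one frame of the activity "movie"; coordinated views then display such frames alongside rasters and population rates for the same time window.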
Universal Organization of Resting Brain Activity at the Thermodynamic Critical Point
Thermodynamic criticality describes emergent phenomena in a wide variety of
complex systems. In the mammalian brain, the complex dynamics that
spontaneously emerge from neuronal interactions have been characterized as
neuronal avalanches, a form of critical branching dynamics. Here, we show that
neuronal avalanches also reflect that the brain dynamics are organized close to
a thermodynamic critical point. We recorded spontaneous cortical activity in
monkeys and humans at rest using high-density intracranial microelectrode
arrays and magnetoencephalography, respectively. By numerically changing a
control parameter equivalent to thermodynamic temperature, we observed typical
critical behavior in cortical activities near the actual physiological
condition, including the phase transition of an order parameter, as well as the
divergence of susceptibility and specific heat. Finite-size scaling of these
quantities allowed us to derive robust critical exponents highly consistent
across monkeys and humans that uncover a distinct, yet universal organization of
brain dynamics.
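The critical signatures described above, a peak in the order-parameter fluctuations near the transition, can be reproduced in the textbook setting of a small 2-D Ising model. This is a toy sketch, not the authors' analysis of cortical recordings; lattice size, temperatures, and sweep counts are illustrative.

```python
# Toy sketch: checkerboard Metropolis simulation of a 2-D Ising model,
# showing that the magnetic susceptibility (an order-parameter
# fluctuation measure) peaks near the critical temperature T_c ~ 2.27.
import numpy as np

rng = np.random.default_rng(1)
L = 16
checker = np.indices((L, L)).sum(axis=0) % 2

def sweep(s, beta):
    """One Metropolis sweep via two checkerboard half-updates."""
    for offset in (0, 1):
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
              + np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2 * s * nb                                # cost of flipping spin
        flip = (rng.random((L, L)) < np.exp(-beta * dE)) & (checker == offset)
        s[flip] *= -1

def susceptibility(T, n_sweeps=2000, burn=500):
    s = rng.choice([-1, 1], size=(L, L))
    m = []
    for i in range(n_sweeps):
        sweep(s, 1.0 / T)
        if i >= burn:
            m.append(abs(s.mean()))
    m = np.asarray(m)
    return L * L * (np.mean(m**2) - np.mean(m)**2) / T

chis = {T: susceptibility(T) for T in (1.5, 2.3, 4.0)}
print(chis)
```

Repeating this for several lattice sizes L and collapsing the curves is the finite-size scaling step that yields critical exponents, the analogue of what the paper performs on cortical data with a numerically varied control parameter.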
Searching for collective behavior in a network of real neurons
Maximum entropy models are the least structured probability distributions
that exactly reproduce a chosen set of statistics measured in an interacting
network. Here we use this principle to construct probabilistic models which
describe the correlated spiking activity of populations of up to 120 neurons in
the salamander retina as it responds to natural movies. Already in groups as
small as 10 neurons, interactions between spikes can no longer be regarded as
small perturbations in an otherwise independent system; for 40 or more neurons
pairwise interactions need to be supplemented by a global interaction that
controls the distribution of synchrony in the population. Here we show that
such "K-pairwise" models--being systematic extensions of the previously used
pairwise Ising models--provide an excellent account of the data. We explore the
properties of the neural vocabulary by: 1) estimating its entropy, which
constrains the population's capacity to represent visual information; 2)
classifying activity patterns into a small set of metastable collective modes;
3) showing that the neural codeword ensembles are extremely inhomogeneous; 4)
demonstrating that the state of individual neurons is highly predictable from
the rest of the population, allowing the capacity for error correction.
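For a small group of neurons, the pairwise maximum-entropy ("Ising") fit can be carried out exactly by enumerating all 2^n binary words and matching means and pairwise correlations by gradient ascent. The sketch below does this for n = 5 with synthetic target statistics; the retinal data and the K-pairwise extension with a global synchrony term are beyond this toy example.

```python
# Minimal exact maximum-entropy (pairwise Ising) fit for n binary
# neurons, enumerating all 2^n states. Target statistics come from a
# random ground-truth model, standing in for measured firing rates
# and pairwise correlations.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 5
states = np.array(list(product([0, 1], repeat=n)), dtype=float)  # 32 words

def model_stats(h, J):
    """Exact means and correlations under p(s) proportional to exp(h.s + s.J.s)."""
    E = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    mean = p @ states
    corr = states.T @ (p[:, None] * states)
    return mean, corr

h_true = rng.normal(0, 0.5, n)
J_true = np.triu(rng.normal(0, 0.3, (n, n)), 1)
m_data, c_data = model_stats(h_true, J_true)

# Gradient ascent on the log-likelihood: for a max-ent model the
# gradient is simply (data statistics) - (model statistics).
h, J = np.zeros(n), np.zeros((n, n))
for _ in range(5000):
    m, c = model_stats(h, J)
    h += 0.1 * (m_data - m)
    J += 0.1 * np.triu(c_data - c, 1)

m_fit, c_fit = model_stats(h, J)
print(np.abs(m_fit - m_data).max())
```

For the 100-plus-neuron populations in the paper, exact enumeration is infeasible and Monte Carlo estimates of the model statistics replace the exact sums, but the fitting principle is the same.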
Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits
Understanding the computations performed by neuronal circuits requires characterizing the strength and dynamics of the connections between individual neurons. This characterization is typically achieved by measuring the correlation in the activity of two neurons. We have developed a new measure for studying connectivity in neuronal circuits based on information theory, the incremental mutual information (IMI). By conditioning out the temporal dependencies in the responses of individual neurons before measuring the dependency between them, IMI improves on standard correlation-based measures in several important ways: 1) it has the potential to disambiguate statistical dependencies that reflect the connection between neurons from those caused by other sources (e.g., shared inputs or intrinsic cellular or network mechanisms), provided that the dependencies have appropriate timescales; 2) for the study of early sensory systems, it does not require responses to repeated trials of identical stimulation; and 3) it does not assume that the connection between neurons is linear. We describe the theory and implementation of IMI in detail and demonstrate its utility on experimental recordings from the primate visual system.
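The core idea, measuring the dependence between one neuron's past and another's present after conditioning out the target's own history, can be illustrated with a plug-in conditional mutual information estimator on binary spike trains. This is a simplified sketch on synthetic data; the paper's full IMI estimator is more general than this single-lag, binary version.

```python
# Sketch of the idea behind incremental mutual information: conditional
# MI between a presynaptic neuron's past spike and a postsynaptic
# neuron's present spike, conditioning out the target's own history.
import numpy as np

rng = np.random.default_rng(0)
T = 200_000
x = (rng.random(T) < 0.2).astype(int)            # presynaptic spike train
y = np.zeros(T, dtype=int)
for t in range(1, T):
    p = 0.05 + 0.3 * x[t - 1] + 0.2 * y[t - 1]   # connection + self-history
    y[t] = rng.random() < p

def cond_mi(a, b, c):
    """I(a; b | c) in bits for binary arrays, via plug-in joint counts."""
    joint = np.bincount(4 * a + 2 * b + c, minlength=8).reshape(2, 2, 2)
    joint = joint / joint.sum()
    mi = 0.0
    for i, j, k in np.ndindex(2, 2, 2):
        p_abc = joint[i, j, k]
        if p_abc == 0:
            continue
        p_c = joint[:, :, k].sum()
        p_ac = joint[i, :, k].sum()
        p_bc = joint[:, j, k].sum()
        mi += p_abc * np.log2(p_abc * p_c / (p_ac * p_bc))
    return mi

plain = cond_mi(x[:-1], y[1:], np.zeros(T - 1, dtype=int))  # ordinary MI
imi = cond_mi(x[:-1], y[1:], y[:-1])                        # history-conditioned
print(f"MI: {plain:.3f} bits, conditioned: {imi:.3f} bits")
```

The residual dependence that survives conditioning is what IMI attributes to the connection itself rather than to the target neuron's own temporal structure.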