    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states influence local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.
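
    A minimal sketch (not taken from the review; the stimuli, response means, and correlation levels are invented) of one point the abstract raises: how correlations in population activity can shape how much information a downstream decoder can extract. Two model neurons respond to two stimuli, and a linear read-out is tested under increasing noise correlation.

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 5000
        mu_a = np.array([1.0, 1.0])   # mean response of the 2-neuron "population" to stimulus A (assumed)
        mu_b = np.array([2.0, 2.0])   # mean response to stimulus B (assumed)

        def decoding_accuracy(rho):
            """Linear (Fisher discriminant) decoding accuracy for noise correlation rho."""
            cov = np.array([[1.0, rho], [rho, 1.0]])
            resp_a = rng.multivariate_normal(mu_a, cov, n_trials)
            resp_b = rng.multivariate_normal(mu_b, cov, n_trials)
            w = np.linalg.solve(cov, mu_b - mu_a)          # optimal linear read-out weights
            thresh = w @ (mu_a + mu_b) / 2.0
            return 0.5 * ((resp_a @ w < thresh).mean() + (resp_b @ w > thresh).mean())

        for rho in (0.0, 0.4, 0.8):
            print(f"noise correlation {rho:.1f} -> decoding accuracy {decoding_accuracy(rho):.3f}")

    In this toy setting, positive correlations aligned with the signal direction lower the accuracy, illustrating that correlation structure, not just single-cell tuning, affects the coding capacity of a population.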

    Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics

    Neural activity patterns related to behavior occur at many scales in time and space, from the atomic and molecular to the whole brain. Here we explore the feasibility of interpreting neurophysiological data in the context of many-body physics by using tools that physicists have devised to analyze comparable hierarchies in other fields of science. We focus on a mesoscopic level that offers a multi-step pathway between the microscopic functions of neurons and the macroscopic functions of brain systems revealed by hemodynamic imaging. We use electroencephalographic (EEG) records collected from high-density electrode arrays fixed on the epidural surfaces of primary sensory and limbic areas in rabbits and cats trained to discriminate conditioned stimuli (CS) in the various modalities. Analysis of the EEG signals at high temporal resolution with the Hilbert transform gives evidence for diverse intermittent spatial patterns of amplitude modulation (AM) and phase modulation (PM) of carrier waves that repeatedly re-synchronize in the beta and gamma ranges at near-zero time lags over long distances. The dominant mechanism for neural interactions, axodendritic synaptic transmission, should impose distance-dependent delays on the EEG oscillations owing to finite propagation velocities. It does not. EEGs instead show evidence for anomalous dispersion: the existence in neural populations of a low-velocity range of information and energy transfers and a high-velocity range of the spread of phase transitions. This distinction labels the phenomenon but does not explain it. In this report we explore the analysis of these phenomena using concepts of energy dissipation, the maintenance by cortex of multiple ground states corresponding to AM patterns, and the exclusive selection by spontaneous breakdown of symmetry (SBS) of single states in sequences.
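
    A minimal sketch of the AM/PM read-out described above, applied to a synthetic trace rather than the epidural EEG data (the sampling rate, band edges, and carrier frequency are assumptions): band-pass the signal in a gamma band, take the Hilbert transform, and separate the analytic amplitude (AM) from the unwrapped analytic phase (PM).

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500.0                                    # assumed sampling rate (Hz)
        t = np.arange(0.0, 4.0, 1.0 / fs)
        # synthetic "EEG": a 40 Hz carrier with slow amplitude modulation plus noise
        x = (1.0 + 0.5 * np.sin(2 * np.pi * 1.5 * t)) * np.sin(2 * np.pi * 40.0 * t)
        x += 0.3 * np.random.default_rng(1).standard_normal(t.size)

        # zero-phase band-pass in an assumed gamma band (30-60 Hz)
        b, a = butter(4, [30.0 / (fs / 2), 60.0 / (fs / 2)], btype="band")
        x_gamma = filtfilt(b, a, x)

        analytic = hilbert(x_gamma)                   # analytic signal
        am = np.abs(analytic)                         # instantaneous amplitude (AM envelope)
        pm = np.unwrap(np.angle(analytic))            # instantaneous phase (PM), unwrapped
        inst_freq = np.diff(pm) * fs / (2 * np.pi)    # instantaneous frequency in Hz

        print(f"mean instantaneous frequency: {inst_freq.mean():.1f} Hz")
        print(f"AM envelope range: {am.min():.2f} to {am.max():.2f}")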

    Oscillations, metastability and phase transitions in brain and models of cognition

    Neuroscience is practiced in many different forms and at many different organizational levels of the nervous system. Which of these levels and associated conceptual frameworks is most informative for elucidating the association of neural processes with processes of cognition is an empirical question, subject to pragmatic validation. In this essay, I select the framework of Dynamic System Theory. In recent years, several investigators have applied tools and concepts of this theory to the interpretation of observational data and to the design of neuronal models of cognitive functions. I will first trace the essentials of the conceptual development and hypotheses separately, to discern observational tests and criteria for the functional realism and conceptual plausibility of the alternatives they offer. I will then show that the statistical mechanics of phase transitions in brain activity, and some of its models, provides a new and possibly revealing perspective on brain events in cognition.
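
    As an illustration of the kind of phase-transition behavior the essay appeals to, the sketch below simulates the Kuramoto model, a standard (and here purely illustrative, not the essay's own) model in which a population of coupled oscillators passes from incoherence to collective synchronization as the coupling strength crosses a critical value; the order parameter r plays the role of a macroscopic variable that changes abruptly at the transition.

        import numpy as np

        rng = np.random.default_rng(2)
        n_osc, dt, n_steps = 500, 0.01, 4000
        omega = rng.normal(0.0, 1.0, n_osc)              # natural frequencies

        def order_parameter(theta):
            """Kuramoto order parameter r in [0, 1]: 0 = incoherent, 1 = fully synchronized."""
            return np.abs(np.exp(1j * theta).mean())

        for K in (0.5, 1.0, 1.5, 2.0, 3.0):              # coupling strengths to sweep
            theta = rng.uniform(0.0, 2.0 * np.pi, n_osc)
            for _ in range(n_steps):
                z = np.exp(1j * theta).mean()            # mean field
                r, psi = np.abs(z), np.angle(z)
                # Euler step of d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
                theta += dt * (omega + K * r * np.sin(psi - theta))
            print(f"K = {K:.1f}: r = {order_parameter(theta):.2f}")

    For unit-variance normally distributed natural frequencies the mean-field transition lies near K ≈ 1.6, so the sweep should show r staying small below that value and growing rapidly above it.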

    Retinal ganglion cells and the magnocellular, parvocellular, and koniocellular subcortical visual pathways from the eye to the brain

    In primates, including humans, most retinal ganglion cells send signals to the lateral geniculate nucleus (LGN) of the thalamus. The anatomical and functional properties of the two major pathways through the LGN, the parvocellular (P) and magnocellular (M) pathways, are now well understood. Neurones in these pathways appear to convey a filtered version of the retinal image to primary visual cortex for further analysis. The properties of the P-pathway suggest it is important for high spatial acuity and red-green color vision, while those of the M-pathway suggest it is important for achromatic visual sensitivity and motion vision. Recent work has sharpened our understanding of how these properties are built in the retina, and has described subtle but important nonlinearities that shape the signals that cortex receives. In addition to the P- and M-pathways, other retinal ganglion cells also project to the LGN. These ganglion cells are larger than those in the P- and M-pathways, have different retinal connectivity, and project to distinct regions of the LGN, together forming heterogeneous koniocellular (K) pathways. Recent work has started to reveal the properties of these K-pathways, in the retina and in the LGN. The functional properties of the K-pathways are more complex than those of the P- and M-pathways, and the K-pathways are likely to make a distinct contribution to vision. They provide a complementary pathway to the primary visual cortex, but can also send signals directly to extrastriate visual cortex. At the level of the LGN, many neurones in the K-pathways seem to integrate retinal with non-retinal inputs, and some may provide an early site of binocular convergence.
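
    A hedged illustration of the "filtered version of the retinal image" idea, using the textbook difference-of-Gaussians receptive-field model rather than anything from the paper (all parameter values below are invented): a small-centre, P-like cell and a larger-centre, M-like cell are compared by the spatial frequency at which their grating response peaks.

        import numpy as np

        def dog_response(f, k_c, r_c, k_s, r_s):
            """Response of a difference-of-Gaussians receptive field to a grating of
            spatial frequency f (cycles/deg): centre term minus surround term."""
            centre = k_c * np.pi * r_c**2 * np.exp(-(np.pi * r_c * f) ** 2)
            surround = k_s * np.pi * r_s**2 * np.exp(-(np.pi * r_s * f) ** 2)
            return centre - surround

        f = np.logspace(-1, 1.5, 400)                                      # 0.1 to ~32 cycles/deg
        p_like = dog_response(f, k_c=200.0, r_c=0.05, k_s=2.0, r_s=0.4)    # small centre radius
        m_like = dog_response(f, k_c=60.0, r_c=0.15, k_s=1.0, r_s=1.0)     # larger centre radius

        print(f"P-like cell peaks near {f[np.argmax(p_like)]:.1f} cycles/deg")
        print(f"M-like cell peaks near {f[np.argmax(m_like)]:.1f} cycles/deg")

    The smaller centre pushes the P-like cell's peak to higher spatial frequencies, one simple way of capturing the acuity difference the abstract describes.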

    Intrinsic dimensionality in vision: Nonlinear filter design and applications

    Biological vision and computer vision can no longer be treated independently. The digital revolution and the emergence of ever more sophisticated technical applications have created a symbiosis between the two communities. Competitive technical devices that challenge human performance rely increasingly on algorithms motivated by the human visual system. Conversely, computational methods can be used to gain a richer understanding of neural behavior, e.g. the behavior of populations of multiple processing units. The relations between computational approaches and biological findings range from low-level vision to the cortical areas responsible for higher cognitive abilities.

    In early stages of the visual cortex, cells have been recorded that cannot be explained by the standard approach of orientation- and frequency-selective linear filters. These cells do not respond to straight lines or simple gratings, but fire whenever a more complicated stimulus, such as a corner or an end-stopped line, is presented within the receptive field. Using the concept of intrinsic dimensionality, these cells can be classified as intrinsic-two-dimensional systems. The intrinsic dimensionality is the number of degrees of freedom in the domain required to completely determine a signal: a constant image has dimension zero, straight lines and trigonometric functions in one direction have dimension one, and the remaining signals, which require the full number of degrees of freedom, have dimension two. In these terms, the reported cells respond only to intrinsically two-dimensional signals. Motivated by the classical approach, which can be realized by orientation- and frequency-selective Gabor filter functions, a generalized Gabor framework is developed in the context of second-order Volterra systems. The generalized Gabor approach is then used to design intrinsic-two-dimensional systems that have the same selectivity properties as the reported cells in early visual cortex.

    Numerical cognition is commonly assumed to be a higher cognitive ability of humans. Estimating the number of things in the environment requires a high degree of abstraction, and several studies have shown that humans and other species have access to this abstract information; it is still unclear, however, how it can be extracted by neural hardware. Addressing this issue requires confronting the immense invariance of number: a great many operations can be applied to objects without changing their number. In this work, the problem is considered from a topological perspective, and well-known relations between differential geometry and topology are used to develop a computational model. Surprisingly, the operators that provide the features integrated by the system are intrinsic-two-dimensional operators. The model is used to conduct standard number-estimation experiments, and the results are compared to reported human behavior.

    The last topic of this work is active object recognition. The ability to move the information-gathering device, as humans can move their eyes, provides the opportunity to choose the next action, and studies of human saccade behavior suggest that this choice is not made at random. In order to decrease the time an active object recognition system needs to reach a certain level of performance, several action-selection strategies are investigated. The strategies considered in this work are based on information-theoretic and probabilistic concepts, and are finally compared to a strategy based on an intrinsic-two-dimensional operator. All three topics are investigated with respect to their relation to the concept of intrinsic dimensionality from a mathematical point of view.
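
    A minimal sketch of the intrinsic-dimensionality idea using a standard tool, the local structure tensor, rather than the generalized Gabor / Volterra construction developed in the thesis: the two eigenvalues of the smoothed structure tensor give a practical i0D / i1D / i2D classification, with flat patches leaving both eigenvalues small, edges and gratings one large, and corners or end-stopped structure both large. The threshold below is an arbitrary assumption.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def structure_tensor_eigs(img, sigma=2.0):
            """Per-pixel eigenvalues (lam1 >= lam2) of the Gaussian-smoothed structure tensor."""
            ix = sobel(img, axis=1, mode="reflect")      # horizontal gradient
            iy = sobel(img, axis=0, mode="reflect")      # vertical gradient
            jxx = gaussian_filter(ix * ix, sigma)
            jxy = gaussian_filter(ix * iy, sigma)
            jyy = gaussian_filter(iy * iy, sigma)
            tr, det = jxx + jyy, jxx * jyy - jxy**2
            disc = np.sqrt(np.maximum(tr**2 / 4.0 - det, 0.0))
            return tr / 2.0 + disc, tr / 2.0 - disc

        # test image: a bright quadrant gives a flat region, a straight edge, and a corner
        img = np.zeros((64, 64))
        img[32:, 32:] = 1.0
        lam1, lam2 = structure_tensor_eigs(img)

        tau = 0.01 * lam1.max()                          # assumed significance threshold
        for name, (r, c) in {"flat": (10, 10), "edge": (48, 32), "corner": (32, 32)}.items():
            if lam2[r, c] > tau:
                label = "i2D"                            # both eigenvalues large: corner-like
            elif lam1[r, c] > tau:
                label = "i1D"                            # one eigenvalue large: edge or grating
            else:
                label = "i0D"                            # both small: locally constant
            print(f"{name:>6s} patch -> {label}")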

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One
