
    The Topology and Geometry of Neural Representations

    A central question for neuroscience is how to characterize brain representations of perceptual and cognitive content. An ideal characterization should distinguish different functional regions with robustness to noise and idiosyncrasies of individual brains that do not correspond to computational differences. Previous studies have characterized brain representations by their representational geometry, which is defined by the representational dissimilarity matrix (RDM), a summary statistic that abstracts from the roles of individual neurons (or response channels) and characterizes the discriminability of stimuli. Here we explore a further step of abstraction: from the geometry to the topology of brain representations. We propose topological representational similarity analysis (tRSA), an extension of representational similarity analysis (RSA) that uses a family of geo-topological summary statistics that generalizes the RDM to characterize the topology while de-emphasizing the geometry. We evaluate this new family of statistics in terms of sensitivity and specificity for model selection using both simulations and functional MRI (fMRI) data. In the simulations, the ground truth is a data-generating layer representation in a neural network model, and the models are the same layer and other layers in different model instances (trained from different random seeds). In fMRI, the ground truth is a visual area, and the models are the same area and other areas measured in different subjects. Results show that topology-sensitive characterizations of population codes are robust to noise and interindividual variability and maintain excellent sensitivity to the unique representational signatures of different neural network layers and brain regions. Code: https://github.com/doerlbh/TopologicalRS
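
    The abstract's central move, going from the RDM's exact distances to a topology-emphasizing summary, can be sketched compactly. Below is a minimal Python illustration of one plausible geo-topological transform, assuming a simple two-threshold compression of the RDM; the function name, distance metric, and quantile thresholds are illustrative choices, not the paper's exact definitions (see the linked repository for those).

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def geo_topological_rdm(patterns, low_q=0.3, high_q=0.7):
            # Conditions-x-conditions RDM from condition-by-channel activity
            # patterns, using correlation distance.
            rdm = squareform(pdist(patterns, metric="correlation"))
            # Two-threshold compression (illustrative, not the paper's exact
            # statistic): dissimilarities below the low_q quantile become 0,
            # those above the high_q quantile become 1, and the band between
            # is rescaled linearly -- emphasizing neighborhood relations
            # (topology) over exact distances (geometry).
            d = rdm[np.triu_indices_from(rdm, k=1)]
            lo, hi = np.quantile(d, [low_q, high_q])
            gt = np.clip((rdm - lo) / (hi - lo), 0.0, 1.0)
            np.fill_diagonal(gt, 0.0)
            return gt

        # Example: 20 stimulus conditions measured over 100 response channels.
        patterns = np.random.randn(20, 100)
        gt_rdm = geo_topological_rdm(patterns)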

    Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience

    A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement (e.g., fMRI and invasive or scalp electrophysiology), and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices (RDMs), which characterize the information carried by a given representation in a brain or model. Building on a rich psychological and mathematical literature on similarity analysis, we propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing RDMs. We demonstrate RSA by relating representations of visual objects as measured with fMRI in early visual cortex and the fusiform face area to computational models spanning a wide range of complexities. The RDMs are simultaneously related via second-level application of multidimensional scaling and tested using randomization and bootstrap techniques. We discuss the broad potential of RSA, including novel approaches to experimental design, and argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.
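
    The core computation RSA describes, characterizing each representation by its pairwise dissimilarities and then comparing representations at that second level, fits in a few lines. Below is a minimal Python sketch with simulated data; the correlation-distance RDM and Spearman rank comparison are common choices in the RSA literature, and all array shapes are illustrative.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        def rdm(patterns):
            # Condensed RDM: 1 - Pearson correlation between the activity
            # patterns of each pair of experimental conditions.
            return pdist(patterns, metric="correlation")

        # Simulated responses of a brain region and a model to 30 stimuli.
        brain = np.random.randn(30, 500)   # e.g., 500 fMRI voxels
        model = np.random.randn(30, 1024)  # e.g., 1024 model units

        # Compare the two representations without any unit-to-voxel mapping:
        # only the pattern of pairwise dissimilarities matters.
        rho, _ = spearmanr(rdm(brain), rdm(model))
        print(f"RDM rank correlation: {rho:.3f}")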

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Cortico-hippocampal activations for high entropy visual stimulus: an fMRI perspective

    We perceive the environment around us in order to act upon it. To achieve the desired outcome effectively, we not only need the incoming information to be processed efficiently but also need to know how reliable that information is. How this uncertainty is extracted from the visual input and how it is represented in the brain are still open questions. The hippocampus reacts to different measures of uncertainty. Because it is strongly connected to different cortical and subcortical regions, the hippocampus has the resources to communicate such information to other brain regions involved in visual processing and other cognitive processes. In this thesis, we investigate the aspects of uncertainty to which the hippocampus reacts. Is it the uncertainty in the ongoing recognition attempt of a temporally unfolding stimulus, or is it the low-level spatiotemporal entropy? To answer this question, we used dynamic visual stimuli with varying spatial and spatiotemporal entropy: well-structured virtual tunnel videos and the corresponding phase-scrambled videos with matching local luminance and contrast per frame. We also included pixel-scrambled videos with high spatial and spatiotemporal entropy in our stimulus set. Brain responses (fMRI images) were recorded while participants watched these videos and performed an engaging but cognitively independent task. Using the general linear model (GLM), we modeled the brain responses to the different video types and found that the early visual cortex and the hippocampus responded more strongly to videos with higher spatiotemporal entropy. Using independent component analysis, we further investigated which underlying networks were recruited in processing high-entropy visual information and examined how these networks might influence each other. We found two cortico-hippocampal networks involved in processing our stimulus videos. While one of them represented a general primary visual processing network, the other was activated strongly by the high-entropy videos and deactivated by the well-structured virtual tunnel videos. We also found a hierarchy in the processing stream, with information flowing from less stimulus-specific to more stimulus-specific networks.
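
    As a rough illustration of the GLM step described here, the Python sketch below models each voxel's time course as a weighted sum of one regressor per video type plus an intercept, then forms an entropy contrast. All data and regressors are placeholders (real regressors would be stimulus boxcars convolved with a hemodynamic response function), and the design is an assumption for illustration, not the thesis's actual model.

        import numpy as np

        # Placeholder design matrix: one regressor per video type plus an
        # intercept. Real regressors would be stimulus boxcars convolved
        # with a hemodynamic response function.
        n_timepoints, n_voxels = 300, 2000
        X = np.column_stack([
            np.random.rand(n_timepoints),  # structured virtual-tunnel videos
            np.random.rand(n_timepoints),  # phase-scrambled videos
            np.random.rand(n_timepoints),  # pixel-scrambled (high-entropy) videos
            np.ones(n_timepoints),         # intercept
        ])
        Y = np.random.randn(n_timepoints, n_voxels)  # voxel time courses

        # Ordinary least squares fit of the GLM: one beta per regressor
        # and voxel.
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

        # A contrast such as pixel-scrambled minus structured asks which
        # voxels respond more strongly to higher spatiotemporal entropy.
        contrast = beta[2] - beta[0]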

    A cuneate spiking neural network learning to classify naturalistic texture stimuli under varying sensing conditions

    We implemented a functional neuronal network that learned to discriminate haptic features from biomimetic tactile sensor inputs using a two-layer spiking neuron model and a homeostatic synaptic learning mechanism. The first-order neuron model emulated biological tactile afferents, and the second-order neuron model emulated biological cuneate neurons. We evaluated 10 naturalistic textures using a passive touch protocol under varying sensing conditions. Tactile sensor data acquired with five textures under five sensing conditions were used for synaptic learning, tuning the synaptic weights between tactile afferents and cuneate neurons. Using the post-learning synaptic weights, we evaluated individual and population cuneate neuron responses by decoding across the 10 stimuli under varying sensing conditions, which resulted in high decoding performance. We further validated decoding across stimuli irrespective of sensing velocity using a set of 25 cuneate neuron responses, which yielded a median decoding performance of 96% across the set of cuneate neurons. Being able to learn and perform generalized discrimination across tactile stimuli makes this functional spiking tactile system effective and suitable for further robotic applications.
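
    To make the two-layer architecture concrete, here is a heavily simplified Python sketch: a leaky integrate-and-fire layer standing in for the cuneate neurons, driven by afferent spike trains, with a homeostatic rate-scaling rule as a stand-in for the paper's synaptic learning mechanism. The neuron model, parameters, and learning rule are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def lif_layer(inputs, w, dt=1e-3, tau=0.02, v_th=1.0):
            # Leaky integrate-and-fire layer: spike trains `inputs`
            # (timesteps x n_pre) drive n_post neurons through weights w
            # (n_pre x n_post); returns the output spike raster.
            spikes = np.zeros((inputs.shape[0], w.shape[1]))
            v = np.zeros(w.shape[1])
            for t in range(inputs.shape[0]):
                v += dt / tau * (-v) + inputs[t] @ w  # leak + synaptic drive
                fired = v >= v_th
                spikes[t] = fired
                v[fired] = 0.0                        # reset after a spike
            return spikes

        # Illustrative setup: 16 tactile-afferent channels, 25 "cuneate"
        # neurons, 1 s of Poisson-like afferent activity.
        rng = np.random.default_rng(0)
        afferents = (rng.random((1000, 16)) < 0.05).astype(float)
        w = rng.random((16, 25)) * 0.1

        # Homeostatic scaling: nudge each output neuron's mean firing rate
        # toward a target (a stand-in for the paper's learning mechanism).
        target_rate = 0.02
        for epoch in range(20):
            rate = lif_layer(afferents, w).mean(axis=0)
            w *= 1.0 + 0.1 * (target_rate - rate)     # per-neuron scaling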

    Comparing the primate ventral visual stream and state-of-the-art deep convolutional neural networks for core object recognition

    Our ability to recognize and categorize objects in our surroundings is a critical component of our cognitive processes. Despite the enormous variations in each object's appearance (due to variations in object position, pose, scale, illumination, and the presence of visual clutter), primates are thought to be able to quickly and easily distinguish objects from among tens of thousands of possibilities. The primate ventral visual stream is believed to support this view-invariant visual object recognition ability by untangling object identity manifolds. Convolutional neural networks (CNNs), inspired by the primate visual system, have also shown remarkable performance in object recognition tasks. This review explores and compares the mechanisms of object recognition in the primate ventral visual stream and state-of-the-art deep CNNs. The research questions address the extent to which CNNs have approached human-level object recognition and how their performance compares to the primate ventral visual stream. The objectives include providing an overview of the literature on the ventral visual stream and CNNs, comparing their mechanisms, and identifying strengths and limitations for core object recognition. The review presents the ventral visual stream's structure, its visual representations, and the process of untangling object manifolds, and it covers the architecture of CNNs. Comparing the two visual systems shows that deep CNNs have achieved remarkable performance in certain aspects of object recognition but still fall short of replicating the complexities of the primate visual system. Further research is needed to bridge the gap between computational models and the intricate neural mechanisms underlying human object recognition.

    Renewing the respect for similarity

    In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction—critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience. In support of this stance, the present paper (1) offers a discussion of conceptual, mathematical, computational, and empirical aspects of similarity, as applied to the problems of visual object and scene representation, recognition, and interpretation, (2) mentions some key computational problems arising in attempts to put similarity to use, along with their possible solutions, (3) briefly states a previously developed similarity-based framework for visual object representation, the Chorus of Prototypes, along with the empirical support it enjoys, (4) presents new mathematical insights into the effectiveness of this framework, derived from its relationship to locality-sensitive hashing (LSH) and to concomitant statistics, (5) introduces a new model, the Chorus of Relational Descriptors (ChoRD), that extends this framework to scene representation and interpretation, (6) describes its implementation and testing, and finally (7) suggests possible directions in which the present research program can be extended in the future.
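
    Since the argument leans on locality-sensitive hashing, a minimal example helps fix ideas. The Python sketch below implements the standard random-hyperplane scheme (SimHash) for cosine similarity: nearby vectors agree on most hash bits, so similarity-preserving associative lookup reduces to comparing short binary signatures. Dimensions and bit counts are arbitrary illustrative choices.

        import numpy as np

        def lsh_signature(x, planes):
            # Random-hyperplane LSH (SimHash): the sign pattern of x against
            # a fixed set of random hyperplanes; vectors with high cosine
            # similarity agree on most bits with high probability.
            return (planes @ x >= 0).astype(np.uint8)

        def hamming(u, v):
            return int(np.sum(u != v))

        rng = np.random.default_rng(1)
        planes = rng.standard_normal((32, 128))  # 32-bit signatures, 128-d data

        a = rng.standard_normal(128)
        b = a + 0.1 * rng.standard_normal(128)   # near-duplicate of a
        c = rng.standard_normal(128)             # unrelated vector

        print(hamming(lsh_signature(a, planes), lsh_signature(b, planes)))  # small (~1)
        print(hamming(lsh_signature(a, planes), lsh_signature(c, planes)))  # ~16 on average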