
    Severe Scene Learning Impairment, but Intact Recognition Memory, after Cholinergic Depletion of Inferotemporal Cortex Followed by Fornix Transection

    To examine the generality of cholinergic involvement in visual memory in primates, we trained macaque monkeys either on an object-in-place scene learning task or on delayed nonmatching-to-sample (DNMS). Each monkey received either selective cholinergic depletion of inferotemporal cortex (including the entorhinal cortex and perirhinal cortex) with injections of the immunotoxin ME20.4-saporin or saline injections as a control and was postoperatively retested. Cholinergic depletion of inferotemporal cortex was without effect on either task. Each monkey then received fornix transection, because previous studies have shown that multiple disconnections of temporal cortex can produce synergistic impairments in memory. Fornix transection mildly impaired scene learning in monkeys that had received saline injections but severely impaired scene learning in monkeys that had received cholinergic lesions of inferotemporal cortex. This synergistic effect was not seen in monkeys performing DNMS. These findings confirm a synergistic interaction, in a macaque monkey model of episodic memory, between connections carried by the fornix and cholinergic input to the inferotemporal cortex. They support the notion that the mnemonic functions tapped by scene learning and DNMS have dissociable neural substrates. Finally, cholinergic depletion of inferotemporal cortex, in this study, appears insufficient to impair memory functions dependent on an intact inferotemporal cortex.

    Fine-Scale Spatial Organization of Face and Object Selectivity in the Temporal Lobe: Do Functional Magnetic Resonance Imaging, Optical Imaging, and Electrophysiology Agree?

    The spatial organization of the brain's object and face representations in the temporal lobe is critical for understanding high-level vision and cognition but is poorly understood. Recently, exciting progress has been made using advanced imaging and physiology methods in humans and nonhuman primates, and the combination of such methods may be particularly powerful. Studies applying these methods help us to understand how neuronal activity, optical imaging, and functional magnetic resonance imaging signals are related within the temporal lobe, and to uncover the fine-grained and large-scale spatial organization of object and face representations in the primate brain.

    Latency and Selectivity of Single Neurons Indicate Hierarchical Processing in the Human Medial Temporal Lobe

    Neurons in the temporal lobe of both monkeys and humans show selective responses to classes of visual stimuli and even to specific individuals. In this study, we investigate the latency and selectivity of visually responsive neurons recorded from microelectrodes in the parahippocampal cortex, entorhinal cortex, hippocampus, and amygdala of human subjects during a visual object presentation task. During 96 experimental sessions in 35 subjects, we recorded from a total of 3278 neurons. Of these units, 398 responded selectively to one or more of the presented stimuli. Mean response latencies were substantially longer than those reported in monkeys. We observed a highly significant correlation between the latency and the selectivity of these neurons: the longer the latency, the greater the selectivity. In particular, parahippocampal neurons were found to respond significantly earlier and less selectively than those in the other three regions. Regional analysis showed significant correlations between latency and selectivity within the parahippocampal cortex, entorhinal cortex, and hippocampus, but not within the amygdala. The later and more selective responses tended to be generated by cells with sparse baseline firing rates, and vice versa. Our results provide direct evidence for hierarchical processing of sensory information at the interface between the visual pathway and the limbic system, by which increasingly refined and specific representations of stimulus identity are generated over time along the anatomical pathways of the medial temporal lobe.

    Learning alters theta-nested gamma oscillations in inferotemporal cortex

    How coupled brain rhythms influence cortical information processing to support learning is unresolved. Local field potential and neuronal activity recordings from 64-electrode arrays in sheep inferotemporal cortex showed that visual discrimination learning increased the amplitude of theta oscillations during stimulus presentation. Coupling between theta and gamma oscillations, the theta/gamma ratio, and the regularity of theta phase were also increased, but neuronal firing rates were not. A neural network model with fast and slow inhibitory interneurons was developed that generated theta-nested gamma. By increasing N-methyl-D-aspartate receptor sensitivity, similar learning-evoked changes could be produced. The model revealed that altered theta-nested gamma could potentiate downstream neuron responses by temporal desynchronization of excitatory neuron output, independent of changes in overall firing frequency. This learning-associated desynchronization was also exhibited by inferotemporal cortex neurons. Changes in theta-nested gamma may therefore facilitate learning-associated potentiation by temporal modulation of neuronal firing.
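    As a minimal illustration of what "theta-nested gamma" means in practice, the sketch below builds a synthetic signal in which gamma-band amplitude is modulated by the phase of a slower theta rhythm. This is a toy signal only, not the fast/slow interneuron network model described in the abstract; the frequencies, sampling rate, and noise level are arbitrary assumptions.

```python
# Toy theta-nested gamma signal: gamma amplitude tied to theta phase.
# Synthetic illustration only; not the authors' interneuron network model.
import numpy as np

fs = 1000.0                                      # sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)
theta = np.sin(2 * np.pi * 7 * t)                # ~7 Hz theta rhythm
gamma_env = 0.5 * (1 + theta)                    # gamma envelope follows theta phase
gamma = gamma_env * np.sin(2 * np.pi * 60 * t)   # ~60 Hz gamma, nested in theta
lfp = theta + gamma + 0.1 * np.random.randn(t.size)

# Crude coupling index: correlation between theta and the gamma envelope
print(np.corrcoef(theta, gamma_env)[0, 1])
```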

    Ventral-stream-like shape representation : from pixel intensity values to trainable object-selective COSFIRE models

    Keywords: hierarchical representation, object recognition, shape, ventral stream, vision and scene understanding, robotics, handwriting analysis
    The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses), and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts, and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective at recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors that are conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms.
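    The combination step described above (a weighted geometric mean of blurred and shifted vertex-detector responses) can be sketched roughly as follows. The function name, the tuple format, and the blur rule sigma = sigma0 + alpha * rho are illustrative assumptions, not taken from the authors' implementation.

```python
# Rough sketch of an S-COSFIRE-style response: weighted geometric mean of
# blurred and shifted vertex-detector response maps. Names and the blur rule
# are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def s_cosfire_response(vertex_maps, tuples, sigma0=1.0, alpha=0.1):
    """vertex_maps: dict of vertex-detector id -> 2D response map.
    tuples: list of (detector_id, dx, dy, weight) describing the learned
    arrangement of contour-based features around the filter centre."""
    responses, weights = [], []
    for det_id, dx, dy, w in tuples:
        rho = np.hypot(dx, dy)
        sigma = sigma0 + alpha * rho                      # blur grows with distance
        blurred = gaussian_filter(vertex_maps[det_id], sigma)
        shifted = nd_shift(blurred, (-dy, -dx), order=1)  # move feature to centre
        responses.append(shifted)
        weights.append(w)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # Weighted geometric mean across the selected, blurred, shifted responses
    stacked = np.clip(np.stack(responses), 1e-12, None)
    return np.exp(np.tensordot(weights, np.log(stacked), axes=1))
```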

    From Stereogram to Surface: How the Brain Sees the World in Depth

    When we look at a scene, how do we consciously see surfaces infused with lightness and color at the correct depths? Random Dot Stereograms (RDS) probe how binocular disparity between the two eyes can generate such conscious surface percepts. Dense RDS do so despite the fact that they include multiple false binocular matches. Sparse stereograms do so even across large contrast-free regions with no binocular matches. Stereograms that define occluding and occluded surfaces lead to surface percepts wherein partially occluded textured surfaces are completed behind occluding textured surfaces at a spatial scale much larger than that of the texture elements themselves. Earlier models suggest how the brain detects binocular disparity, but not how RDS generate conscious percepts of 3D surfaces. A neural model predicts how the layered circuits of visual cortex generate these 3D surface percepts using interactions between visual boundary and surface representations that obey complementary computational rules.
    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (EIA-01-30851, SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Brain Learning and Recognition: The Large and the Small of It in Inferotemporal Cortex

    Anterior inferotemporal cortex (ITa) plays a key role in visual object recognition. Recognition is tolerant to object position, size, and view changes, yet recent neurophysiological data show ITa cells with high object selectivity often have low position tolerance, and vice versa. A neural model learns to simulate both this tradeoff and ITa responses to image morphs using large-scale and small-scale IT cells whose population properties may support invariant recognition.
    CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)

    Sparse visual models for biologically inspired sensorimotor control

    Given the importance of using resources efficiently in the competition for survival, it is reasonable to think that natural evolution has discovered efficient cortical coding strategies for representing natural visual information. Sparse representations have intrinsic advantages in terms of fault tolerance and potentially low power consumption, and can therefore be attractive for robot sensorimotor control and decision-making. Inspired by the mammalian brain and its visual ventral pathway, we present in this paper a hierarchical sparse coding network architecture that extracts visual features for use in sensorimotor control. Testing with natural images demonstrates that this sparse coding facilitates processing and learning in subsequent layers. Previous studies have shown how the responses of complex cells could be sparsely represented by a higher-order neural layer. Here we extend sparse coding to each network layer, showing that detailed modeling of earlier stages in the visual pathway enhances the characteristics of the receptive fields developed in subsequent stages. The resulting network is more dynamic, with richer and more biologically plausible input and output representations.
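    The abstract invokes sparse coding without specifying an algorithm; as a generic, self-contained illustration of the principle, the sketch below codes an image patch against a dictionary using ISTA (iterative soft-thresholding). The dictionary, patch, and parameters are random placeholders, not the hierarchical network described above.

```python
# Generic sparse coding by ISTA; placeholder data, not the authors' network.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA.
    D: (n_pixels, n_atoms) dictionary, x: (n_pixels,) image patch."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Usage: code a random patch with a random, column-normalised dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
a = sparse_code(D, rng.standard_normal(64))
print(f"{np.count_nonzero(a)} of {a.size} coefficients active")
```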

    Visualizing classification of natural video sequences using sparse, hierarchical models of cortex.

    Recent work on hierarchical models of visual cortex has reported state-of-the-art accuracy on whole-scene labeling using natural still imagery. This raises the question of whether the reported accuracy may be due to the sophisticated, non-biological back-end supervised classifiers typically used (support vector machines) and/or the limited number of images used in these experiments. In particular, is the model classifying features from the object or the background? Previous work (Landecker, Brumby, et al., COSYNE 2010) proposed tracing the spatial support of a classifier's decision back through a hierarchical cortical model to determine which parts of the image contributed to the classification, and comparing these regions with the positions of objects in the scene. In this way, we can go beyond standard measures of accuracy to provide tools for visualizing and analyzing high-level object classification. We now describe new work exploring the extension of these ideas to detection of objects in video sequences of natural scenes.
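    A toy sketch of the back-tracing idea follows: a single max-pooling stage remembers which input location won each pool, so the pooled units that drive a linear decision can be mapped back to image coordinates. The one-stage toy model, function names, and random data are assumptions for illustration; the cited work traces support through a full hierarchical cortical model.

```python
# Toy back-tracing of a decision through one max-pooling stage.
# Illustrative only; not the cited hierarchical cortical model.
import numpy as np

def max_pool_with_argmax(fmap, size):
    """Pool a 2D feature map, remembering which input cell won each pool."""
    H, W = fmap.shape
    pooled = np.zeros((H // size, W // size))
    argmax = {}
    for i in range(H // size):
        for j in range(W // size):
            block = fmap[i*size:(i+1)*size, j*size:(j+1)*size]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            pooled[i, j] = block[r, c]
            argmax[(i, j)] = (i*size + r, j*size + c)
    return pooled, argmax

def trace_support(w, pooled, argmax, k=5):
    """Map the k pooled units with the largest contribution w * activation
    back to the input locations that produced them."""
    contrib = w * pooled
    flat_order = np.argsort(contrib, axis=None)[::-1][:k]
    coords = zip(*np.unravel_index(flat_order, contrib.shape))
    return [argmax[(int(i), int(j))] for i, j in coords]

fmap = np.random.rand(8, 8)                  # stand-in for a mid-level feature map
pooled, argmax = max_pool_with_argmax(fmap, 2)
w = np.random.rand(*pooled.shape)            # stand-in for linear classifier weights
print(trace_support(w, pooled, argmax))      # top supporting input locations
```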