8 research outputs found

    Contextual-Dependent Attention Effect on Crowded Orientation Signals in Human Visual Cortex

    A target becomes hard to identify when it is surrounded by nearby visual stimuli. This phenomenon, known as crowding, places a fundamental limit on conscious perception and object recognition. To understand the neural representation of crowded stimuli, we used fMRI and a forward encoding model to reconstruct the target-specific feature from multivoxel activation patterns evoked by orientation patches. Orientation-selective response profiles were constructed in V1–V4 for a target embedded in different contexts. Subjects of both sexes either directed their attention over all the orientation patches or selectively to the target. In the context with a weak crowding effect, attending to the target enhanced the orientation selectivity of the response profile, and this effect increased along the visual pathway. In the context with a strong crowding effect, attending to the target enhanced the orientation selectivity of the response profile in earlier visual areas, but not in V4. The increase and decrease of orientation selectivity along the visual hierarchy demonstrate a context-dependent attention effect on crowded orientation signals: in the context with a weak crowding effect, selective attention gradually resolves the target from nearby distractors along the hierarchy; in the context with a strong crowding effect, selective attention maintains the target feature in earlier visual areas, but its effect decreases in downstream areas. Our findings reveal how the human visual system represents the target-specific feature at multiple stages under the limit of attentional selection in a cluttered scene.
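The forward encoding (and inversion) approach described above can be sketched in a toy simulation: model each voxel's response as a weighted sum of idealized orientation channels, estimate the channel-to-voxel weights by least squares, then invert the fitted model to reconstruct a channel response profile from new multivoxel patterns. This is a minimal illustration of the general technique, not the study's actual pipeline; the channel count, tuning width, noise level, and all data below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 orientation channels tiling 0-180 deg, 50 voxels
n_channels, n_voxels = 6, 50
centers = np.arange(n_channels) * 180.0 / n_channels   # 0, 30, ..., 150

def channel_responses(oris_deg):
    """Idealized channel tuning: 180-deg-periodic rectified cosine,
    raised to a power to narrow the tuning."""
    d = np.deg2rad(oris_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0.0) ** 5

# Simulated training data: voxel pattern = channel responses @ weights + noise
train_oris = rng.uniform(0, 180, 120)
C_train = channel_responses(train_oris)               # trials x channels
W_true = rng.normal(size=(n_channels, n_voxels))      # channel -> voxel map
B_train = C_train @ W_true + 0.1 * rng.normal(size=(120, n_voxels))

# Step 1 (encoding): estimate the weights by least squares, B ~ C @ W
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2 (inversion): reconstruct channel response profiles for new patterns
test_oris = np.array([60.0, 90.0])
B_test = channel_responses(test_oris) @ W_true + 0.1 * rng.normal(size=(2, n_voxels))
C_hat = B_test @ np.linalg.pinv(W_hat)                # trials x channels

# The reconstructed profile should peak near each test orientation; sharper,
# better-centered profiles correspond to higher orientation selectivity.
print(centers[np.argmax(C_hat, axis=1)])
```

The sharpness of the reconstructed profile around the true orientation is the kind of selectivity measure that attention could modulate in V1–V4.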

    Representation of multiple objects in macaque category-selective areas

    Object recognition in the natural world usually occurs in the presence of multiple surrounding objects, but responses of neurons in inferotemporal (IT) cortex, the large brain area responsible for object recognition, have mostly been studied with isolated objects. We study the rules governing responses to multiple objects by cells in two category-selective regions of macaque IT cortex, the middle lateral face patch (ML) and the middle body patch (MB). We find that responses of single ML and MB cells to pairs of objects can be explained by the widely accepted framework of normalization, with one added ingredient: homogeneous category selectivity of the neighboring neurons forming the normalization pool. This rule leads to winner-take-all, contralateral-take-all, or weighted-averaging behavior in single cells, depending on the category, spatial configuration, and relative contrast of the two objects. The winner-take-all behavior suggests a potential mechanism for clutter-invariant representation of faces and bodies under certain conditions.
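The normalization rule with a category-homogeneous pool can be illustrated with a toy contrast-weighted normalization model. This is a hedged sketch of the general idea, not the paper's fitted model: the function, parameter names (`d`, `c`, `w`, `sigma`), and all numeric values are invented for the example. The key ingredient is that an object from the cell's non-preferred category barely excites the local normalization pool (small `w`), so it suppresses the response much less than an equally contrasty preferred-category object would.

```python
def pair_response(d1, d2, c1, c2, w1=1.0, w2=1.0, sigma=0.1):
    """Contrast-weighted normalization of a cell's response to an object pair.

    d1, d2: feedforward drive each object alone gives this cell
    c1, c2: relative contrast of each object
    w1, w2: how strongly each object excites the local normalization pool;
            with a category-homogeneous pool, a non-preferred-category
            object contributes little (small w).
    sigma:  semi-saturation constant.
    """
    return (c1 * d1 + c2 * d2) / (c1 * w1 + c2 * w2 + sigma)

# Preferred object (e.g. a face, for an ML cell) presented alone
face_alone = pair_response(10.0, 0.0, 1.0, 0.0)      # ~9.1

# Heterogeneous pool: both objects drive normalization -> weighted averaging,
# the pair response falls between the two single-object responses
averaging = pair_response(10.0, 2.0, 1.0, 1.0)       # ~5.7

# Homogeneous pool: the body barely drives a face patch's pool (w2 small),
# so the pair response approaches the face-alone response -> winner-take-all
winner = pair_response(10.0, 2.0, 1.0, 1.0, w2=0.1)  # ~10.0
print(face_alone, averaging, winner)
```

Varying `c1`/`c2` in the same function shifts the weighting toward the higher-contrast object, matching the contrast dependence described in the abstract.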

    The representation of colored objects in macaque color patches

    An important question about color vision is how the brain represents the color of an object. The recent discovery of “color patches” in macaque inferotemporal (IT) cortex, the part of the brain responsible for object recognition, makes this problem experimentally tractable. Here we recorded neurons in three color patches, the middle color patch CLC (central lateral color patch) and two anterior color patches, ALC (anterior lateral color patch) and AMC (anterior medial color patch), while presenting images of objects systematically varied in hue. We found that all three patches contain high concentrations of hue-selective cells and that the three patches use distinct computational strategies to represent colored objects: while all three patches multiplex hue and shape information, shape-invariant hue information is much stronger in the anterior color patches ALC/AMC than in CLC. Furthermore, hue and object shape specific to primate faces/bodies are over-represented in AMC, but not in the other two patches.

    Hemifield columns co-opt ocular dominance column structure in human achiasma

    In the absence of an optic chiasm, visual input to the right eye is represented in primary visual cortex (V1) in the right hemisphere, while visual input to the left eye activates V1 in the left hemisphere. Retinotopic mapping in V1 reveals that, in each hemisphere, the left and right visual hemifield representations are overlaid (Hoffmann et al., 2012). To explain how overlapping hemifield representations in V1 do not impair vision, we tested the hypothesis that visual projections from the nasal and temporal retina create interdigitated left and right visual hemifield representations in V1, similar to the ocular dominance columns observed in neurotypical subjects (Victor et al., 2000). We used high-resolution fMRI at 7 T to measure the spatial distribution of responses to left- and right-hemifield stimulation in one achiasmic subject. T2-weighted 2D spin-echo images were acquired at 0.8 mm isotropic resolution. The left eye was occluded. To the right eye, flickering checkerboards were presented, alternating between the left and right visual fields in a blocked stimulus design. The participant performed a demanding orientation-discrimination task at fixation. A general linear model was used to estimate the preference of voxels in V1 for left- versus right-hemifield stimulation. The spatial distribution of voxels with a significant preference for each hemifield showed interdigitated clusters that densely packed V1 in the right hemisphere. The spatial distribution of hemifield-preference voxels in the achiasmic subject was stable between two days of testing and comparable in scale to that of human ocular dominance columns. These results are the first in vivo evidence showing that visual hemifield representations interdigitate in achiasmic V1, following a developmental course similar to that of ocular dominance columns in V1 with an intact optic chiasm.
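The GLM step described above, estimating each voxel's hemifield preference from a blocked design, can be sketched as an ordinary least-squares fit of left- and right-block regressors to each voxel's time series. This is a toy simulation, not the study's analysis: block length, TR count, noise level, and the two simulated voxels are invented, and HRF convolution is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trs = 100

# Blocked design: alternating 10-TR left- and right-hemifield stimulation
left = np.zeros(n_trs)
right = np.zeros(n_trs)
for start in range(0, n_trs, 20):
    left[start:start + 10] = 1.0
    right[start + 10:start + 20] = 1.0
X = np.column_stack([left, right, np.ones(n_trs)])   # regressors + intercept

# Simulated voxel time series: one left-preferring, one right-preferring
y_left_voxel = 2.0 * left + 0.3 * right + rng.normal(0, 0.5, n_trs)
y_right_voxel = 0.3 * left + 2.0 * right + rng.normal(0, 0.5, n_trs)
Y = np.column_stack([y_left_voxel, y_right_voxel])

# GLM fit: one beta per regressor per voxel;
# hemifield preference = beta_left - beta_right
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
preference = betas[0] - betas[1]
print(np.sign(preference))   # positive = left-preferring, negative = right
```

Mapping the sign of `preference` across voxels is what reveals the interdigitated cluster pattern reported in the abstract.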

    An object-topic map in primate inferotemporal cortex


    Toward next-generation primate neuroscience: A collaboration-based strategic plan for integrative neuroimaging

    Open science initiatives are creating opportunities to increase research coordination and impact in nonhuman primate (NHP) imaging. The PRIMatE Data and Resource Exchange community recently developed a collaboration-based strategic plan to advance NHP imaging as an integrative approach for multiscale neuroscience.