
    The Representation of Parts and Wholes in Face-selective Cortex

    Although face perception is often characterized as depending on holistic, rather than part-based, processing, there is behavioral evidence for independent representations of face parts. Recent work has linked "face-selective" regions defined with functional magnetic resonance imaging (fMRI) to holistic processing, but the response of these areas to face parts remains unclear. Here we examine part-based versus holistic processing in "face-selective" visual areas using face stimuli manipulated in binocular disparity to appear either behind or in front of a set of stripes [Nakayama, K., Shimojo, S., & Silverman, G. H. Stereoscopic depth: Its relation to image segmentation, grouping, and the recognition of occluded objects. Perception, 18, 55–68, 1989]. While the former will be "filled in" by the visual system and perceived holistically, we demonstrate behaviorally that the latter cannot be completed amodally and thus is perceived as parts. Using these stimuli in fMRI, we found significant responses to both depth manipulations in inferior occipital gyrus and middle fusiform gyrus (MFG) "face-selective" regions, suggesting that neural populations in these areas encode both parts and wholes. In comparison, applying these depth manipulations to control stimuli (alphanumeric characters) elicited much smaller signal changes within face-selective regions, indicating that the part-based representation for faces is separate from that for objects. The combined adaptation data also showed an interaction of depth and familiarity within the right MFG, with greater adaptation in the back (holistic) condition relative to parts for familiar but not unfamiliar faces. Together, these data indicate that face-selective regions of occipito-temporal cortex engage in both part-based and holistic processing. The relative recruitment of such representations may be additionally influenced by external factors such as familiarity.

    fMRI Activity in Posterior Parietal Cortex Relates to the Perceptual Use of Binocular Disparity for Both Signal-In-Noise and Feature Difference Tasks.

    Visually guided action and interaction depend on the brain's ability to (a) extract and (b) discriminate meaningful targets from complex retinal inputs. Binocular disparity is known to facilitate this process, and it is an open question how activity in different parts of the visual cortex relates to these fundamental visual abilities. Here we examined fMRI responses related to performance on two different tasks (signal-in-noise "coarse" and feature difference "fine" tasks) that have been widely used in previous work, and are believed to differentially target the visual processes of signal extraction and feature discrimination. We used multi-voxel pattern analysis to decode depth positions (near vs. far) from the fMRI activity evoked while participants were engaged in these tasks. To look for similarities between perceptual judgments and brain activity, we constructed 'fMR-metric' functions that described decoding performance as a function of signal magnitude. Thereafter we compared fMR-metric and psychometric functions, and report an association between judged depth and fMRI responses in the posterior parietal cortex during performance on both tasks. This highlights common stages of processing during perceptual performance on these tasks. This is the final version of the article. It first appeared from PLOS via http://dx.doi.org/10.1371/journal.pone.014069
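
    The 'fMR-metric' idea can be sketched with synthetic data: train a simple leave-one-out, nearest-centroid decoder on simulated voxel patterns for near vs. far, then tabulate decoding accuracy as a function of signal magnitude, analogous to a psychometric function. The classifier choice, data dimensions, and signal levels here are illustrative assumptions, not the study's actual pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def decode_accuracy(signal, n_trials=100, n_voxels=50):
        """Leave-one-out nearest-centroid decoding of near vs. far depth
        from simulated voxel patterns; `signal` scales the separation
        between the two depth conditions (synthetic data, for illustration)."""
        near = rng.normal(+signal, 1.0, (n_trials, n_voxels))
        far = rng.normal(-signal, 1.0, (n_trials, n_voxels))
        X = np.vstack([near, far])
        y = np.array([1] * n_trials + [0] * n_trials)
        correct = 0
        for i in range(len(y)):
            mask = np.arange(len(y)) != i  # hold out trial i
            c_near = X[mask & (y == 1)].mean(axis=0)
            c_far = X[mask & (y == 0)].mean(axis=0)
            pred = 1 if np.linalg.norm(X[i] - c_near) < np.linalg.norm(X[i] - c_far) else 0
            correct += pred == y[i]
        return correct / len(y)

    # An 'fMR-metric' function: decoding accuracy vs. signal magnitude,
    # directly analogous to a psychometric function of performance vs. signal.
    fmr_metric = {s: decode_accuracy(s) for s in (0.0, 0.1, 0.2, 0.4)}
    ```

    With this setup, accuracy sits near chance at zero signal and rises with signal magnitude, which is the shape the fMR-metric functions are compared against psychometric data.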

    The Representation of Object Distance: Evidence from Neuroimaging and Neuropsychology

    Perceived distance in two-dimensional (2D) images relies on monocular distance cues. Here, we examined the representation of perceived object distance using a continuous carry-over adaptation design for fMRI. The task was to look at photographs of objects and make a judgment as to whether or not the item belonged in the kitchen. Importantly, this task was orthogonal to the variable of interest: the object's perceived distance from the viewer. In Experiment 1, whole brain group analyses identified bilateral clusters in the superior occipital gyrus (approximately area V3/V3A) that showed parametric adaptation to relative changes in perceived distance. In Experiment 2, retinotopic analyses confirmed that area V3A/B reflected the greatest magnitude of response to monocular changes in perceived distance. In Experiment 3, we report that the functional activations overlap with the occipito-parietal lesions in a patient with impaired distance perception, showing that the same regions monitor implied (2D) and actual (three-dimensional) distance. These data suggest that distance information is automatically processed even when it is task-irrelevant and that this process relies on superior occipital areas in and around area V3A.

    Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects

    Our understanding of the neural underpinnings of perception is largely built upon studies employing 2-dimensional (2D) planar images. Here we used slow event-related functional imaging in humans to examine whether neural populations show a characteristic repetition-related change in haemodynamic response for real-world 3-dimensional (3D) objects, an effect commonly observed using 2D images. As expected, trials involving 2D pictures of objects produced robust repetition effects within classic object-selective cortical regions along the ventral and dorsal visual processing streams. Surprisingly, however, repetition effects were weak, if not absent, on trials involving the 3D objects. These results suggest that the neural mechanisms involved in processing real objects may therefore be distinct from those that arise when we encounter a 2D representation of the same items. These preliminary results suggest the need for further research with ecologically valid stimuli in other imaging designs to broaden our understanding of the neural mechanisms underlying human vision.
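
    Repetition effects of this kind are commonly summarized with a suppression index: the proportional drop in response from initial to repeated presentations. A minimal sketch, using hypothetical percent-signal-change values that are illustrative only and not taken from the study:

    ```python
    # Hypothetical per-condition response amplitudes (percent signal change);
    # these numbers are invented for illustration, not the study's results.
    responses = {
        "picture_initial": 1.20, "picture_repeated": 0.80,
        "object_initial":  1.25, "object_repeated":  1.20,
    }

    def repetition_suppression(initial, repeated):
        """Proportional drop in response on repeated relative to initial trials;
        larger values indicate a stronger repetition effect."""
        return (initial - repeated) / initial

    rs_pictures = repetition_suppression(responses["picture_initial"],
                                         responses["picture_repeated"])
    rs_objects = repetition_suppression(responses["object_initial"],
                                        responses["object_repeated"])
    ```

    On the pattern the abstract describes, the index would be sizeable for 2D pictures and near zero for real 3D objects.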

    Perceptual integration for qualitatively different 3-D cues in the human brain.

    The visual system's flexibility in estimating depth is remarkable: We readily perceive 3-D structure under diverse conditions from the seemingly random dots of a "magic eye" stereogram to the aesthetically beautiful, but obviously flat, canvasses of the Old Masters. Yet, 3-D perception is often enhanced when different cues specify the same depth. This perceptual process is understood as Bayesian inference that improves sensory estimates. Despite considerable behavioral support for this theory, insights into the cortical circuits involved are limited. Moreover, extant work tested quantitatively similar cues, reducing some of the challenges associated with integrating computationally and qualitatively different signals. Here we address this challenge by measuring fMRI responses to depth structures defined by shading, binocular disparity, and their combination. We quantified information about depth configurations (convex "bumps" vs. concave "dimples") in different visual cortical areas using pattern classification analysis. We found that fMRI responses in dorsal visual area V3B/KO were more discriminable when disparity and shading concurrently signaled depth, in line with the predictions of cue integration. Importantly, by relating fMRI and psychophysical tests of integration, we observed a close association between depth judgments and activity in this area. Finally, using a cross-cue transfer test, we found that fMRI responses evoked by one cue afford classification of responses evoked by the other. This reveals a generalized depth representation in dorsal visual cortex that combines qualitatively different information in line with 3-D perception.
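
    The Bayesian cue-integration account makes a concrete quantitative prediction: the fused estimate is a reliability-weighted average of the single-cue estimates, and its variance is lower than that of either cue alone. A minimal sketch of that computation (the numeric values are illustrative assumptions, not data from the study):

    ```python
    def fuse(mu_a, var_a, mu_b, var_b):
        """Reliability-weighted (maximum-likelihood) fusion of two depth cues.
        Each cue is weighted in inverse proportion to its variance, so the
        fused estimate is never less reliable than the better single cue."""
        w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
        w_b = 1 - w_a
        mu = w_a * mu_a + w_b * mu_b
        var = (var_a * var_b) / (var_a + var_b)  # combined variance
        return mu, var

    # e.g. disparity signals 10 cm of depth (variance 4) while shading
    # signals 14 cm (variance 8); both numbers are hypothetical.
    mu_fused, var_fused = fuse(10.0, 4.0, 14.0, 8.0)
    ```

    The fused estimate lands closer to the more reliable cue (disparity here), and its variance falls below both single-cue variances, which is the signature of integration that the fMRI discriminability analysis looks for.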

    Change blindness: eradication of gestalt strategies

    Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research, 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Integration of texture and disparity cues to surface slant in dorsal visual cortex.

    Reliable estimation of three-dimensional (3D) surface orientation is critical for recognizing and interacting with complex 3D objects in our environment. Human observers maximize the reliability of their estimates of surface slant by integrating multiple depth cues. Texture and binocular disparity are two such cues, but they are qualitatively very different. Existing evidence suggests that representations of surface tilt from each of these cues coincide at the single-neuron level in higher cortical areas. However, the cortical circuits responsible for (1) integration of such qualitatively distinct cues and (2) encoding the slant component of surface orientation have not been assessed. We tested for cortical responses related to slanted plane stimuli that were defined independently by texture, disparity, and combinations of these two cues. We analyzed the discriminability of functional MRI responses to two slant angles using multivariate pattern classification. Responses in visual area V3B/KO to stimuli containing congruent cues were more discriminable than those elicited by single cues, in line with predictions based on the fusion of slant estimates from component cues. This improvement was specific to congruent combinations of cues: incongruent cues yielded lower decoding accuracies, which suggests the robust use of individual cues in cases of large cue conflicts. These data suggest that area V3B/KO is intricately involved in the integration of qualitatively dissimilar depth cues.
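
    The fusion prediction tested in such studies is often formalized as quadratic summation: if slant estimates from texture and disparity are fused, the sensitivity (d') for the combined-cue stimulus should approach the root-sum-of-squares of the single-cue sensitivities, whereas merely switching between cues predicts no improvement over the better cue. A sketch with hypothetical d' values (illustrative only):

    ```python
    import math

    def predicted_fusion_dprime(d_texture, d_disparity):
        """Quadratic-summation prediction for combined-cue sensitivity
        under fusion of two independent cue estimates."""
        return math.sqrt(d_texture ** 2 + d_disparity ** 2)

    # Hypothetical single-cue decoding sensitivities, for illustration:
    d_pred = predicted_fusion_dprime(1.0, 1.2)
    ```

    Combined-cue discriminability at or above this prediction, but only for congruent cue pairings, is the pattern consistent with genuine fusion rather than independent use of each cue.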

    Late development of cue integration is linked to sensory fusion in cortex

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3–5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7–9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6–12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3–5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop.

    Brain networks underlying bistable perception

    Bistable stimuli, such as the Necker Cube, demonstrate that experience can change in the absence of changes in the environment. Such phenomena can be used to assess stimulus-independent aspects of conscious experience. The current study used resting state functional magnetic resonance imaging (rs-fMRI) to index stimulus-independent changes in neural activity to understand the neural architecture that determines dominance durations during bistable perception (using binocular rivalry and Necker cube stimuli). Anterior regions of the Superior Parietal Lobule (SPL) exhibited robust connectivity with regions of primary sensorimotor cortex. The strength of this region's connectivity with the striatum predicted shorter dominance durations during binocular rivalry, whereas its connectivity to pre-motor cortex predicted longer dominance durations for the Necker Cube. Posterior regions of the SPL, on the other hand, were coupled to associative cortex in the temporal and frontal lobes. The posterior SPL's connectivity to the temporal lobe predicted longer dominance during binocular rivalry. In conjunction with prior work, these data suggest that the anterior SPL contributes to perceptual rivalry through the inhibition of incongruent bottom-up information, whereas the posterior SPL influences rivalry by supporting the current interpretation of a bistable stimulus. Our data suggest that the functional connectivity of the SPL with regions of sensory, motor, and associative cortex allows it to regulate the interpretation of the environment that forms the focus of conscious attention at a specific moment in time.