
    Changing Human Visual Field Organization from Early Visual to Extra-Occipital Cortex

    BACKGROUND: The early visual areas have a clear topographic organization, such that adjacent parts of the cortical surface represent distinct yet adjacent parts of the contralateral visual field. We examined whether cortical regions outside occipital cortex show a similar organization. METHODOLOGY/PRINCIPAL FINDINGS: The BOLD responses to discrete visual field locations that varied in both polar angle and eccentricity were measured using two different tasks. As described previously, numerous occipital regions are both selective for the contralateral visual field and show topographic organization within that field. Extra-occipital regions are also selective for the contralateral visual field, but possess little (or no) topographic organization. A regional analysis demonstrates that this weak topography is not due to increased receptive field size in extra-occipital areas. CONCLUSIONS/SIGNIFICANCE: A number of extra-occipital areas are identified that are sensitive to visual field location. Neurons in these areas corresponding to different locations in the contralateral visual field do not demonstrate any regular or robust topographic organization, but appear instead to be intermixed on the cortical surface. This suggests a shift from processing that is predominantly local in visual space, in occipital areas, to global processing in extra-occipital areas. Global processing fits with a role for these extra-occipital areas in selecting a spatial locus for attention and/or eye movements.
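
    The regional analysis itself is not detailed in the abstract, but one common way to express the degree of topographic organization is to correlate cortical distance with visual-field distance across voxels. The sketch below is a toy illustration of that idea on simulated data; the function name, the Spearman-correlation measure, and the SciPy/NumPy usage are assumptions for illustration, not the study's method.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def topography_index(cortical_xy, visual_field_xy):
    """Spearman correlation between pairwise cortical-surface distances and
    pairwise distances between preferred visual-field positions. Values near 1
    indicate orderly topography; values near 0 indicate intermixed selectivity."""
    rho, _ = spearmanr(pdist(cortical_xy), pdist(visual_field_xy))
    return rho

# Toy example: 50 voxels with 2-D cortical coordinates and a preferred
# visual-field position (degrees of visual angle) for each; data are simulated.
rng = np.random.default_rng(0)
cortex = rng.uniform(0, 10, size=(50, 2))
orderly_prefs = cortex + rng.normal(0, 0.5, size=(50, 2))   # smooth retinotopic map
intermixed_prefs = rng.permutation(orderly_prefs)           # same preferences, scrambled

print(topography_index(cortex, orderly_prefs))     # high: occipital-like topography
print(topography_index(cortex, intermixed_prefs))  # near zero: extra-occipital-like
```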

    The influence of spatial pattern on visual short-term memory for contrast

    Several psychophysical studies of visual short-term memory (VSTM) have shown high-fidelity storage capacity for many properties of visual stimuli. In judgments of the spatial frequency of gratings, for example, discrimination performance does not decrease significantly, even for memory intervals of up to 30 s. For other properties, such as stimulus orientation and contrast, however, such “perfect storage” behavior is not found, although the reasons for this difference remain unresolved. Here, we report two experiments in which we investigated the nature of the representation of stimulus contrast in VSTM using spatially complex, two-dimensional random-noise stimuli. We addressed whether information about contrast per se is retained during the memory interval by using a test stimulus with the same spatial structure but either the same or the opposite local contrast polarity, with respect to the comparison (i.e., remembered) stimulus. We found that discrimination thresholds worsened steadily with increasing duration of the memory interval. Furthermore, performance was better when the test and comparison stimuli had the same local contrast polarity than when they were contrast-reversed. Finally, when a noise mask was introduced during the memory interval, its disruptive effect was maximal when the spatial configuration of its constituent elements was uncorrelated with that of the comparison and test stimuli. These results suggest that VSTM for contrast is closely tied to the spatial configuration of stimuli and is not transformed into a more abstract representation.
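
    For readers unfamiliar with how discrimination thresholds of this kind are estimated, the following minimal sketch fits a cumulative-Gaussian psychometric function to hypothetical proportion-correct data; the functional form, parameter values, and data points are illustrative assumptions, not those of the experiments reported here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(delta_c, threshold, slope):
    """Cumulative-Gaussian psychometric function for a two-alternative contrast
    discrimination judgment; chance performance is 0.5 and the 'threshold'
    parameter marks 75% correct."""
    return 0.5 + 0.5 * norm.cdf(delta_c, loc=threshold, scale=slope)

# Hypothetical proportion-correct data at several contrast increments for a
# single memory interval; a longer interval would shift the fitted threshold
# upward, which is the pattern the abstract reports.
delta_c = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
p_correct = np.array([0.52, 0.61, 0.76, 0.91, 0.98])

(threshold, slope), _ = curve_fit(psychometric, delta_c, p_correct, p0=[0.04, 0.02])
print(f"discrimination threshold ≈ {threshold:.3f} (contrast units)")
```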

    Expert Financial Advice Neurobiologically “Offloads” Financial Decision-Making under Risk

    BACKGROUND: Financial advice from experts is commonly sought during times of uncertainty. While the field of neuroeconomics has made considerable progress in understanding the neurobiological basis of risky decision-making, the neural mechanisms through which external information, such as advice, is integrated during decision-making are poorly understood. In the current experiment, we investigated the neurobiological basis of the influence of expert advice on financial decisions under risk. METHODOLOGY/PRINCIPAL FINDINGS: While undergoing fMRI scanning, participants made a series of financial choices between a certain payment and a lottery. Choices were made in two conditions: 1) advice from a financial expert about which choice to make was displayed (MES condition); and 2) no advice was displayed (NOM condition). Behavioral results showed a significant effect of expert advice. Specifically, probability weighting functions changed in the direction of the expert's advice. This was paralleled by neural activation patterns. Brain activations showing significant correlations with valuation (parametric modulation by value of lottery/sure win) were obtained in the absence of the expert's advice (NOM) in intraparietal sulcus, posterior cingulate cortex, cuneus, precuneus, inferior frontal gyrus and middle temporal gyrus. Notably, no significant correlations with value were obtained in the presence of advice (MES). These findings were corroborated by region of interest analyses. Neural equivalents of probability weighting functions showed significant flattening in the MES compared to the NOM condition in regions associated with probability weighting, including anterior cingulate cortex, dorsolateral PFC, thalamus, medial occipital gyrus and anterior insula. Finally, during the MES condition, significant activations in temporoparietal junction and medial PFC were obtained. CONCLUSIONS/SIGNIFICANCE: These results support the hypothesis that one effect of expert advice is to "offload" the calculation of value of decision options from the individual's brain.
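
    The probability weighting functions referred to above are often modeled with a one-parameter form; the sketch below shows how a lower curvature parameter produces the kind of mid-range flattening the abstract describes. The Tversky-Kahneman parameterization and the gamma values are illustrative assumptions, not estimates from this study.

```python
import numpy as np

def prob_weight(p, gamma):
    """One-parameter (Tversky-Kahneman) probability weighting function.
    gamma = 1 gives linear, objective weighting; the slope in the mid-range
    falls as gamma decreases, i.e. the function 'flattens'."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p = np.linspace(0.01, 0.99, 99)
w_no_advice = prob_weight(p, gamma=0.7)  # illustrative curvature without advice
w_advice = prob_weight(p, gamma=0.5)     # flatter mid-range, as described with advice
```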

    Dynamic Spatial Coding within the Dorsal Frontoparietal Network during a Visual Search Task

    To what extent are the left and right visual hemifields spatially coded in the dorsal frontoparietal attention network? In many experiments with neglect patients, the left hemisphere shows a contralateral hemifield preference, whereas the right hemisphere represents both hemifields. This pattern of spatial coding is often used to explain the right-hemispheric dominance of lesions causing hemispatial neglect. However, the pathophysiological mechanisms of hemispatial neglect are controversial because recent experiments on healthy subjects produced conflicting results regarding the spatial coding of visual hemifields. We used an fMRI paradigm that allowed us to distinguish two attentional subprocesses during a visual search task. Within either the left or the right hemifield, subjects first attended to stationary locations (spatial orienting) and then shifted their attentional focus to search for a target line. Dynamic changes in spatial coding of the left and right hemifields were observed within subregions of the dorsal frontoparietal network: during stationary spatial orienting, we found the well-known spatial pattern described above, with a bilateral hemifield representation in the right hemisphere and a contralateral preference in the left hemisphere. However, during search, the right hemisphere had a contralateral preference and the left hemisphere equally represented both hemifields. This finding leads to novel perspectives regarding models of visuospatial attention and hemispatial neglect.
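
    A simple way to summarize the hemifield coding described here is a laterality index computed from contralateral and ipsilateral responses. The sketch below uses purely hypothetical amplitudes to illustrate the reported pattern (contralateral bias versus bilateral representation) in each task phase; the index and all numbers are assumptions for illustration, not values from the study.

```python
import numpy as np

def hemifield_preference(contra, ipsi):
    """Laterality index for an ROI: +1 means purely contralateral coding,
    0 means both hemifields are represented equally."""
    return (float(contra) - float(ipsi)) / (float(contra) + float(ipsi))

# Hypothetical mean BOLD amplitudes (arbitrary units) for one frontoparietal
# ROI per hemisphere in each task phase; all numbers are illustrative only.
orienting = {"left_hem": hemifield_preference(contra=1.2, ipsi=0.4),    # contralateral bias
             "right_hem": hemifield_preference(contra=0.9, ipsi=0.8)}   # roughly bilateral
search = {"left_hem": hemifield_preference(contra=0.8, ipsi=0.75),      # roughly bilateral
          "right_hem": hemifield_preference(contra=1.1, ipsi=0.5)}      # contralateral bias
print(orienting, search)
```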

    Neural responses to Mooney images reveal a modular representation of faces in human visual cortex

    The way in which information about objects is represented in visual cortex remains controversial. One model of human object recognition posits that information is processed in modules, highly specialised for different categories of objects; an opposing model appeals to a distributed representation across a large network of visual areas. We addressed this debate by monitoring activity in face- and object-selective areas while human subjects viewed ambiguous face stimuli (Mooney faces). The measured neural response in the face-selective region of the fusiform gyrus was greater when subjects reported seeing a face than when they perceived the image as a collection of blobs. In contrast, there was no difference in magnetic resonance response between face-perceived and no-face-perceived events in either the face-selective voxels of the superior temporal sulcus or the object-selective voxels of the parahippocampal gyrus and lateral occipital complex. These results challenge the concept that the neural representation of faces is distributed and overlapping and suggest that the fusiform gyrus is tightly linked to the awareness of faces.
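
    As an illustration of the kind of ROI comparison the abstract describes, the sketch below runs a paired test on simulated per-subject responses for face versus no-face percepts; the subject count, values, and use of SciPy are assumptions, not the study's data or analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-subject mean responses (arbitrary units) in a face-selective
# fusiform ROI for trials where a face was reported versus trials where only
# blobs were seen; all values are simulated for illustration.
face_percept = rng.normal(1.0, 0.3, size=12)
noface_percept = rng.normal(0.6, 0.3, size=12)

# Paired comparison across subjects: a reliable difference here, together with
# no difference in STS and object-selective ROIs, is the pattern described above.
t, p = stats.ttest_rel(face_percept, noface_percept)
print(f"fusiform face vs. no-face percept: t = {t:.2f}, p = {p:.3f}")
```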

    Activity in the Fusiform Gyrus Predicts Conscious Perception of Rubin's Vase-face Illusion

    We localized regions in the fusiform gyrus and superior temporal sulcus that were more active when subjects viewed photographs of real faces than when they viewed complex inanimate objects, as well as other areas in the parahippocampal gyrus and the lateral occipital lobe that showed more activity during the presentation of nonface objects. Event-related functional magnetic resonance imaging was then used to monitor activity in these extrastriate visual areas while subjects viewed Rubin's vase–face stimulus and indicated switches in perception. Since the spontaneous shifts in interpretation were too rapid for direct correlation with hemodynamic responses, each reported percept (faces or vase) was prolonged by suddenly adding subtle local contrast gradients (embossing) to one side or the other of the figure–ground boundary, stabilizing the percept. Under these conditions, only face-selective areas in the fusiform gyrus responded more strongly during the perception of faces. To control for effects of the physical change to Rubin's stimulus (i.e., addition of embossing), we compared activity when the face contours were embossed after the subject had just reported the onset of perception of either faces or vase. The fusiform face area responded more strongly in the first condition, despite the fact that the physical stimulus sequences were identical. Moreover, on a trial-to-trial basis, the activity was statistically predictive of the subjects' responses, suggesting that the conscious perception of faces could be made explicit in this extrastriate visual area.
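
    The trial-to-trial prediction mentioned at the end can be illustrated with a simple cross-validated classifier relating single-trial ROI amplitude to the reported percept; the simulated data and the scikit-learn logistic-regression approach below are assumptions for illustration, not the analysis actually used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Simulated single-trial response amplitudes from a face-selective fusiform ROI
# and the percept reported on each trial (1 = faces, 0 = vase); face trials are
# generated with slightly higher responses. All data here are illustrative.
n_trials = 80
percept = rng.integers(0, 2, n_trials)
roi_amplitude = rng.normal(loc=0.5 * percept, scale=1.0).reshape(-1, 1)

# Cross-validated accuracy above chance is the sense in which trial-to-trial
# activity can be "statistically predictive" of the reported percept.
accuracy = cross_val_score(LogisticRegression(), roi_amplitude, percept, cv=5).mean()
print(f"percept decoding accuracy: {accuracy:.2f}")
```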

    Contribution of large scale biases in decoding of direction-of-motion from high-resolution fMRI data in human early visual cortex

    Previous studies have demonstrated that the perceived direction of motion of a visual stimulus can be decoded from the pattern of functional magnetic resonance imaging (fMRI) responses in occipital cortex using multivariate analysis methods (Kamitani and Tong, 2006). One possible mechanism for this is a difference in the sampling of direction-selective cortical columns between voxels, implying that information at a level smaller than the voxel size might be accessible with fMRI. Alternatively, multivariate analysis methods might be driven by the organization of neurons into clusters or even orderly maps at a much larger scale. To assess the possible sources of the direction selectivity observed in fMRI data, we tested how accuracy for classifying motion direction varied across different visual areas and subsets of voxels. To enable high spatial resolution functional MRI measurements (1.5 mm isotropic voxels), data were collected at 7 T. To test whether information about the direction of motion is represented at the scale of retinotopic maps, we looked at classification performance after combining data across different voxels within visual areas (V1–3 and MT+/V5) before training the multivariate classifier. A recent study has shown that orientation biases in V1 are both necessary and sufficient to explain classification of stimulus orientation (Freeman et al., 2011). Here, we combined voxels with similar visual field preference, as determined in separate retinotopy measurements, and observed that classification accuracy was preserved when averaging in this ‘retinotopically restricted’ way, compared to random averaging of voxels. This insensitivity to averaging of voxels (with similar visual angle preference) across substantial distances in cortical space suggests that there are large-scale biases at the level of retinotopic maps underlying our ability to classify direction of motion.
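
    A rough sketch of the voxel-averaging comparison described above: pool voxels by preferred visual-field angle (from a separate retinotopy measurement) or into random groups of similar size, then train a linear classifier on the pooled features. The data are simulated noise and the scikit-learn pipeline is an assumption, so both accuracies here sit at chance; with real data, preserved accuracy after retinotopic pooling versus degraded accuracy after random pooling would indicate the large-scale biases discussed.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def bin_average(patterns, bins):
    """Average voxel columns within each bin, yielding one feature per bin."""
    return np.column_stack([patterns[:, idx].mean(axis=1) for idx in bins if len(idx)])

def decode(features, labels):
    """Cross-validated accuracy of a linear classifier on direction labels."""
    return cross_val_score(LinearSVC(), features, labels, cv=5).mean()

# Toy data: trial-by-voxel response patterns, motion-direction labels, and each
# voxel's preferred polar angle from an independent retinotopy scan (simulated).
n_trials, n_voxels, n_bins = 100, 400, 20
patterns = rng.standard_normal((n_trials, n_voxels))
labels = rng.integers(0, 2, n_trials)              # two directions of motion
pref_angle = rng.uniform(0, 2 * np.pi, n_voxels)

# 'Retinotopically restricted' averaging: pool voxels with similar visual-field
# preference before training the classifier.
edges = np.linspace(0, 2 * np.pi, n_bins + 1)
retino_bins = [np.flatnonzero((pref_angle >= lo) & (pref_angle < hi))
               for lo, hi in zip(edges[:-1], edges[1:])]

# Control: random pools of comparable size.
random_bins = np.array_split(rng.permutation(n_voxels), n_bins)

acc_retino = decode(bin_average(patterns, retino_bins), labels)
acc_random = decode(bin_average(patterns, random_bins), labels)
print(acc_retino, acc_random)
```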