    The occipital place area represents the local elements of scenes

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties.
    Funding: National Institutes of Health (U.S.) (Grant EY013455)

    Neural codes for one’s own position and direction in a real-world “vista” environment

    Humans, like other animals, rely on accurate knowledge of their spatial position and facing direction to stay oriented in the surrounding space. Although previous neuroimaging studies demonstrated that scene-selective regions (the parahippocampal place area or PPA, the occipital place area or OPA, and the retrosplenial complex or RSC) and the hippocampus (HC) are implicated in coding position and facing direction within small- (room-sized) and large-scale navigational environments, little is known about how these regions represent these spatial quantities in a large open-field environment. Here, we used functional magnetic resonance imaging (fMRI) in humans to explore the neural codes for this navigationally relevant information while participants viewed images that varied in position and facing direction within a familiar, real-world circular square. We observed neural adaptation for repeated directions in the HC, even though no navigational task was required. Further, we found that knowledge of the environment interacts with PPA selectivity in encoding positions: individuals who needed more time to memorize positions in the square during a preliminary training task showed less neural attenuation in this scene-selective region. We also observed adaptation effects, which reflect the real distances between consecutive positions, in scene-selective regions but not in the HC. When examining the multi-voxel patterns of activity, we observed that both the scene-responsive regions and the HC encoded the two kinds of spatial information, and that RSC classification accuracy for positions was higher in individuals scoring higher on a self-report questionnaire of spatial abilities. Our findings provide new insight into how the human brain represents a real, large-scale “vista” space, demonstrating the presence of neural codes for position and direction in both scene-selective and hippocampal regions, and revealing the existence, in the former regions, of a map-like spatial representation reflecting real-world distances between consecutive positions.
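
    The distance-dependent adaptation effect reported here can be illustrated by relating trial-wise response amplitude to the real-world distance from the previously viewed position. The sketch below is a hypothetical illustration, not the authors' pipeline: the array names (positions, roi_betas), the toy values, and the use of a simple Pearson correlation are all assumptions.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical inputs: (x, y) coordinates (in meters) of the position
    # viewed on each trial, and the corresponding ROI response amplitude.
    positions = np.array([[0.0, 0.0], [4.0, 3.0], [4.0, 3.0], [10.0, 0.0]])
    roi_betas = np.array([1.20, 1.05, 0.70, 1.10])

    # Distance between consecutively viewed positions; the first trial has
    # no predecessor, so it is dropped from the analysis.
    step_dist = np.linalg.norm(np.diff(positions, axis=0), axis=1)

    # Adaptation predicts weaker responses after small steps (repetition of
    # a nearby position) and recovery after large steps, i.e. a positive
    # correlation between step distance and response amplitude.
    r, p = pearsonr(step_dist, roi_betas[1:])
    print(f"distance-response correlation: r={r:.2f}, p={p:.3f}")
    ```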

    Neural representation of geometry and surface properties in object and scene perception

    Multiple cortical regions are crucial for perceiving the visual world, yet the processes shaping representations in these regions are unclear. To address this issue, we must elucidate how perceptual features shape representations of the environment. Here, we explore how the weighting of different visual features affects neural representations of objects and scenes, focusing on the scene-selective parahippocampal place area (PPA) but additionally including the retrosplenial complex (RSC), occipital place area (OPA), lateral occipital (LO) area, fusiform face area (FFA), and occipital face area (OFA). Across three experiments, we examined functional magnetic resonance imaging (fMRI) activity while human observers viewed scenes and objects that varied in geometry (shape/layout) and surface properties (texture/material). Interestingly, we found equal sensitivity in the PPA for these properties within a scene, revealing that spatial selectivity alone does not drive activation within this cortical region. We also observed sensitivity to object texture in PPA, although not to the same degree as scene texture, and representations in PPA varied when objects were placed within scenes. We conclude that PPA may process surface properties in a domain-specific manner, and that the processing of scene texture and geometry is equally weighted in PPA and may be mediated by similar underlying neuronal mechanisms.

    Visual pathways from the perspective of cost functions and multi-task deep neural networks

    Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks, each with its own cost function. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks, and that this causes the emergence of a ventral pathway dedicated to vision for perception and a dorsal pathway dedicated to vision for action. To evaluate the functional organization of multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature-representation sharing towards higher-tier layers, while the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the proposed method can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units.
    Comment: 16 pages, 5 figures
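
    The unit-contribution analysis can be pictured with a small two-head network on a shared trunk. The sketch below is a hypothetical illustration only: the architecture, the gradient-based contribution measure, and the sharing threshold are assumptions made for demonstration, not the measure defined in the paper.

    ```python
    import torch
    import torch.nn as nn

    class TwoTaskNet(nn.Module):
        """Toy multi-task network: shared trunk, one head per task."""
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                                       nn.Linear(32, 32), nn.ReLU())
            self.head_a = nn.Linear(32, 1)  # task A (e.g., "perception")
            self.head_b = nn.Linear(32, 1)  # task B (e.g., "action")

        def forward(self, x):
            h = self.trunk(x)
            return self.head_a(h), self.head_b(h), h

    def sharing_index(model, x):
        """Fraction of trunk units contributing to BOTH tasks, where a
        unit's contribution to a task is the mean |d task-output / d unit|."""
        x = x.requires_grad_(True)
        out_a, out_b, h = model(x)
        grad_a = torch.autograd.grad(out_a.sum(), h, retain_graph=True)[0]
        grad_b = torch.autograd.grad(out_b.sum(), h)[0]
        contrib_a = grad_a.abs().mean(dim=0)
        contrib_b = grad_b.abs().mean(dim=0)
        # A unit counts as "shared" if its contribution to each task exceeds
        # half that task's median unit contribution (an arbitrary threshold).
        shared = ((contrib_a > 0.5 * contrib_a.median()) &
                  (contrib_b > 0.5 * contrib_b.median()))
        return shared.float().mean().item()

    model = TwoTaskNet()
    print(f"sharing index: {sharing_index(model, torch.randn(128, 64)):.2f}")
    ```

    Computed layer by layer on trained networks, a measure of this kind would show whether feature sharing falls off towards higher-tier layers, as the paper reports for unrelated tasks.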

    A data-driven approach to understanding the organization of high-level visual cortex

    The neural representation in scene-selective regions of human visual cortex, such as the PPA, has been linked to the semantic and categorical properties of the images. However, the extent to which patterns of neural response in these regions reflect more fundamental organizing principles is not yet clear. Existing studies generally employ stimulus conditions chosen by the experimenter, potentially obscuring the contribution of more basic stimulus dimensions. To address this issue, we used a data-driven approach to describe a large database of scenes (>100,000 images) in terms of their visual properties (orientation, spatial frequency, spatial location). K-means clustering was then used to select images from distinct regions of this feature space. Images in each cluster did not correspond to typical scene categories. Nevertheless, they elicited distinct patterns of neural response in the PPA. Moreover, the similarity of the neural response to different clusters in the PPA could be predicted by the similarity in their image properties. Interestingly, the neural response in the PPA was also predicted by perceptual responses to the scenes, but not by their semantic properties. These findings provide an image-based explanation for the emergence of higher-level representations in scene-selective regions of the human brain.
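
    The clustering step can be sketched as follows, assuming each scene has already been reduced to a feature vector summarizing its orientation, spatial frequency, and spatial location statistics. The random feature matrix, the number of clusters, and the 20-exemplars-per-cluster selection are illustrative assumptions, not the study's parameters.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the scene database: one row per image,
    # columns summarizing orientation, spatial frequency, and location.
    features = rng.normal(size=(10_000, 12))

    # Partition the feature space, then pick the images nearest each
    # cluster centre as exemplars for the fMRI stimulus set.
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
    dist = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)

    stimuli = {}
    for k in range(km.n_clusters):
        members = np.where(km.labels_ == k)[0]
        stimuli[k] = members[np.argsort(dist[members])[:20]]

    print({k: len(v) for k, v in stimuli.items()})  # 20 exemplars per cluster
    ```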

    Transcranial magnetic stimulation to the occipital place area biases gaze during scene viewing

    We can understand viewed scenes and extract task-relevant information within a few hundred milliseconds. This process is generally supported by three cortical regions that show selectivity for scene images: parahippocampal place area (PPA), medial place area (MPA) and occipital place area (OPA). Prior studies have focused on the visual information each region is responsive to, usually within the context of recognition or navigation. Here, we move beyond these tasks to investigate gaze allocation during scene viewing. Eye movements rely on a scene’s visual representation to direct saccades, and thus foveal vision. In particular, we focus on the contribution of OPA, which i) is located in occipito-parietal cortex, likely feeding information into parts of the dorsal pathway critical for eye movements, and ii) contains strong retinotopic representations of the contralateral visual field. Participants viewed scene images for 1034 ms while their eye movements were recorded. On half of the trials, a 500 ms train of five transcranial magnetic stimulation (TMS) pulses was applied to the participant’s cortex, starting at scene onset. TMS was applied to the right hemisphere over either OPA or the occipital face area (OFA), which also exhibits a contralateral visual field bias but shows selectivity for face stimuli. Participants generally made an overall left-to-right, top-to-bottom pattern of eye movements across all conditions. When TMS was applied to OPA, saccade latency increased for eye movements toward the contralateral relative to the ipsilateral visual field after the final TMS pulse (400 ms). Additionally, TMS to the OPA biased fixation positions away from the contralateral side of the scene compared to the control condition, while the OFA group showed no such effect. There was no effect on horizontal saccade amplitudes. These combined results suggest that OPA might serve to represent local scene information that can then be utilized by visuomotor control networks to guide gaze allocation in natural scenes.

    Retinotopic and lateralized processing of spatial frequencies in human visual cortex during scene categorization.

    Using large natural scenes filtered in spatial frequencies, we aimed to demonstrate that spatial frequency processing is not only retinotopically mapped but also lateralized across the two hemispheres. For this purpose, participants performed a categorization task using large black-and-white photographs of natural scenes (indoors vs. outdoors, with a visual angle of 24° × 18°) filtered in low spatial frequencies (LSF), high spatial frequencies (HSF), and nonfiltered scenes, in block-designed fMRI recording sessions. At the group level, the comparison between the spatial frequency content of scenes revealed first that, compared with HSF, LSF scene categorization elicited activation in the anterior half of the calcarine fissures, linked to the peripheral visual field, whereas, compared with LSF, HSF scene categorization elicited activation in the posterior part of the occipital lobes, linked to the fovea, consistent with the retinotopic organization of visual areas. At the individual level, functional activations projected on retinotopic maps revealed that LSF processing was mapped in the anterior part of V1, whereas HSF processing was mapped in the posterior and ventral parts of V2, V3, and V4. Moreover, at the group level, direct interhemispheric comparisons performed on the same fMRI data highlighted a right-sided occipito-temporal predominance for LSF processing and a left-sided temporal cortex predominance for HSF processing, in accordance with hemispheric specialization theories. By applying suitable methods of analysis to the same data, our results demonstrate for the first time that spatial frequency processing is both retinotopically mapped and lateralized in the human occipital cortex.
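
    LSF/HSF stimuli of this kind are typically produced by filtering in the Fourier domain. A minimal sketch follows, assuming Gaussian low- and high-pass filters; the cutoff frequencies and the 32 px/deg resolution are placeholders, not the study's actual parameters.

    ```python
    import numpy as np

    def sf_filter(image, cutoff_cpd, pixels_per_degree, kind="low"):
        """Gaussian low- or high-pass filter in the Fourier domain.
        cutoff_cpd is the cutoff in cycles/degree (placeholder values)."""
        h, w = image.shape
        fy = np.fft.fftfreq(h)[:, None] * pixels_per_degree  # cycles/degree
        fx = np.fft.fftfreq(w)[None, :] * pixels_per_degree
        radius = np.sqrt(fx**2 + fy**2)
        lowpass = np.exp(-(radius / cutoff_cpd) ** 2)
        gain = lowpass if kind == "low" else 1.0 - lowpass
        return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

    # Hypothetical grayscale scene spanning 24° × 18° at ~32 px/deg.
    scene = np.random.rand(576, 768)
    lsf = sf_filter(scene, cutoff_cpd=1.0, pixels_per_degree=32, kind="low")
    hsf = sf_filter(scene, cutoff_cpd=6.0, pixels_per_degree=32, kind="high")
    ```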

    Revealing Connections in Object and Scene Processing Using Consecutive TMS and fMR-Adaptation

    When processing the visual world, our brain must perform many computations that may occur across several regions. It is important to understand communication between regions in order to understand the perceptual processes underlying our processing of the environment. We sought to determine the connectivity of object- and scene-processing regions of the cortex, which is not fully established. To determine these connections, repetitive transcranial magnetic stimulation (rTMS) was paired with functional magnetic resonance adaptation (fMR-A). rTMS was applied to object-selective lateral occipital (LO) and scene-selective transverse occipital sulcus (TOS). Immediately after stimulation, participants underwent fMR-A, and pre- and post-TMS responses were compared. TMS disrupted responses in remote regions, revealing connections from LO and TOS to remote object- and scene-selective regions in occipital cortex. In addition, we report neural correlates of the transfer of object-related information between modalities, from LO beyond the ventral network to parietal and frontal areas.

    A common neural substrate for processing scenes and egomotion-compatible visual motion

    Neuroimaging studies have revealed two separate classes of category-selective regions specialized in optic flow (egomotion-compatible) processing and in scene/place perception. Despite the importance of both optic flow and scene/place recognition for estimating changes in position and orientation within the environment during self-motion, the possible functional link between egomotion- and scene-selective regions has not yet been established. Here we reanalyzed functional magnetic resonance images from a large sample of participants performing two well-known “localizer” fMRI experiments, consisting of passive viewing of navigationally relevant stimuli such as buildings and places (scene/place stimulus) and coherently moving fields of dots simulating the visual stimulation experienced during self-motion (flow fields). After interrogating the egomotion-selective areas with respect to the scene/place stimulus and the scene-selective areas with respect to flow fields, we found that the egomotion-selective areas V6+ and pIPS/V3A responded bilaterally more to scenes/places than to faces, and that all the scene-selective areas (parahippocampal place area or PPA, retrosplenial complex or RSC, and occipital place area or OPA) responded more to egomotion-compatible optic flow than to random motion. The conjunction analysis between scene/place and flow-field stimuli revealed that the most prominent focus of common activation was in the dorsolateral parieto-occipital cortex, spanning the scene-selective OPA and the egomotion-selective pIPS/V3A. Individual inspection of the relative locations of these two regions revealed a partial overlap and a similar response profile to an independent low-level visual motion stimulus, suggesting that OPA and pIPS/V3A may be part of a unique motion-selective complex specialized in encoding both egomotion- and scene-relevant information, likely for the control of navigation in a structured environment.
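
    Conjunction analyses of this kind are often implemented as a voxelwise minimum statistic across the two contrasts. A minimal sketch, assuming z-maps for the two localizer contrasts have already been computed; the array names and the threshold are illustrative assumptions, not the authors' exact pipeline.

    ```python
    import numpy as np

    # Hypothetical z-statistic maps (one value per voxel) for the two
    # localizer contrasts: scenes > faces, and coherent flow > random motion.
    z_scene = np.random.randn(50_000)
    z_flow = np.random.randn(50_000)

    # Minimum-statistic conjunction: a voxel counts as jointly active only
    # if BOTH contrasts exceed the (illustrative) threshold.
    z_thresh = 3.1
    conjunction = np.minimum(z_scene, z_flow) > z_thresh
    print(f"{conjunction.sum()} voxels active in both contrasts")
    ```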