
    Modality-independent coding of spatial layout in the human brain

    In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.

    The occipital place area represents the local elements of scenes

    Neuroimaging studies have identified three scene-selective regions in human cortex: the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least-studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements, in both spatial boundary and scene content representation, while PPA and RSC represent global scene properties. Funding: National Institutes of Health (U.S.) Grant EY013455.

    Patterns of neural response in scene-selective regions of the human brain are affected by low-level manipulations of spatial frequency

    Neuroimaging studies have found distinct patterns of response to different categories of scenes. However, the relative importance of low-level image properties in generating these response patterns is not fully understood. To address this issue, we directly manipulated the low-level properties of scenes in a way that preserved the ability to perceive the category. We then measured the effect of these manipulations on category-selective patterns of fMRI response in the PPA, RSC, and OPA. In Experiment 1, a horizontal-pass or vertical-pass orientation filter was applied to images of indoor and natural scenes. The image filter did not have a large effect on the patterns of response. For example, vertical- and horizontal-pass filtered indoor images generated similar patterns of response, and vertical- and horizontal-pass filtered natural scenes likewise generated similar patterns of response. In Experiment 2, low-pass or high-pass spatial frequency filters were applied to the images. We found that the image filter had a marked effect on the patterns of response in scene-selective regions. For example, low-pass indoor images generated patterns of response similar to those of low-pass natural images. The effect of the filter varied across different scene-selective regions, suggesting differences in the way that scenes are represented in these regions. These results indicate that patterns of response in scene-selective regions are sensitive to the low-level properties of the image, particularly the spatial frequency content.
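Low- and high-pass spatial frequency filtering of the kind described in Experiment 2 is commonly done in the frequency domain. The sketch below is a minimal illustration on a synthetic grayscale image; the cutoff, image size, and function name are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff, mode="low"):
    """Keep frequencies below (low-pass) or above (high-pass) a radial
    cutoff, expressed in cycles per image."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                # cycles per image, vertical
    fx = np.fft.fftfreq(w) * w                # cycles per image, horizontal
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff if mode == "low" else radius > cutoff
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Toy 128x128 "scene" standing in for a stimulus image
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
low = spatial_frequency_filter(img, cutoff=8, mode="low")
high = spatial_frequency_filter(img, cutoff=8, mode="high")
# The two bands are complementary, so they sum back to the original
assert np.allclose(low + high, img)
```

Because the two masks partition the spectrum, the final assertion holds exactly up to floating-point error, which is a quick sanity check that no frequency content was lost or duplicated.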

    A data driven approach to understanding the organization of high-level visual cortex

    The neural representation in scene-selective regions of human visual cortex, such as the PPA, has been linked to the semantic and categorical properties of the images. However, the extent to which patterns of neural response in these regions reflect more fundamental organizing principles is not yet clear. Existing studies generally employ stimulus conditions chosen by the experimenter, potentially obscuring the contribution of more basic stimulus dimensions. To address this issue, we used a data-driven approach to describe a large database of scenes (>100,000 images) in terms of their visual properties (orientation, spatial frequency, spatial location). K-means clustering was then used to select images from distinct regions of this feature space. Images in each cluster did not correspond to typical scene categories. Nevertheless, they elicited distinct patterns of neural response in the PPA. Moreover, the similarity of the neural response to different clusters in the PPA could be predicted by the similarity in their image properties. Interestingly, the neural response in the PPA was also predicted by perceptual responses to the scenes, but not by their semantic properties. These findings provide an image-based explanation for the emergence of higher-level representations in scene-selective regions of the human brain.
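The clustering step can be sketched with plain k-means over image feature vectors. The toy three-band descriptors, cluster count, and iteration budget below are illustrative assumptions, not the authors' actual feature space.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared distance from every point to every centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy "image descriptors": energy in three hypothetical
# orientation/spatial-frequency bands, 100 images per group
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(loc, 0.3, size=(100, 3))
                      for loc in (0.0, 2.0, 4.0)])
labels, centroids = kmeans(features, k=3)
```

Images would then be sampled from each `labels` group to build stimulus sets drawn from distinct regions of the feature space.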

    Neural Representations of a Real-World Environment

    The ability to represent the spatial structure of the environment is critical for successful navigation. Extensive research using animal models has revealed the existence of specialized neurons that appear to code for spatial information in their firing patterns. However, little is known about which regions of the human brain support representations of large-scale space. To address this gap in the literature, we performed three functional magnetic resonance imaging (fMRI) experiments aimed at characterizing the representations of locations, headings, landmarks, and distances in a large environment for which our subjects had extensive real-world navigation experience: their college campus. We scanned University of Pennsylvania students while they made decisions about places on campus and then tested for spatial representations using multivoxel pattern analysis and fMRI adaptation. In Chapter 2, we tested for representations of the navigator's current location and heading, information necessary for self-localization. In Chapter 3, we tested whether these location and heading representations were consistent across perception and spatial imagery. Finally, in Chapter 4, we tested for representations of landmark identity and the distances between landmarks. Across the three experiments, we observed that specific regions of medial temporal and medial parietal cortex supported long-term memory representations of navigationally relevant spatial information. These results serve to elucidate the functions of these regions and offer a framework for understanding the relationship between spatial representations in the medial temporal lobe and in high-level visual regions. We discuss our findings in the context of the broader spatial cognition literature, including implications for studies of both humans and animal models.
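Multivoxel pattern analyses of this kind are often implemented as correlation-based decoding across scanning runs: a condition counts as decoded when its pattern in one run correlates best with the same condition's pattern in another run. The sketch below uses simulated voxel patterns; the ROI size, noise level, and number of locations are illustrative assumptions, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 50

# Simulated ROI patterns: a stable template per campus location
# plus independent run-to-run noise
location_templates = rng.standard_normal((4, n_voxels))
run1 = location_templates + 0.5 * rng.standard_normal((4, n_voxels))
run2 = location_templates + 0.5 * rng.standard_normal((4, n_voxels))

# Correlation-based decoding: rows index run-1 patterns,
# columns index run-2 patterns
corr = np.corrcoef(run1, run2)[:4, 4:]
decoded = corr.argmax(axis=1)
accuracy = (decoded == np.arange(4)).mean()
```

With the signal-to-noise ratio assumed here, `accuracy` is expected to be at ceiling; with real fMRI data the same logic is applied voxel-wise within an anatomically or functionally defined ROI.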

    A common neural substrate for processing scenes and egomotion-compatible visual motion

    Neuroimaging studies have revealed two separate classes of category-selective regions specialized in optic flow (egomotion-compatible) processing and in scene/place perception. Despite the importance of both optic flow and scene/place recognition for estimating changes in position and orientation within the environment during self-motion, the possible functional link between egomotion- and scene-selective regions has not yet been established. Here we reanalyzed functional magnetic resonance images from a large sample of participants performing two well-known “localizer” fMRI experiments, consisting of passive viewing of navigationally relevant stimuli such as buildings and places (scene/place stimulus) and coherently moving fields of dots simulating the visual stimulation during self-motion (flow fields). After interrogating the egomotion-selective areas with respect to the scene/place stimulus and the scene-selective areas with respect to flow fields, we found that the egomotion-selective areas V6+ and pIPS/V3A responded bilaterally more to scenes/places compared to faces, and all the scene-selective areas (parahippocampal place area or PPA, retrosplenial complex or RSC, and occipital place area or OPA) responded more to egomotion-compatible optic flow compared to random motion. The conjunction analysis between scene/place and flow field stimuli revealed that the strongest focus of common activation was in the dorsolateral parieto-occipital cortex, spanning the scene-selective OPA and the egomotion-selective pIPS/V3A. Individual inspection of the relative locations of these two regions revealed a partial overlap and a similar response profile to an independent low-level visual motion stimulus, suggesting that OPA and pIPS/V3A may be part of a unique motion-selective complex specialized in encoding both egomotion- and scene-relevant information, likely for the control of navigation in a structured environment.
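A conjunction analysis of two contrasts is commonly implemented as a voxelwise minimum statistic: a voxel counts as jointly active only if it passes threshold in both maps. The sketch below uses simulated t-maps; the grid size, threshold, and planted overlap region are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (16, 16, 8)                        # toy voxel grid

t_scenes = rng.standard_normal(shape)      # contrast 1: scenes > faces
t_flow = rng.standard_normal(shape)        # contrast 2: flow > random motion

# Plant two partially overlapping active regions, one per contrast
t_scenes[4:8, 4:8, 2:5] += 6.0
t_flow[5:9, 4:8, 2:5] += 6.0

# Minimum-statistic conjunction: active only where BOTH maps pass threshold
threshold = 3.1
conjunction = np.minimum(t_scenes, t_flow) > threshold

# Jointly active voxels should fall in the planted overlap
assert conjunction.sum() >= 1
```

Taking the minimum of the two statistics before thresholding is equivalent to requiring both contrasts to survive the threshold independently, which is the logical AND the conjunction is meant to test.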

    Neural Representations of Self-Motion During Natural Scenes in the Human Brain

    Navigating through the environment is an important everyday task of the visual system. This task relies on processing of at least two visual cues: visual motion and scene content. Our sense of motion relies heavily on understanding and separating visual cues resulting from object motion and self-motion. Processing and understanding visual scenes is an equally ubiquitous task in our everyday environment. Together, motion and scene processing allow us to accomplish navigation tasks such as wayfinding and spatial updating. In terms of neural processing, both the regions involved in motion processing and the regions involved in scene processing have been studied in great detail. However, how motion regions are influenced by scene content, and how scene regions are involved in motion processing, has barely been addressed. To understand how self-motion and scene processing interact in the human brain, I completed a series of studies as part of this thesis. Using planar horizontal motion and visual scenes, the first study investigates motion responses of scene regions. The second study investigates whether eye-centered or world-centered reference frames are used during visual motion processing in scene regions, using objective ‘real’ motion and retinal motion during pursuit eye movements with natural scene stimuli. The third study investigates the effect of natural scene content during objective and retinal motion processing in motion regions. The last study investigates how motion speed is represented in motion regions during objective and retinal motion. Since many visual areas are optimized for natural visual stimuli, speed responses were tested on Fourier scrambles of natural scene images in order to provide natural scene statistics as visual input.
I found evidence that the scene-processing regions parahippocampal place area (PPA) and occipital place area (OPA) are motion responsive, while retrosplenial cortex (RSC) is not. In addition, the PPA's motion responses are modulated by scene content. With respect to reference frames, I found that the PPA prefers a world-centered reference frame while viewing dynamic scenes. The results from motion regions (MT/V5+, V3A, V6, and the cingulate sulcus visual area (CSv)) revealed that the motion responses of all of them are enhanced during exposure to scenes compared to Fourier scrambles, whereas only V3A also responded to static scenes. The last study showed that all motion-responsive regions tested (MT/V5, MST, V3A, V6, and CSv) are modulated by motion speed, but only V3A has a distinctly stronger speed tuning for objective compared to retinal motion. These results show that using natural scene stimuli is important when investigating self-motion responses in the human brain: many scene regions are modulated by motion, and one of them (the PPA) even differentiates objective motion from retinal motion. Conversely, many motion regions are modulated by scene content, and one of them (V3A) even responds to still scenes. Moreover, the objective-motion preference of V3A is even stronger at higher speeds. These results question a strict separation of ‘where’ and ‘what’ pathways and show that the scene region PPA and the motion region V3A have similar objective-motion and scene preferences.

    Neural correlates of implicit knowledge about statistical regularities

    In this study, we examined the neural correlates of implicit knowledge about statistical regularities of temporal order and item chunks using functional magnetic resonance imaging (fMRI). In a familiarization scan, participants viewed a stream of scenes consisting of structured triplets (i.e., three scenes always presented in the same order) and random triplets. In the subsequent test scan, participants were required to detect a target scene. Test sequences included both the forward order of scenes presented during the familiarization scan and the backward order of scenes (i.e., the reverse of the forward order). Behavioral results showed a learning effect of temporal order in the forward condition and of scene chunks in the backward condition. fMRI data from the familiarization scan showed a difference in activation between structured and random blocks in the left posterior cingulate cortex, including the retrosplenial cortex. More importantly, in the test scan, we observed activity in the left parietal lobe when participants detected target scenes based on temporal order information. In contrast, the left precuneus was activated when participants detected target scenes based on scene chunks. Our findings help clarify the brain mechanisms of implicit knowledge about acquired regularities.
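The familiarization design described above, with fixed-order triplets embedded among random ones, can be sketched as a sequence generator. The scene labels, block count, and alternation scheme below are illustrative assumptions, not the authors' exact stimulus protocol.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scene labels: two fixed (structured) triplets plus fillers
structured = [["A1", "A2", "A3"], ["B1", "B2", "B3"]]
fillers = ["X1", "X2", "X3", "X4", "X5", "X6"]

def make_stream(n_blocks):
    """Alternate a structured triplet (internal order always preserved)
    with a random triplet drawn from the filler scenes."""
    stream = []
    for _ in range(n_blocks):
        stream += structured[rng.integers(len(structured))]
        stream += list(rng.permutation(fillers)[:3])
    return stream

stream = make_stream(10)
# The defining regularity: every 'A1' is immediately followed by 'A2'
follows = all(stream[i + 1] == "A2"
              for i, s in enumerate(stream[:-1]) if s == "A1")
assert follows
```

The structured triplets carry predictable transition statistics while the filler triplets do not, which is what allows the learning effect to be attributed to the embedded regularities.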

    Revealing Connections in Object and Scene Processing Using Consecutive TMS and fMR-Adaptation

    When processing the visual world, our brain must perform many computations that may occur across several regions. It is important to understand communication between regions in order to understand the perceptual processes underlying processing of our environment. We sought to determine the connectivity of object- and scene-processing regions of the cortex, which is not fully established. To determine these connections, repetitive transcranial magnetic stimulation (rTMS) and functional magnetic resonance adaptation (fMR-A) were paired together. rTMS was applied to the object-selective lateral occipital area (LO) and the scene-selective transverse occipital sulcus (TOS). Immediately after stimulation, participants underwent fMR-A, and pre- and post-TMS responses were compared. TMS disrupted processing in remote regions, revealing connections from LO and TOS to remote object- and scene-selective regions in the occipital cortex. In addition, we report important neural correlates regarding the transfer of object-related information between modalities, from LO to regions outside the ventral network in parietal and frontal areas.