5 research outputs found

    Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories, filtered at different spatial frequencies, to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low- and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception.

    Real-world structure facilitates the rapid emergence of scene category information in visual brain signals

    In everyday life, our visual surroundings are not arranged randomly, but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged in atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly, within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for the intact scenes than for the jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside-down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure, rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments.

    Building Mental Experiences: From Scenes to Events

    Mental events are central to everyday cognition, be it our continuous perception of the world, recalling autobiographical memories, or imagining the future. Little is known about the fine-grained temporal dynamics of these processes. Given the apparent predominance of scene imagery across cognition, in this thesis I used magnetoencephalography to investigate whether and how activity in the hippocampus and ventromedial prefrontal cortex (vmPFC) supports the mental construction of scenes and the events to which they give rise. In the first experiment, participants gradually imagined scenes and closely matched non-scene arrays; this allowed me to assess whether any brain regions showed preferential responses to scene imagery. The anterior hippocampus and vmPFC were particularly engaged by the construction of scene imagery, with the vmPFC driving hippocampal activity. In the second experiment, I found that certain objects – those that were space-defining – preferentially engaged the vmPFC and superior temporal gyrus during scene construction, providing insight into how objects affect the creation of scene representations. The third experiment involved boundary extension during scene perception, permitting me to examine how single scenes might be prepared for inclusion into events. I observed changes in evoked responses just 12.5-58 ms after scene onset over fronto-temporal sensors, with the vmPFC again exerting a driving influence on other brain regions, including the hippocampus. In the final experiment, participants watched brief movies of events built from a series of scenes or non-scene patterns. A difference in evoked responses between the two event types emerged during the first frame of the movies, the primary source of which was shown to be the hippocampus. The enduring theme of the results across experiments was scene-specific engagement of the hippocampus and vmPFC, with the latter being the driving influence. Overall, this thesis provides insights into the neural dynamics of how scenes are built, made ready for inclusion into unfolding mental episodes, and then linked to produce our seamless experience of the world.

    Top-down amplification of predicted visual input behind a frosted occluder

    This thesis comprises five chapters. It includes two experimental chapters in which I detail both psychophysical and fMRI studies carried out at the University of Glasgow as part of this PhD project. These are followed by a literature review which outlines the implementation of ultra-high-resolution fMRI, both generally within the field and within a specific project proposal. Chapter 1 is a general introduction. I outline the broad organisation and basic functions of the visual system at the pre-cortical and cortical stages, in turn. I then discuss the concept of feedback within the visual system, outlining what feedback is, what it does, and how it is implemented, before outlining the rationale for the thesis. Chapter 2 is an experimental chapter detailing a series of psychophysical experiments. These experiments employ a partial occlusion paradigm to explore how top-down predicted information can influence the processing of degraded feedforward input. Throughout the experimental series, different aspects of this question are addressed in order to investigate whether the consistency of contextual information influences the detection and/or recognition of low-contrast visual scenes. Chapter 3 is another experimental chapter which details two 3T fMRI experiments. These projects also employed a partial occlusion paradigm to investigate contextual modulation of the processing of degraded feedforward input at the neuronal level in early visual cortex. Both univariate and multivariate analysis techniques were used to reveal the impact of consistency within top-down information. Chapter 4 contains a literature review which looks into ultra-high-resolution fMRI. Here, I detail the motivation behind the development of higher-resolution imaging as well as potential confounds and limitations. I also outline adaptations required at higher resolution in terms of data acquisition and analysis, and briefly explore layer-specific findings within the visual cortex. Finally, I propose a 7T fMRI project that would continue to explore the influence of top-down predictions on the processing of degraded visual input by expanding the investigation to a laminar level. Chapter 5 is a general discussion which summarises the key points from each of the previous chapters and briefly discusses their conceptual relation to the current field and beyond.