
    Patterns of neural response in scene-selective regions of the human brain are affected by low-level manipulations of spatial frequency

    Neuroimaging studies have found distinct patterns of response to different categories of scenes. However, the relative importance of low-level image properties in generating these response patterns is not fully understood. To address this issue, we directly manipulated the low-level properties of scenes in a way that preserved the ability to perceive the category. We then measured the effect of these manipulations on category-selective patterns of fMRI response in the PPA, RSC and OPA. In Experiment 1, a horizontal-pass or vertical-pass orientation filter was applied to images of indoor and natural scenes. The image filter did not have a large effect on the patterns of response. For example, vertical- and horizontal-pass filtered indoor images generated similar patterns of response. Similarly, vertical- and horizontal-pass filtered natural scenes generated similar patterns of response. In Experiment 2, low-pass or high-pass spatial frequency filters were applied to the images. We found that the image filter had a marked effect on the patterns of response in scene-selective regions. For example, low-pass indoor images generated similar patterns of response to low-pass natural images. The effect of filter varied across different scene-selective regions, suggesting differences in the way that scenes are represented in these regions. These results indicate that patterns of response in scene-selective regions are sensitive to the low-level properties of the image, particularly the spatial frequency content.
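    The kind of manipulation this abstract describes can be sketched as filtering in the Fourier domain. The snippet below is a minimal reconstruction, not the authors' actual stimulus pipeline; the cutoff value, the radial (isotropic) mask, and the NumPy implementation are all assumptions for illustration.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff, mode="low"):
    """Keep frequencies below (low-pass) or above (high-pass) a radial cutoff.

    `image` is a 2-D greyscale array; `cutoff` is a radius in cycles/image.
    Filtering is done by masking the 2-D Fourier spectrum.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                 # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w                 # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff if mode == "low" else radius > cutoff
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Toy 64x64 "image": the two bands partition the spectrum, so they sum
# back to the original image.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
low = spatial_frequency_filter(img, cutoff=8, mode="low")
high = spatial_frequency_filter(img, cutoff=8, mode="high")
print(np.allclose(low + high, img))
```

    An orientation (vertical- or horizontal-pass) filter, as in Experiment 1, would use an angular rather than radial mask over the same spectrum.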

    A data-driven approach to understanding the organization of high-level visual cortex

    The neural representation in scene-selective regions of human visual cortex, such as the PPA, has been linked to the semantic and categorical properties of the images. However, the extent to which patterns of neural response in these regions reflect more fundamental organizing principles is not yet clear. Existing studies generally employ stimulus conditions chosen by the experimenter, potentially obscuring the contribution of more basic stimulus dimensions. To address this issue, we used a data-driven approach to describe a large database of scenes (>100,000 images) in terms of their visual properties (orientation, spatial frequency, spatial location). K-means clustering was then used to select images from distinct regions of this feature space. Images in each cluster did not correspond to typical scene categories. Nevertheless, they elicited distinct patterns of neural response in the PPA. Moreover, the similarity of the neural response to different clusters in the PPA could be predicted by the similarity in their image properties. Interestingly, the neural response in the PPA was also predicted by perceptual responses to the scenes, but not by their semantic properties. These findings provide an image-based explanation for the emergence of higher-level representations in scene-selective regions of the human brain.
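    The clustering step described above can be sketched as a minimal k-means over image feature vectors. The toy 3-D feature space, the deterministic initialisation, and the cluster count below are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means: returns (centroids, labels) for feature matrix X."""
    # Simple deterministic initialisation: k points spread across the data.
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two well-separated toy "image property" clusters in a 3-D feature space
# (stand-ins for orientation / spatial frequency / location summaries).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(5, 0.1, (20, 3))])
centroids, labels = kmeans(X, k=2)
print(labels[:20], labels[20:])  # each half falls into one cluster
```

    In the study itself the feature vectors summarised real scene images and k was larger, but the assignment/update loop is the same.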

    Making sense of real-world scenes

    To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
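    The "shift along imaginary spokes" manipulation is a radial displacement in polar coordinates centred on fixation. The sketch below shows the geometry under that reading; the coordinate values are hypothetical and the units stand in for degrees of visual angle.

```python
import math

def shift_along_spoke(x, y, shift):
    """Move a point radially along the spoke joining it to fixation at (0, 0).

    `shift` is in the same units as x and y (degrees of visual angle here);
    the angular position of the point is unchanged.
    """
    r = math.hypot(x, y)             # eccentricity from fixation
    theta = math.atan2(y, x)         # angular position of the spoke
    return ((r + shift) * math.cos(theta), (r + shift) * math.sin(theta))

# A rectangle centred 5 deg from fixation, shifted outward by +1 deg,
# keeps its direction from fixation but moves to 6 deg eccentricity.
new_x, new_y = shift_along_spoke(3.0, 4.0, 1.0)
print(round(new_x, 2), round(new_y, 2))  # → 3.6 4.8
```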

    Motion adaptation and attention: A critical review and meta-analysis

    The motion aftereffect (MAE) provides a behavioural probe into the mechanisms underlying motion perception, and has been used to study the effects of attention on motion processing. Visual attention can enhance detection and discrimination of selected visual signals. However, the relationship between attention and motion processing remains contentious: not all studies find that attention increases MAEs. Our meta-analysis reveals several factors that explain superficially discrepant findings. Across studies (37 independent samples, 76 effects) motion adaptation was significantly and substantially enhanced by attention (Cohen's d = 1.12, p < .0001). The effect more than doubled when adapting to translating (vs. expanding or rotating) motion. Other factors affecting the attention-MAE relationship included stimulus size, eccentricity and speed. By considering these behavioural analyses alongside neurophysiological work, we conclude that feature-based (rather than spatial, or object-based) attention is the biggest driver of sensory adaptation. Comparisons between naïve and non-naïve observers, different response paradigms, and assessment of 'file-drawer effects' indicate that neither response bias nor publication bias is likely to have significantly inflated the estimated effect of attention.
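    The effect size reported above, Cohen's d, is the difference between two group means divided by the pooled standard deviation. The sketch below computes the classic two-group form; the MAE duration values are hypothetical and are not data from the meta-analysis.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d with pooled standard deviation (two-group form)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical MAE durations (seconds): attended vs. unattended adaptation.
attended = [12.1, 13.4, 11.8, 12.9, 13.0]
unattended = [11.8, 12.6, 11.0, 12.2, 12.5]
print(round(cohens_d(attended, unattended), 2))  # → 0.94
```

    A meta-analytic estimate like the d = 1.12 above aggregates many such per-study effects, weighting each by its precision.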

    The Neural Representation of Scenes in Visual Cortex

    Recent neuroimaging studies have identified a number of regions in the human brain that respond preferentially to visual scenes. These regions are thought to underpin our ability to perceive and interact with our local visual environment. However, the precise stimulus dimensions underlying the function of scene-selective regions remain controversial. Some accounts have proposed an organisation based on relatively high-level semantic or categorical properties of the stimulus. However, other accounts have suggested that lower-level visual features of the stimulus may offer a more parsimonious explanation. This thesis presents a series of fMRI experiments employing multivariate pattern analyses (MVPA) to test the role of low-level visual properties in the function of scene-selective regions. The first empirical chapter presents two experiments showing that patterns of neural response to different scene categories can be predicted by a model of the visual properties of scenes (GIST). The second empirical chapter demonstrates that direct manipulations of the spatial frequency content of the image significantly influence the patterns of response, with effects often being comparable to or greater than those of scene category. The third empirical chapter demonstrates that distinct patterns of response can be found to different scene categories even when images are Fourier phase scrambled such that low-level visual features are preserved, but perception of the categories is impaired. The fourth and final empirical chapter presents an experiment using a data-driven method to select clusters of scenes objectively based on their visual properties. These visually defined clusters did not correspond to typical scene categories, but nevertheless elicited distinct patterns of neural response. Taken together, these results support the importance of low-level visual features in the functional organisation of scene-selective regions. Scene-selective responses may arise from the combined sensitivity to multiple visual features that are themselves predictive of scene content.
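    Fourier phase scrambling, as used in the third empirical chapter, preserves an image's amplitude spectrum (a low-level property) while destroying the phase structure that carries recognisable content. The sketch below is one common way to implement it; borrowing the phase of a real noise image (rather than the study's exact procedure, which is an assumption here) keeps the spectrum Hermitian-symmetric so the result is real.

```python
import numpy as np

def phase_scramble(image, seed=0):
    """Replace an image's Fourier phase with that of white noise.

    The phase of a real noise image is Hermitian-antisymmetric, so the
    reconstructed image is (numerically) real while the amplitude
    spectrum of the original image is preserved exactly.
    """
    rng = np.random.default_rng(seed)
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    amplitude = np.abs(np.fft.fft2(image))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * noise_phase)))

rng = np.random.default_rng(2)
img = rng.standard_normal((32, 32))
scr = phase_scramble(img)
# Amplitude spectra match, although the two images look nothing alike.
print(np.allclose(np.abs(np.fft.fft2(scr)), np.abs(np.fft.fft2(img))))
```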

    Distinct and Convergent Visual Processing of High and Low Spatial Frequency Information in Faces

    We tested for differential brain response to distinct spatial frequency (SF) components in faces. During a functional magnetic resonance imaging experiment, participants were presented with "hybrid" faces containing superimposed low and high SF information from different identities. We used a repetition paradigm where faces at either SF range were independently repeated or changed across consecutive trials. In addition, we manipulated which SF band was attended. Our results suggest that repetition and attention affected partly overlapping occipitotemporal regions but did not interact. Changes of high SF faces increased responses of the right inferior occipital gyrus (IOG) and left inferior temporal gyrus (ITG), with the latter response being also modulated additively by attention. In contrast, the bilateral middle occipital gyrus (MOG) responded to repetition and attention manipulations of low SF. A common effect of high and low SF repetition was observed in the right fusiform gyrus (FFG). Follow-up connectivity analyses suggested direct influence of the MOG (low SF), IOG, and ITG (high SF) on the FFG responses. Our results reveal that different regions within occipitotemporal cortex extract distinct visual cues at different SF ranges in faces and that the outputs from these separate processes project forward to the right FFG, where the different visual cues may converge.

    The Effect of Visual Perceptual Load on Auditory Processing

    Many fundamental aspects of auditory processing occur even when we are not attending to the auditory environment. This has led to a popular belief that auditory signals are analysed in a largely pre-attentive manner, allowing hearing to serve as an early warning system. However, models of attention highlight that even processes that occur by default may rely on access to perceptual resources, and so can fail in situations when demand on sensory systems is particularly high. If this is the case for auditory processing, the classic paradigms employed in auditory attention research are not sufficient to distinguish between a process that is truly automatic (i.e., will occur regardless of any competing demands on sensory processing) and one that occurs passively (i.e., without explicit intent) but is dependent on resource availability. An approach that addresses explicitly whether an aspect of auditory analysis is contingent on access to capacity-limited resources is to control the resources available to the process; this can be achieved by actively engaging attention in a different task that depletes perceptual capacity to a greater or lesser extent. If the critical auditory process is affected by manipulating the perceptual demands of the attended task, this suggests that it is subject to the availability of processing resources; in contrast, a process that is automatic should not be affected by the level of load in the attended task. This approach has been firmly established within vision, but has been used relatively little to explore auditory processing. In the experiments presented in this thesis, I use MEG, pupillometry and behavioural dual-task designs to explore how auditory processing is impacted by visual perceptual load. The MEG data presented illustrate that both the overall amplitude of auditory responses, and the computational capacity of the auditory system, are affected by the degree of perceptual load in a concurrent visual task. These effects are mirrored by the pupillometry data, in which pupil dilation is found to reflect both the degree of load in the attended visual task (with larger pupil dilation in the high-load compared to the low-load visual task) and the sensory processing of irrelevant auditory signals (with reduced dilation to sounds under high versus low visual load). The data highlight that previous assumptions that auditory processing can occur automatically may be too simplistic; in fact, though many aspects of auditory processing occur passively and benefit from the allocation of spare capacity, they are not strictly automatic. Moreover, the data indicate that the impact of visual load can be seen even on the early sensory cortical responses to sound, suggesting not only that cortical processing of auditory signals is dependent on the availability of resources, but also that these resources are part of a global pool shared between vision and audition.