Patterns of neural response in scene-selective regions of the human brain are affected by low-level manipulations of spatial frequency
Neuroimaging studies have found distinct patterns of response to different categories of scenes. However, the relative importance of low-level image properties in generating these response patterns is not fully understood. To address this issue, we directly manipulated the low-level properties of scenes in a way that preserved the ability to perceive the category. We then measured the effect of these manipulations on category-selective patterns of fMRI response in the PPA, RSC and OPA. In Experiment 1, a horizontal-pass or vertical-pass orientation filter was applied to images of indoor and natural scenes. The image filter did not have a large effect on the patterns of response. For example, vertical- and horizontal-pass filtered indoor images generated similar patterns of response. Similarly, vertical- and horizontal-pass filtered natural scenes generated similar patterns of response. In Experiment 2, low-pass or high-pass spatial frequency filters were applied to the images. We found that the image filter had a marked effect on the patterns of response in scene-selective regions. For example, low-pass indoor images generated patterns of response similar to those of low-pass natural images. The effect of filter varied across different scene-selective regions, suggesting differences in the way that scenes are represented in these regions. These results indicate that patterns of response in scene-selective regions are sensitive to the low-level properties of the image, particularly its spatial frequency content.
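The spatial-frequency manipulation described above can be sketched in a few lines. This is a minimal, idealised (hard-cutoff) filter for illustration only, assuming a square greyscale image; the cutoff value and the synthetic image are arbitrary choices, not the study's stimuli or pipeline.

```python
import numpy as np

def sf_filter(image, cutoff, mode="low"):
    """Ideal low- or high-pass spatial frequency filter.

    `cutoff` is a radius in cycles per image: 'low' keeps frequencies
    at or below it, 'high' keeps everything above it.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # radial frequency of each Fourier coefficient, in cycles per image
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r <= cutoff if mode == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# synthetic "scene": a coarse luminance gradient plus fine-grained noise
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0, 1, 64), (64, 1)) + 0.1 * rng.standard_normal((64, 64))
low = sf_filter(img, cutoff=8, mode="low")    # keeps the coarse gradient
high = sf_filter(img, cutoff=8, mode="high")  # keeps the fine noise
```

Because the two masks partition the spectrum exactly, `low + high` reconstructs the original image; filters used in the literature typically have smooth (e.g. Gaussian or Butterworth) cutoffs instead of this hard edge.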
A data-driven approach to understanding the organization of high-level visual cortex
The neural representation in scene-selective regions of human visual cortex, such as the PPA, has been linked to the semantic and categorical properties of the images. However, the extent to which patterns of neural response in these regions reflect more fundamental organizing principles is not yet clear. Existing studies generally employ stimulus conditions chosen by the experimenter, potentially obscuring the contribution of more basic stimulus dimensions. To address this issue, we used a data-driven approach to describe a large database of scenes (>100,000 images) in terms of their visual properties (orientation, spatial frequency, spatial location). K-means clustering was then used to select images from distinct regions of this feature space. Images in each cluster did not correspond to typical scene categories. Nevertheless, they elicited distinct patterns of neural response in the PPA. Moreover, the similarity of the neural response to different clusters in the PPA could be predicted by the similarity in their image properties. Interestingly, the neural response in the PPA was also predicted by perceptual responses to the scenes, but not by their semantic properties. These findings provide an image-based explanation for the emergence of higher-level representations in scene-selective regions of the human brain
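The clustering step described above can be sketched as follows. This is a minimal Lloyd's k-means on made-up feature vectors, intended only to show the selection logic; the feature dimensions, group structure, and numbers are hypothetical, not the study's database or its actual feature-extraction pipeline.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal Lloyd's k-means with a farthest-point initialisation."""
    rng = np.random.default_rng(seed)
    # farthest-point init: spreads the initial centres across the data
    centres = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[d.argmax()])
    centres = np.array(centres, dtype=float)
    for _ in range(n_iter):
        # assign every point to its nearest centre ...
        labels = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2).argmin(axis=1)
        # ... then move each centre to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres, labels

# hypothetical stand-in for the scene database: three groups of "images",
# each described by 4 visual-property dimensions -- illustrative only
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(200, 4)) for c in (0.0, 2.0, 4.0)])
centres, labels = kmeans(X, k=3)
```

Images nearest each cluster centre could then be selected as stimuli from distinct regions of the feature space. Here the groups are recovered cleanly because they are well separated by construction.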
Making sense of real-world scenes
To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests that (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) objects may be stored in, and retrieved from, a pre-attentional store during this task.
Motion adaptation and attention: A critical review and meta-analysis
The motion aftereffect (MAE) provides a behavioural probe into the mechanisms underlying motion perception, and has been used to study the effects of attention on motion processing. Visual attention can enhance detection and discrimination of selected visual signals. However, the relationship between attention and motion processing remains contentious: not all studies find that attention increases MAEs. Our meta-analysis reveals several factors that explain superficially discrepant findings. Across studies (37 independent samples, 76 effects) motion adaptation was significantly and substantially enhanced by attention (Cohen's d = 1.12, p < .0001). The effect more than doubled when adapting to translating (vs. expanding or rotating) motion. Other factors affecting the attention-MAE relationship included stimulus size, eccentricity and speed. By considering these behavioural analyses alongside neurophysiological work, we conclude that feature-based (rather than spatial or object-based) attention is the biggest driver of sensory adaptation. Comparisons between naïve and non-naïve observers, different response paradigms, and assessment of 'file-drawer effects' indicate that neither response bias nor publication bias is likely to have significantly inflated the estimated effect of attention.
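The reported effect size (Cohen's d = 1.12) standardises a mean difference by a pooled standard deviation. A minimal sketch with made-up numbers, not data from the meta-analysis:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups, using a pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# illustrative values only: MAE duration (s) with vs. without attention
d = cohens_d(mean1=12.0, mean2=9.0, sd1=2.5, sd2=2.8, n1=20, n2=20)
```

A meta-analysis then combines many such per-study estimates, typically weighting each by its precision.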
I expect, therefore I see: individual differences in visual awareness
Predictive processing theories posit that awareness of the visual world emerges as the brain engages in predictive inference about the causes of its sensory input. At each level of the processing hierarchy top-down predictions are corrected by bottom-up sensory prediction error to form behaviourally optimal inferences about the state of the visual world. Research suggests there may be individual differences in predictive processing mechanisms such that some individuals are more reliant on prior knowledge, whereas others assign more weight to sensory evidence. Predictive processing biases are thought to manifest in a range of typical and atypical perceptual experiences including proneness to perceptual illusions, sensory sensitivity in autism, and hallucinations in psychosis. The overarching aim of this thesis was to investigate whether in the general population predictive processing biases predict individual differences in visual awareness. Change blindness was selected as the central paradigm of investigation, as it can be conceptualised as a failure to incorporate a novel change into the current prediction about the state of the visual world.
The empirical work in Chapter 2 aimed to characterise individual differences in visual change detection using naturalistic scenes and to identify the perceptual and cognitive measures that predict noticing ability. There were reliable individual differences in change detection that generalised to ecologically valid displays. The ability to notice visual changes was predicted by the strength and stability of perceptual predictions, as measured by the accuracy of visual short-term memory and attentional control in the face of distractors.
In Chapter 3 I used voxel-based morphometry to investigate whether inter-individual variability in brain structure predicts individual differences in visual awareness. The latter was assessed by the change blindness task as well as its strongest predictor measures (visual short-term memory, attentional capture, and perceptual rivalry). Regions of interest (ROIs) were selected in the parietal and visual cortices based on previous evidence that these areas are causally involved in the awareness of visual stimuli. This study aimed to discover whether the average grey matter density in the ROIs predicts susceptibility to CB. The ROI-based analyses revealed that the average grey matter density in the left posterior parietal cortex predicted visual short-term memory accuracy, but none of the other hypothesised relationships were significant.
Chapter 4 aimed to measure individual differences in the reliance on prior knowledge by employing the Mooney face detection task. In this task participants disambiguated faces in two-tone degraded images before and after the presentation of the original versions of the images. Better change detection was predicted by Mooney face detection without any prior knowledge of the images, a measure of ‘perceptual closure’ or an ability to generate a gestalt of a scene. The attention to detail subscale of the autism spectrum also predicted superior change detection. Reliance on prior knowledge in visual perception (assessed by improvement in Mooney face detection after seeing original images) did not consistently predict atypical perceptual experiences associated with the autism spectrum or schizotypy.
Chapter 5 was an investigation into, firstly, whether there is a general predictive processing bias, which manifests across different methods of inducing prior knowledge, or whether such a bias is paradigm-specific and, secondly, whether reliance on priors predicts perceptual experiences and traits. All prior manipulations in this study led to an increased tendency to see the expected stimulus in a binocular rivalry display, except adaptation, which led to a suppression of visual awareness. Attentional control, perceptual priming, expectancy, and imagery loaded onto a common factor, suggesting that the strength of selective attention is closely linked with the facilitatory effect of expectation. The strength of adaptation predicted superior change detection, and perceptual priming predicted the propensity to experience perceptual illusions.
Taken together, these findings suggest that there are reliable individual differences in visual change detection, and these are predicted by the strength of visual short-term memory representations, attentional control, perceptual closure ability, as well as the strength of low-level adaptation. Possessing expectations facilitates the entry of the corresponding percept into awareness, irrespective of the method of prior induction. The facilitatory effect that priors exert on visual awareness across different methods is closely linked with the ability to exert attentional control. This suggests that the effects of expectations on awareness may be attentional. However, predictive processing biases were method-specific, in that a facilitatory effect obtained with one prior induction method did not necessarily predict the magnitude of the effect with a different method. Some prior effects (e.g., perceptual priming, imagery, and adaptation) yielded correlations with perceptual experiences and traits in the general population. As the research in this thesis is correlational, future studies will need to delineate the effects of expectation, attention, and adaptation on visual awareness and explore the neural representations of these mechanisms.
The Neural Representation of Scenes in Visual Cortex
Recent neuroimaging studies have identified a number of regions in the human brain that respond preferentially to visual scenes. These regions are thought to underpin our ability to perceive and interact with our local visual environment. However, the precise stimulus dimensions underlying the function of scene-selective regions remain controversial. Some accounts have proposed an organisation based on relatively high-level semantic or categorical properties of the stimulus. However, other accounts have suggested that lower-level visual features of the stimulus may offer a more parsimonious explanation. This thesis presents a series of fMRI experiments employing multivariate pattern analyses (MVPA) in order to test the role of low-level visual properties in the function of scene-selective regions. The first empirical chapter presents two experiments showing that patterns of neural response to different scene categories can be predicted by a model of the visual properties of scenes (GIST). The second empirical chapter demonstrates that direct manipulations of the spatial frequency content of the image significantly influence the patterns of response, with effects often being comparable to or greater than those of scene category. The third empirical chapter demonstrates that distinct patterns of response can be found to different scene categories even when images are Fourier phase scrambled such that low-level visual features are preserved, but perception of the categories is impaired. The fourth and final empirical chapter presents an experiment using a data-driven method to select clusters of scenes objectively based on their visual properties. These visually defined clusters did not correspond to typical scene categories, but nevertheless elicited distinct patterns of neural response. Taken together, these results support the importance of low-level visual features in the functional organisation of scene-selective regions. 
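The Fourier phase scrambling mentioned in the third empirical chapter can be sketched as follows. This is one common recipe (adding the phase spectrum of a white-noise image, which preserves the amplitude spectrum exactly), not necessarily the exact procedure used in the thesis.

```python
import numpy as np

def phase_scramble(image, seed=0):
    """Randomise an image's Fourier phases while keeping its amplitudes.

    Adding the phase spectrum of a real-valued noise image keeps the
    scrambled spectrum conjugate-symmetric, so the result stays real.
    """
    rng = np.random.default_rng(seed)
    f = np.fft.fft2(image)
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    scrambled = np.fft.ifft2(np.abs(f) * np.exp(1j * (np.angle(f) + noise_phase)))
    return np.real(scrambled)

img = np.outer(np.hanning(32), np.hanning(32))  # a simple structured image
scrambled = phase_scramble(img)
```

The scrambled image retains the original's orientation and spatial-frequency energy, but its spatial structure, and hence perception of the category, is destroyed.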
Scene-selective responses may arise from the combined sensitivity to multiple visual features that are themselves predictive of scene content
Distinct and Convergent Visual Processing of High and Low Spatial Frequency Information in Faces
We tested for differential brain response to distinct spatial frequency (SF) components in faces. During a functional magnetic resonance imaging experiment, participants were presented with "hybrid" faces containing superimposed low and high SF information from different identities. We used a repetition paradigm where faces at either SF range were independently repeated or changed across consecutive trials. In addition, we manipulated which SF band was attended. Our results suggest that repetition and attention affected partly overlapping occipitotemporal regions but did not interact. Changes of high SF faces increased responses of the right inferior occipital gyrus (IOG) and left inferior temporal gyrus (ITG), with the latter response also being modulated additively by attention. In contrast, the bilateral middle occipital gyrus (MOG) responded to repetition and attention manipulations of low SF. A common effect of high and low SF repetition was observed in the right fusiform gyrus (FFG). Follow-up connectivity analyses suggested direct influence of the MOG (low SF), and the IOG and ITG (high SF), on the FFG responses. Our results reveal that different regions within occipitotemporal cortex extract distinct visual cues at different SF ranges in faces and that the outputs from these separate processes project forward to the right FFG, where the different visual cues may converge.
A tilt after-effect for images of buildings: Evidence of selectivity for the orientation of everyday scenes
The tilt after-effect (TAE) is thought to be a manifestation of gain control in mechanisms selective for spatial orientation in visual stimuli. It has been demonstrated with luminance-defined stripes, contrast-defined stripes, orientation-defined stripes, and even with natural images. Of course, all images can be decomposed into a sum of stripes, so it should not be surprising to find a TAE when adapting and test images contain stripes that differ by 15° or so. We show this latter condition is not necessary for the TAE with natural images: adaptation to slightly tilted and vertically filtered houses produced a “repulsive” bias in the perceived orientation of horizontally filtered houses. These results suggest gain control in mechanisms selective for spatial orientation in natural images
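An orientation-selective filter of the kind used to produce vertically and horizontally filtered images can be sketched in the Fourier domain. This is an idealised hard-edged band for illustration; filters in the literature are usually smooth (e.g. wrapped Gaussians in orientation), and the bandwidth here is an arbitrary choice.

```python
import numpy as np

def orientation_filter(image, centre_deg, bandwidth_deg=20):
    """Keep Fourier energy within +/- bandwidth_deg of one orientation.

    Orientations are measured on the frequency plane: energy at 0 deg
    corresponds to gratings varying horizontally (i.e. vertical stripes).
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # orientation of each frequency component, folded into [0, 180)
    theta = np.degrees(np.arctan2(yy - h / 2, xx - w / 2)) % 180
    diff = np.minimum(np.abs(theta - centre_deg), 180 - np.abs(theta - centre_deg))
    mask = diff <= bandwidth_deg
    mask[h // 2, w // 2] = True  # always keep the DC (mean luminance) term
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# vertical stripes: their energy sits on the horizontal frequency axis (0 deg)
stripes = np.tile(np.sin(2 * np.pi * 8 * np.arange(64) / 64), (64, 1))
kept = orientation_filter(stripes, centre_deg=0)      # passes the stripes
removed = orientation_filter(stripes, centre_deg=90)  # suppresses them
```

Applying such a filter with centres around 0 and 90 degrees to the same photograph would yield the horizontally and vertically filtered image pairs used in adaptation experiments of this kind.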
The Effect of Visual Perceptual Load on Auditory Processing
Many fundamental aspects of auditory processing occur even when we are not attending to the auditory environment. This has led to a popular belief that auditory signals are analysed in a largely pre-attentive manner, allowing hearing to serve as an early warning system. However, models of attention highlight that even processes that occur by default may rely on access to perceptual resources, and so can fail in situations when demand on sensory systems is particularly high. If this is the case for auditory processing, the classic paradigms employed in auditory attention research are not sufficient to distinguish between a process that is truly automatic (i.e., will occur regardless of any competing demands on sensory processing) and one that occurs passively (i.e., without explicit intent) but is dependent on resource-availability. An approach that addresses explicitly whether an aspect of auditory analysis is contingent on access to capacity-limited resources is to control the resources available to the process; this can be achieved by actively engaging attention in a different task that depletes perceptual capacity to a greater or lesser extent. If the critical auditory process is affected by manipulating the perceptual demands of the attended task this suggests that it is subject to the availability of processing resources; in contrast a process that is automatic should not be affected by the level of load in the attended task. This approach has been firmly established within vision, but has been used relatively little to explore auditory processing. In the experiments presented in this thesis, I use MEG, pupillometry and behavioural dual-task designs to explore how auditory processing is impacted by visual perceptual load. The MEG data presented illustrate that both the overall amplitude of auditory responses, and the computational capacity of the auditory system are affected by the degree of perceptual load in a concurrent visual task. 
These effects are mirrored by the pupillometry data in which pupil dilation is found to reflect both the degree of load in the attended visual task (with larger pupil dilation to the high compared to the low load visual load task), and the sensory processing of irrelevant auditory signals (with reduced dilation to sounds under high versus low visual load). The data highlight that previous assumptions that auditory processing can occur automatically may be too simplistic; in fact, though many aspects of auditory processing occur passively and benefit from the allocation of spare capacity, they are not strictly automatic. Moreover, the data indicate that the impact of visual load can be seen even on the early sensory cortical responses to sound, suggesting not only that cortical processing of auditory signals is dependent on the availability of resources, but also that these resources are part of a global pool shared between vision and audition