
    The integration of bottom-up and top-down signals in human perception in health and disease

    To extract a meaningful visual experience from the information falling on the retina, the visual system must integrate signals from multiple levels. Bottom-up signals provide input relating to local features while top-down signals provide contextual feedback and reflect internal states of the organism. In this thesis I will explore the nature and neural basis of this integration in two key areas. I will examine perceptual filling-in of artificial scotomas to investigate the bottom-up signals causing changes in perception when filling-in takes place. I will then examine how this perceptual filling-in is modified by top-down signals reflecting attention and working memory. I will also investigate hemianopic completion, an unusual form of filling-in, which may reflect a breakdown in top-down feedback from higher visual areas. The second part of the thesis will explore a different form of top-down control of visual processing. While the effects of cognitive mechanisms such as attention on visual processing are well-characterised, other types of top-down signal such as reward outcome are less well explored. I will therefore study whether signals relating to reward can influence visual processing. To address these questions, I will employ a range of methodologies including functional MRI, magnetoencephalography and behavioural testing in healthy participants and patients with cortical damage. I will demonstrate that perceptual filling-in of artificial scotomas is largely a bottom-up process but that higher cognitive functions can modulate the phenomenon. I will also show that reward modulates activity in higher visual areas in the absence of concurrent visual stimulation and that receiving reward leads to enhanced activity in primary visual cortex on the next trial. 
These findings reveal that integration occurs across multiple levels even for processes rooted in early retinotopic regions, and that higher cognitive processes such as reward can influence the earliest stages of cortical visual processing.

    The role of oscillatory brain activity in object processing and figure-ground segmentation in human vision

    The perception of an object as a single entity within a visual scene requires that its features are bound together and segregated from the background and/or other objects. Here, we used magnetoencephalography (MEG) to assess the hypothesis that coherent percepts may arise from the synchronized high frequency (gamma) activity between neurons that code features of the same object. We also assessed the role of low frequency (alpha, beta) activity in object processing. The target stimulus (i.e. object) was a small patch of a concentric grating of 3 c/°, viewed eccentrically. The background stimulus was either a blank field or a concentric grating of 3 c/° periodicity, viewed centrally. With patterned backgrounds, the target stimulus emerged - through rotation about its own centre - as a circular subsection of the background. Data were acquired using a 275-channel whole-head MEG system and analyzed using Synthetic Aperture Magnetometry (SAM), which allows one to generate images of task-related cortical oscillatory power changes within specific frequency bands. Significant oscillatory activity across a broad range of frequencies was evident at the V1/V2 border, and subsequent analyses were based on a virtual electrode at this location. When the target was presented in isolation, we observed that: (i) contralateral stimulation yielded a sustained power increase in gamma activity; and (ii) both contra- and ipsilateral stimulation yielded near identical transient power changes in alpha (and beta) activity. When the target was presented against a patterned background, we observed that: (i) contralateral stimulation yielded an increase in high-gamma (>55 Hz) power together with a decrease in low-gamma (40–55 Hz) power; and (ii) both contra- and ipsilateral stimulation yielded a transient decrease in alpha (and beta) activity, though the reduction tended to be greatest for contralateral stimulation.
The opposing power changes across different regions of the gamma spectrum with 'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support for the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention. © 2010 Elsevier B.V.
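The band-limited power comparison described above can be sketched in a few lines. This is an illustrative reconstruction on synthetic data, not the authors' SAM beamformer pipeline: it assumes a single virtual-electrode time series and estimates the task-related percent power change in a chosen band using a Butterworth bandpass filter and the squared Hilbert envelope.

```python
# Illustrative sketch, not the authors' SAM pipeline: percent power change in
# a frequency band for one virtual-electrode time series, estimated with a
# Butterworth bandpass filter and the squared Hilbert envelope.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_change(signal, fs, band, baseline, active):
    """Percent change in mean band power between two (start, end) windows in s."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, signal))) ** 2  # instantaneous band power
    t = np.arange(len(signal)) / fs
    p_base = power[(t >= baseline[0]) & (t < baseline[1])].mean()
    p_active = power[(t >= active[0]) & (t < active[1])].mean()
    return 100.0 * (p_active - p_base) / p_base

# Synthetic data: a 60 Hz "gamma" component whose amplitude doubles at t = 1 s,
# mimicking a sustained stimulus-induced gamma power increase.
fs = 600
t = np.arange(0, 2.0, 1 / fs)
amplitude = np.where(t < 1.0, 1.0, 2.0)
rng = np.random.default_rng(0)
sig = amplitude * np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)

change = band_power_change(sig, fs, (55, 70), (0.0, 1.0), (1.0, 2.0))
print(f"55-70 Hz power change: {change:.0f}%")
```

Because the synthetic amplitude doubles, band power roughly quadruples, so the function reports a large positive change; in the same way, opposing signs in high- and low-gamma sub-bands would appear as changes of opposite sign.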

    The Fate of Visible Features of Invisible Elements

    To investigate the integration of features, we have developed a paradigm in which an element is rendered invisible by visual masking. Still, the features of the element are visible as part of other display elements presented at different locations and times (sequential metacontrast). In this sense, we can “transport” features non-retinotopically across space and time. The features of the invisible element integrate with features of other elements if and only if the elements belong to the same spatio-temporal group. The mechanisms of this kind of feature integration seem to be quite different from classical mechanisms proposed for feature binding. We propose that feature processing, binding, and integration occur concurrently during processes that group elements into wholes.

    Positive emotion broadens attention focus through decreased position-specific spatial encoding in early visual cortex: evidence from ERPs

    Recent evidence suggests that attention selection processes can be modulated not only by stimulus-specific attributes or top-down expectations, but also by the participant's current mood state. In this study, we tested the prediction that the induction of positive mood can dynamically influence attention allocation and, in turn, modulate early stimulus sensory processing in primary visual cortex (V1). High-density visual event-related potentials (ERPs) were recorded while participants performed a demanding task at fixation and were presented with peripheral irrelevant visual textures, whose position was systematically varied in the upper visual field (close, medium, or far relative to fixation). Either a neutral or a positive mood was reliably induced and maintained throughout the experimental session. The ERP results showed that the earliest retinotopic component following stimulus onset (C1) strongly varied in topography as a function of the position of the peripheral distractor, in agreement with a near-far spatial gradient. However, this effect was altered for participants in a positive relative to a neutral mood. By contrast, positive mood did not modulate attention allocation for the central (task-relevant) stimuli, as reflected by the P300 component. We ran a control behavioural experiment confirming that positive emotion selectively impaired attention allocation to the peripheral distractors. These results suggest a mood-dependent tuning of position-specific encoding in V1 rapidly following stimulus onset. We discuss these results against the dominant broaden-and-build theory.
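The kind of C1 measurement underlying such results can be illustrated with a minimal sketch. This is not the study's pipeline: the epoch layout, the 60–90 ms window, and the synthetic negative-going upper-field deflection are all assumptions for illustration.

```python
# Illustrative sketch, not the study's pipeline: baseline-correct single-trial
# EEG epochs, average them into an ERP, and measure mean amplitude in an early
# C1-like window. The 60-90 ms window and the synthetic data are assumptions.
import numpy as np

def c1_mean_amplitude(epochs, times, baseline=(-0.1, 0.0), window=(0.06, 0.09)):
    """epochs: (n_trials, n_samples) in µV; times: (n_samples,) in seconds."""
    base = epochs[:, (times >= baseline[0]) & (times < baseline[1])].mean(axis=1, keepdims=True)
    erp = (epochs - base).mean(axis=0)  # grand-average ERP after baseline correction
    return erp[(times >= window[0]) & (times < window[1])].mean()

# Synthetic upper-visual-field data: the C1 is negative-going for upper-field
# stimuli, modelled here as a -2 µV Gaussian deflection peaking near 75 ms.
fs = 1000
times = np.arange(-0.1, 0.4, 1 / fs)
rng = np.random.default_rng(1)
c1_shape = -2.0 * np.exp(-((times - 0.075) ** 2) / (2 * 0.01 ** 2))
epochs = c1_shape + rng.standard_normal((50, times.size))  # 50 noisy trials

print(f"C1 mean amplitude: {c1_mean_amplitude(epochs, times):.2f} µV")
```

Averaging across trials suppresses the single-trial noise, so the early negative deflection becomes measurable; comparing such amplitudes across distractor positions and mood conditions is the logic behind the reported topographic gradient.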

    Form perception and neural feedback: insights from V1 and V2

    In the brain, every cortical inter-area feedforward projection is paired with a reciprocal feedback connection. Despite the pervasiveness of feedback, our understanding of its functional role in form perception remains incomplete, particularly in behaving animals. This problem is addressed in humans with a novel form completion paradigm. For seven subjects (5 female), EEG waveforms were analysed using three linear models, which showed non-significant differences between stimulus conditions designed to produce differences by manipulating neural feedback to V1. For two of these subjects (one female), combined magnetic resonance imaging (MRI) and functional MRI (fMRI) cortical maps additionally allowed the signals of anatomically close areas such as V1 and V2 to be decomposed and neural feedback inferred. Differences between stimulus conditions arose once signals had been divided into V1 and V2. Significant differences (p < .05) for one subject in V1 and V2 suggest cortical interactions at 100 ms and 350 ms. This suggests the form completion paradigm has utility for investigating the influence of the V2 far receptive-field surround on V1, provided future signal-to-noise issues are resolved.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks; and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
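The reported F(1,4) corresponds to a repeated-measures comparison of one within-subject factor at two levels (standard vs. shifted task) across five participants, which is equivalent to a paired t-test with F = t². A minimal sketch, using invented accuracy values rather than the study's data:

```python
# Hypothetical sketch of the reported statistic: with one within-subject factor
# at two levels (standard vs. shifted task) and five participants, the
# repeated-measures F(1, 4) equals the squared paired t statistic. The accuracy
# values below are invented for illustration, not the study's data.
import numpy as np
from scipy import stats

standard = np.array([0.72, 0.68, 0.75, 0.70, 0.66])  # hypothetical accuracies
shifted = np.array([0.70, 0.69, 0.71, 0.68, 0.65])   # hypothetical accuracies

t, p = stats.ttest_rel(standard, shifted)  # paired-samples t-test, df = 4
F = t ** 2                                 # equivalent F(1, 4)
print(f"F(1,4) = {F:.3f}, p = {p:.3f}")
```

With only five participants the test has low power, which is one reason a non-significant F(1,4) should be interpreted cautiously, as the abstract's hedged conclusion ("may suggest") reflects.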

    Tremotopic mapping of the human thalamic reticular nucleus

    The thalamic reticular nucleus is an important structure in the mammalian brain, participating in the coordination of large-scale processes such as sleep and attention. To date, this structure has not been investigated in the human brain. I developed a series of methods for anatomically and functionally localizing the visual regions of the thalamic reticular nucleus in the human brain using magnetic resonance imaging and the presentation of various flicker frequencies. First, I describe the results obtained from a modified retinotopy analysis. Second, I apply network theory to the data in an attempt to localize the TRN in a data-driven way. Third, I describe a lateral-inhibitory network in which the TRN participates. I conclude that the TRN plays a role in regulating interhemispheric activity in the brain, and that flicker can be used to probe the resonance properties of neural populations with magnetic resonance imaging.

    Top-down signals in visual selective attention

    This thesis describes experimental work on the brain mechanisms underlying human visual selective attention, with a focus on top-down activity changes in visual cortex. Using a combination of methods, the experiments addressed related questions concerning the functional significance and putative origins of such activity modulations due to selective attention. More specifically, the experiment described in Chapter 2 shows with TMS-elicited phosphenes that anticipatory selective attention can change excitability of visual cortex in a spatially-specific manner, even when thalamic gating of afferent input is ruled out. The behavioural and fMRI experiments described in Chapter 3 indicate that top-down influences of selective attention are not limited to enhancements of visual target processing, but may also involve anticipatory processes that minimize the impact of visual distractor stimuli. Chapters 4-6 then address questions about potential origins of such top-down activity modulations in visual cortex, using concurrent TMS-fMRI and psychophysics. These experiments show that TMS applied to the right human frontal eye field can causally influence visual cortex activity in a spatially-specific manner (Chapter 4), which has direct functional consequences for visual perception (Chapter 5), and is reliably different from that caused by TMS to the right intra-parietal sulcus (Chapter 6). The data presented in this thesis indicate that visual selective attention may involve top-down signals that bias visual processing towards behaviourally relevant stimuli, at the expense of distracting information present in the scene. Moreover, the experiments provide causal evidence in the human brain that distinct top-down signals can originate in anatomical feedback loops from frontal or parietal areas, and that such regions may have different functional influences on visual processing. 
These findings provide neural confirmation for some theoretical proposals in the literature on visual selective attention, and they introduce and corroborate new methods that might be of considerable utility for addressing such mechanisms directly.

    Diagnostic information use to understand brain mechanisms of facial expression categorization

    Proficient categorization of facial expressions is crucial for normal social interaction. Neurophysiological, behavioural, event-related potential, lesion and functional neuroimaging techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless process, and the associated arrangement of bilateral networks. These brain areas exhibit consistent and replicable activation patterns, and can be broadly defined to include visual (occipital and temporal), limbic (amygdala) and prefrontal (orbitofrontal) regions. Together, these areas support early perceptual processing, the formation of detailed representations and subsequent recognition of expressive faces. Despite the critical role of facial expressions in social communication and extensive work in this area, it is still not known how the brain decodes nonverbal signals in terms of expression-specific features. For these reasons, this thesis investigates the role of these so-called diagnostic facial features at three significant stages in expression recognition: the spatiotemporal inputs to the visual system, the dynamic integration of features in higher visual (occipitotemporal) areas, and early sensitivity to features in V1. In Chapter 1, the basic emotion categories are presented, along with the brain regions that are activated by these expressions. In line with this, the current cognitive theory of face processing reviews functional and anatomical dissociations within the distributed neural “face network”. Chapter 1 also introduces the way in which we measure and use diagnostic information to derive brain sensitivity to specific facial features, and how this is a useful tool by which to understand spatial and temporal organisation of expression recognition in the brain. In relation to this, hierarchical, bottom-up neural processing is discussed along with high-level, top-down facilitatory mechanisms. 
Chapter 2 describes an eye-movement study revealing that the inputs to the visual system via fixations reflect diagnostic information use. Inputs to the visual system dictate the information distributed to cognitive systems during the seamless and rapid categorization of expressive faces, and how we perform eye movements during this task informs how task-driven and stimulus-driven mechanisms interact to guide the extraction of information supporting recognition. We recorded eye movements of observers who categorized the six basic categories of facial expressions, and used a measure of task-relevant information (diagnosticity) to interpret oculomotor behaviour, with a focus on two findings. First, fixated regions reveal expression differences. Second, the intersection of fixations with diagnostic information increases over the course of a fixation sequence. This suggests a top-down drive to acquire task-relevant information, with different functional roles for first and final fixations. In Chapter 3, a combination of psychophysical studies of visual recognition and the EEG (electroencephalogram) signal is used to infer the dynamics of feature extraction and use during the recognition of facial expressions. The results reveal a process that integrates visual information over about 50 milliseconds prior to the face-sensitive N170 event-related potential, starting at the eye region and proceeding gradually towards lower regions. The finding that informative features for recognition are not processed simultaneously but in an orderly progression over a short time period is instructive for understanding the processes involved in visual recognition, and in particular the integration of bottom-up and top-down processes. In Chapter 4 we use fMRI to investigate task-dependent activation to diagnostic features in early visual areas, suggesting top-down mechanisms, as V1 traditionally exhibits only simple response properties. 
Chapter 3 revealed that diagnostic features modulate the temporal dynamics of brain signals in higher visual areas. Within the hierarchical visual system, however, it is not known if an early (V1/V2/V3) sensitivity to diagnostic information contributes to categorical facial judgements, conceivably driven by top-down signals triggered in visual processing. Using retinotopic mapping, we reveal task-dependent information extraction within the earliest cortical representation (V1) of two features known to be differentially necessary for face recognition tasks (eyes and mouth). This strategic encoding of face images is beyond typical V1 properties and suggests a top-down influence of task extending down to the earliest retinotopic stages of visual processing. The significance of these data is discussed in the context of the cortical face network and bidirectional processing in the visual system. The visual cognition of facial expression processing is concerned with the interactive processing of bottom-up sensory-driven information and top-down mechanisms to relate visual input to categorical judgements. The three experiments presented in this thesis are summarized in Chapter 5 in relation to how diagnostic features can be used to explore such processing in the human brain, leading to proficient facial expression categorization.