
    Top-down effects on early visual processing in humans: a predictive coding framework

    An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubt on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are exclusively determined by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding.
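
    A minimal sketch can make the predictive coding idea concrete: a higher area sends a prediction of V1 activity downward, V1 signals the prediction error between its input and that prediction, and the prediction is iteratively updated to cancel the error. The Python snippet below is a generic one-layer illustration of this scheme, not the model proposed in the article; the signal values and learning rate are arbitrary assumptions.

        import numpy as np

        # One-layer predictive coding sketch (illustrative values only): a top-down
        # prediction of the bottom-up input is updated to cancel the prediction error.
        rng = np.random.default_rng(0)
        v1_input = rng.normal(loc=1.0, scale=0.1, size=8)   # hypothetical bottom-up drive
        prediction = np.zeros(8)                             # initial top-down prediction
        learning_rate = 0.2                                  # arbitrary update step

        for step in range(20):
            prediction_error = v1_input - prediction          # residual carried by early visual activity
            prediction += learning_rate * prediction_error    # top-down estimate moves toward the input

        print("mean remaining error:", np.abs(v1_input - prediction).mean())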

    The integration of bottom-up and top-down signals in human perception in health and disease

    To extract a meaningful visual experience from the information falling on the retina, the visual system must integrate signals from multiple levels. Bottom-up signals provide input relating to local features while top-down signals provide contextual feedback and reflect internal states of the organism. In this thesis I will explore the nature and neural basis of this integration in two key areas. I will examine perceptual filling-in of artificial scotomas to investigate the bottom-up signals causing changes in perception when filling-in takes place. I will then examine how this perceptual filling-in is modified by top-down signals reflecting attention and working memory. I will also investigate hemianopic completion, an unusual form of filling-in, which may reflect a breakdown in top-down feedback from higher visual areas. The second part of the thesis will explore a different form of top-down control of visual processing. While the effects of cognitive mechanisms such as attention on visual processing are well-characterised, other types of top-down signal such as reward outcome are less well explored. I will therefore study whether signals relating to reward can influence visual processing. To address these questions, I will employ a range of methodologies including functional MRI, magnetoencephalography and behavioural testing in healthy participants and patients with cortical damage. I will demonstrate that perceptual filling-in of artificial scotomas is largely a bottom-up process but that higher cognitive functions can modulate the phenomenon. I will also show that reward modulates activity in higher visual areas in the absence of concurrent visual stimulation and that receiving reward leads to enhanced activity in primary visual cortex on the next trial. These findings reveal that integration occurs across multiple levels even for processes rooted in early retinotopic regions, and that higher cognitive processes such as reward can influence the earliest stages of cortical visual processing.

    The role of visual short term memory load in visual sensory detection

    In this thesis I established the role of Visual Short-Term Memory (VSTM) load in visual detection, comparing it to the roles of perceptual load and Working Memory (WM) cognitive control load. Participants performed a short-term memory task combined with a visual detection task (as well as an attention task, Chapter 2) during the memory delay. The level and type of load were varied (perceptual load, VSTM load or WM cognitive control load). Measures of detection sensitivity demonstrated that increased VSTM load and increased perceptual load both impaired detection sensitivity to an equivalent magnitude. In contrast, increased WM cognitive control load had either no effect on detection or, under some conditions (when the detection task was combined with an attention task of higher priority), resulted in enhanced detection sensitivity, the opposite effect to VSTM load. The contrasting effects of different types of memory load rule out alternative accounts in terms of general task difficulty. Other interpretations in terms of changes in attention deployment, response bias, task priorities, or verbal strategies were also ruled out. These VSTM load effects lasted over delays of 4 seconds, generalized to foveal, parafoveal and peripheral stimuli, and were predicted from estimates of the effects of load on VSTM capacity. fMRI results (Chapter 4) showed that high VSTM load reduces retinotopic V1 responses to the detection stimulus, and psychophysics experiments (Chapter 5) showed that high VSTM load reduces the effective contrast of the detection stimulus. The results in this thesis distinguish the role of WM maintenance processes from that of WM cognitive control processes in visual detection. These findings provide further support for the sensory recruitment hypothesis of VSTM, clarify previous discrepancies in WM research, and extend load theory to account for the effects of VSTM load on visual detection.
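
    Detection sensitivity in designs like this is commonly quantified with the signal detection measure d', computed from hit and false alarm rates so that impaired detection under load can be separated from shifts in response bias. The snippet below is a standard d' calculation applied to made-up response counts, included only to make the measure concrete; the numbers are not data from the thesis.

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Signal detection sensitivity (d') from response counts; a small correction
            keeps the rates away from 0 and 1 so the z-transform stays finite."""
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical counts illustrating reduced sensitivity under high VSTM load.
        print("low load d':", d_prime(hits=38, misses=12, false_alarms=8, correct_rejections=42))
        print("high load d':", d_prime(hits=28, misses=22, false_alarms=9, correct_rejections=41))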

    Effects of visual short-term memory load and attentional demand on the contrast response function

    Visual short-term memory (VSTM) load leads to impaired perception during maintenance. Here, we fitted the contrast response function to psychometric orientation discrimination data while also varying attention demand during maintenance to investigate: (1) whether VSTM load effects on perception are mediated by a modulation of the contrast threshold, consistent with contrast gain accounts, or by the function asymptote (1 − lapse rate), consistent with response gain accounts; and (2) whether the VSTM load effects on the contrast response function depend on the availability of attentional resources. We manipulated VSTM load via the number of items in the memory set in a color and location VSTM task and assessed the contrast response function for an orientation discrimination task during maintenance. Attention demand was varied through spatial cuing of the orientation stimulus. Higher VSTM load increased the estimated contrast threshold of the contrast response function without affecting the estimated asymptote, but only when the discrimination task demanded attention. When attentional demand was reduced (in the cued conditions), the VSTM load effects on the contrast threshold were eliminated. The results suggest that VSTM load reduces perceptual sensitivity by increasing contrast thresholds, suggestive of a contrast gain modulation mechanism, as long as the perceptual discrimination task demands attention. These findings support recent claims that attentional resources are shared between perception and VSTM maintenance processes.
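
    The contrast gain versus response gain distinction can be made explicit with a standard Naka-Rushton style psychometric model, in which performance rises with contrast toward an asymptote of (1 − lapse rate) and the semi-saturation constant acts as the contrast threshold. The sketch below uses generic parameter names and illustrative values (it is not the fitting procedure used in the study) to show how a load effect could appear either as a threshold shift (contrast gain) or as a lowered asymptote (response gain).

        import numpy as np

        def contrast_response(contrast, threshold, lapse_rate, slope=2.0, guess_rate=0.5):
            """Generic Naka-Rushton style psychometric function for a 2AFC task:
            performance rises from the guess rate toward an asymptote of (1 - lapse_rate)."""
            gain = contrast**slope / (contrast**slope + threshold**slope)
            return guess_rate + (1.0 - guess_rate - lapse_rate) * gain

        contrasts = np.logspace(-2, 0, 6)                                               # 1% to 100% contrast
        baseline = contrast_response(contrasts, threshold=0.05, lapse_rate=0.02)
        contrast_gain = contrast_response(contrasts, threshold=0.10, lapse_rate=0.02)   # threshold shift
        response_gain = contrast_response(contrasts, threshold=0.05, lapse_rate=0.10)   # asymptote drop

        for c, b, cg, rg in zip(contrasts, baseline, contrast_gain, response_gain):
            print(f"contrast {c:.3f}: baseline {b:.2f}, contrast gain {cg:.2f}, response gain {rg:.2f}")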

    Attention Trade-Off for Localization and Saccadic Remapping

    Predictive remapping may be the principal mechanism of maintaining visual stability, and attention is crucial for this process. We aimed to investigate the role of attention in predictive remapping in a dual-task paradigm with two conditions, with and without saccadic remapping. The first task was to remember the clock hand position either after a saccade to the clock face (saccade condition requiring remapping) or after the clock was displaced to the fixation point (fixation condition with no saccade). The second task was to report the remembered location of a dot shown peripherally in the upper screen for 1 s. We predicted that performance in the two tasks would interfere in the saccade condition, but not in the fixation condition, because of the attentional demands needed for remapping with the saccade. For the clock estimation task, answers in the saccadic trials tended to underestimate the actual position by approximately 37 ms, while responses in the fixation trials were closer to veridical. As predicted, the findings also revealed a significant interaction between the two tasks, showing decreased predicted accuracy in the clock task with increased error in the localization task, but only in the saccadic condition. Taken together, these results point to the key role of attention in predictive remapping.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
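
    The spatial manipulation used to test the Gestalt account, shifting each rectangle along an imaginary spoke from fixation by ±1 degree, amounts to moving its position vector radially relative to the fixation point. The helper below is a small geometric illustration with made-up coordinates, not the original stimulus code.

        import numpy as np

        def shift_along_spoke(position_deg, fixation_deg, shift_deg):
            """Move a stimulus outward (+) or inward (-) along the line joining it to
            fixation, by a fixed distance in degrees of visual angle (illustrative helper)."""
            position = np.asarray(position_deg, dtype=float)
            fixation = np.asarray(fixation_deg, dtype=float)
            spoke = position - fixation
            eccentricity = np.linalg.norm(spoke)
            return fixation + spoke * (eccentricity + shift_deg) / eccentricity

        rect_centre = (3.0, 4.0)   # hypothetical rectangle centre, 5 deg from fixation at (0, 0)
        print(shift_along_spoke(rect_centre, fixation_deg=(0.0, 0.0), shift_deg=+1.0))   # shifted outward
        print(shift_along_spoke(rect_centre, fixation_deg=(0.0, 0.0), shift_deg=-1.0))   # shifted inward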

    Dissociable roles of different types of working memory load in visual detection

    We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also being asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. a fixed order in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the color and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection.

    Flexible recruitment of cortical networks in visual and auditory attention

    Our senses, while limited, shape our perception of the world and contribute to the functional architecture of the brain. This dissertation investigates the role of sensory modality and task demands in the cortical organization of healthy human adults using functional magnetic resonance imaging (fMRI). This research provides evidence for sensory modality bias in frontal cortical regions by directly contrasting auditory and visual sustained attention. This contrast revealed two distinct visual-biased regions in lateral frontal cortex - superior and inferior precentral sulcus (sPCS, iPCS) - anatomically interleaved with two auditory-biased regions - transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic (resting-state) functional connectivity analysis demonstrated that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Unisensory (auditory or visual) short-term memory (STM) tasks assessed the flexible recruitment of these sensory-biased cortical regions by varying information domain demands (e.g., spatial, temporal). While both modalities provide spatial and temporal information, vision has greater spatial resolution than audition, and audition has excellent temporal precision relative to vision. A visual temporal, but not a spatial, STM task flexibly recruited frontal auditory-biased regions; conversely, an auditory spatial task more strongly recruited frontal visual-biased regions compared to an auditory temporal task. This flexible recruitment extended to an auditory-biased superior temporal lobe region and to a subset of visual-biased parietal regions. A demanding auditory spatial STM task recruited anterior/superior visuotopic maps (IPS2-4, SPL1) along the intraparietal sulcus, but neither spatial nor temporal auditory tasks recruited posterior/inferior maps. Finally, a comparison of visual spatial attention and STM under varied cognitive load demands attempted to further elucidate the organization of posterior parietal cortex. Parietal visuotopic maps were recruited for both visual spatial attention and working memory but demonstrated a graded response to task demands. Posterior/inferior maps (IPS0-1) demonstrated a linear relationship with the number of items attended to or remembered in the visual spatial tasks. Anterior/superior maps (IPS2-4, SPL1) demonstrated a general recruitment in visual spatial cognitive tasks, with a stronger response for visual spatial attention compared to STM.

    A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

    Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities in isolation, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching for target objects, which can operate separately or cooperate to support more structured and effective behaviors.
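
    The abilities listed above (identify the target in the visual field, estimate its spatial location, drive the arm to reach it) form a perception-to-action pipeline. The sketch below is a generic illustration of such a staged architecture with hypothetical function names and numbers; it is not the authors' implementation.

        import numpy as np

        # Illustrative three-stage reach pipeline (hypothetical names and numbers, not the
        # authors' implementation): detect the target in the image, back-project it to a
        # 3-D location in peripersonal space, then move the hand toward that location.

        def detect_target(image):
            """Stand-in detector: return the target's pixel coordinates in the camera image."""
            return np.array([360.0, 220.0])

        def estimate_position(pixel, focal_px=500.0, principal=(320.0, 240.0), depth_m=0.4):
            """Pinhole back-projection of a pixel at an assumed depth (illustrative camera model)."""
            x = (pixel[0] - principal[0]) / focal_px * depth_m
            y = (pixel[1] - principal[1]) / focal_px * depth_m
            return np.array([x, y, depth_m])

        def reach_step(hand, target, gain=0.1):
            """Toy motor stage: move the hand a fraction of the way toward the target."""
            return hand + gain * (target - hand)

        target_3d = estimate_position(detect_target(image=None))
        hand = reach_step(np.zeros(3), target_3d)
        print("estimated target position (m):", target_3d)
        print("hand position after one control step (m):", hand)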