2,717 research outputs found

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks; and (ii) the results lend further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
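    To make the radial-shift manipulation concrete, the sketch below moves each item by ±1 degree along its own imaginary spoke from fixation. This is a hypothetical Python/NumPy illustration only; the function name, the 5-degree ring eccentricity, and the random per-item shift direction are assumptions, not details published with the study.

```python
import numpy as np

def shift_along_spokes(positions, shift_deg=1.0, rng=None):
    """Shift each (x, y) position (degrees of visual angle, relative to
    central fixation) by +/- shift_deg along its own spoke, i.e. along
    the line joining fixation to the item."""
    rng = np.random.default_rng() if rng is None else rng
    positions = np.asarray(positions, dtype=float)
    eccentricity = np.linalg.norm(positions, axis=1, keepdims=True)
    unit_spokes = positions / eccentricity           # unit vector per item
    signs = rng.choice([-1.0, 1.0], size=(len(positions), 1))
    return positions + signs * shift_deg * unit_spokes

# Eight rectangles evenly spaced on a ring; the 5-degree eccentricity
# is an illustrative parameter, not taken from the study.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.column_stack([5 * np.cos(angles), 5 * np.sin(angles)])
shifted = shift_along_spokes(ring)
```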

    Top-down effects on early visual processing in humans: a predictive coding framework

    An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubt on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are exclusively determined by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding.
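    As a minimal illustration of the predictive coding idea the review invokes (a sketch of a two-level linear model, not an implementation proposed by the article): a higher level sends a prediction down, early 'V1-like' units carry the residual prediction error, and the higher-level representation is updated to reduce that error. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def predictive_coding_step(x, r, W, lr=0.1):
    """One inference step of a two-level linear predictive coding model.
    x : sensory input vector; r : higher-level representation;
    W : generative weights mapping r to a prediction of x."""
    prediction = W @ r           # top-down prediction of the input
    error = x - prediction       # 'early' activity = prediction error
    r = r + lr * (W.T @ error)   # update representation to shrink error
    return r, error

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 10)) / 10   # illustrative generative weights
x = rng.normal(size=100)              # illustrative sensory input
r = np.zeros(10)
for _ in range(50):
    r, err = predictive_coding_step(x, r, W)
# The error norm shrinks as top-down predictions come to explain the
# input, which is the sense in which early activity is 'modulated'
# by higher-level expectations in this class of model.
print(float(np.linalg.norm(err)))
```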

    The time course of cognitive control: behavioral and EEG studies


    Attention allocation during the observation of biological motion: an EEG study

    The processing of observed biological motion, that is, the movement of biological organisms, plays an important role in animals' vigilance and survival. For humans, it is also implicated in the development of social cognition and communication, with infants showing preferential attention towards biological motion from an early age. Further, adults can extract a broad range of social information from the biological motion of human figures represented by dots of light (point-light displays), which contain kinematic, structural and dynamic information. From this information, humans can identify individual actors, their sex, their emotional state (angry, happy, or sad) and their walking direction, even when the display is obfuscated by additional noise. The processing of biological motion draws on different cognitive systems such as working memory, selective attention and sensorimotor processing. Humans demonstrate an attentional bias towards human forms and biological motion compared to other, non-biological stimuli, and the observation of biological movement activates sensorimotor cortical regions. Previous research has used EEG to measure mu frequency (~8-13 Hz) changes and to infer the activation of sensorimotor regions during biological movement observation. This sensorimotor activation is thought to be an indication of online movement simulation. It has been demonstrated that top-down attentional processes modulate the engagement of sensorimotor simulation during movement observation. What remains unknown is whether biological motion exogenously captures spatial attention and, in turn, modulates sensorimotor simulation; the current study sought to explore this question. I used an attentional bias paradigm in which movement and control point-light displays were presented laterally and simultaneously as irrelevant cues. Relatively decreased reaction times to subsequent targets that appear in the same location as a cue reflect preferential processing of that preceding cue. I simultaneously recorded EEG and calculated mu frequency changes at both central and occipital electrode locations. I found decreased reaction times and an increase in correct responses to targets that replaced the scrambled point-light display (PLD), which represents non-biological motion, compared to targets that replaced the coherent PLD representing biological movement. In addition, EEG analysis revealed a left-hemisphere bias, with post hoc analysis showing that this bias was driven by the central electrodes: there was larger desynchronisation in the left central electrode than in the right, whereas occipital alpha desynchronised symmetrically. Together, the behavioural and EEG findings suggest an inhibition of return (IOR) effect.
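    As a rough sketch of how mu-band (8-13 Hz) desynchronisation of the kind described above can be quantified from epoched EEG, the snippet below computes a Pfurtscheller-style event-related desynchronisation percentage for one channel. The data shapes, time windows and sampling rate are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def mu_erd_percent(epochs, fs, baseline, window, band=(8.0, 13.0)):
    """Event-related desynchronisation in the mu band for one channel.
    epochs : (n_trials, n_samples) array; fs : sampling rate in Hz;
    baseline, window : (start, stop) times in seconds relative to epoch
    onset. Returns ERD% (negative values = desynchronisation)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=1)
    power = np.abs(hilbert(filtered, axis=1)) ** 2   # instantaneous power
    t = np.arange(epochs.shape[1]) / fs
    base = power[:, (t >= baseline[0]) & (t < baseline[1])].mean()
    test = power[:, (t >= window[0]) & (t < window[1])].mean()
    return 100.0 * (test - base) / base

# Illustrative data: 30 trials of 2 s at 250 Hz; baseline 0-0.5 s,
# cue/observation window 1-2 s (hypothetical timings).
fs = 250
trials = np.random.default_rng(1).normal(size=(30, 2 * fs))
print(mu_erd_percent(trials, fs, baseline=(0.0, 0.5), window=(1.0, 2.0)))
```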

    Interpreting EEG and MEG signal modulation in response to facial features: the influence of top-down task demands on visual processing strategies

    The visual processing of faces is a fast and efficient feat that our visual system usually accomplishes many times a day. The N170 (an event-related potential) and the M170 (an event-related magnetic field) are thought to be prominent markers of the face perception process in the ventral stream of visual processing, occurring ~170 ms after stimulus onset. Whether face processing in the time window of the N170 and M170 is driven automatically by bottom-up visual processing alone, or is also modulated by top-down control, is still debated in the literature. However, it is known from research on general visual processing that top-down control can be exerted much earlier in the visual processing stream than the point at which the N170 and M170 occur. I conducted two studies, each consisting of two face categorization tasks. To examine the influence of top-down control on the processing of faces, I changed the task demands from one task to the next while presenting the same set of face stimuli. In the first study, I recorded participants' EEG signal in response to faces while they performed both a Gender task and an Expression task on a set of expressive face stimuli. Analyses using Bubbles (Gosselin & Schyns, 2001) and Classification Image techniques revealed significant task modulations of the N170 ERPs (peaks and amplitudes) and of the peak latency of maximum information sensitivity to key facial features. However, task demands did not change the information processing during the N170 with respect to behaviourally diagnostic information; rather, the N170 seemed to integrate gender and expression diagnostic information equally in both tasks. In the second study, participants completed the same behavioural tasks as in the first study (Gender and Expression), but this time their MEG signal was recorded to allow precise source localisation. After determining the active sources during the M170 time window, a Mutual Information analysis in connection with Bubbles was used to examine voxel sensitivity to both the task-relevant and the task-irrelevant face category. When a face category was relevant for the task, sensitivity to it was usually higher and peaked in different voxels than sensitivity to the task-irrelevant face category. In addition, voxels predictive of categorization accuracy were shown to be sensitive only to task-relevant, behaviourally diagnostic facial features. I conclude that facial feature integration during both the N170 and the M170 is subject to top-down control. The results are discussed against the background of known face processing models and current research findings on visual processing.
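    The Bubbles technique relates trial-by-trial visibility of image regions to behaviour or to neural signals. Below is a minimal sketch of the mutual-information step under simplifying assumptions (binary per-pixel bubble masks and binary response accuracy); it is illustrative only and is not the analysis code used in the thesis.

```python
import numpy as np

def mutual_information(x, y):
    """Discrete mutual information (in bits) between two binary arrays."""
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            p_xy = np.mean((x == xv) & (y == yv))
            p_x, p_y = np.mean(x == xv), np.mean(y == yv)
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Illustrative data: masks is (n_trials, n_pixels) binary visibility,
# correct is (n_trials,) binary accuracy; both are simulated here.
rng = np.random.default_rng(2)
masks = rng.integers(0, 2, size=(500, 64))
correct = rng.integers(0, 2, size=500)
mi_map = np.array([mutual_information(masks[:, p], correct)
                   for p in range(masks.shape[1])])
# Pixels (or, analogously, voxels) with high MI are the ones whose
# visibility is 'diagnostic' for the categorization task.
```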