57 research outputs found
Event-related electroencephalographic lateralizations mark individual differences in spatial and nonspatial visual selection
Selective attention controls the distribution of our visual system's limited processing resources to stimuli in the visual field. Two independent parameters of visual selection can be quantified by modeling an individual's performance in a partial-report task based on the computational theory of visual attention (TVA): (i) top-down control α, the relative attentional weighting of relevant over irrelevant stimuli, and (ii) spatial bias wλ, the relative attentional weighting of stimuli in the left versus right hemifield. In this study, we found that visual event-related electroencephalographic lateralizations marked interindividual differences in these two functions. First, individuals with better top-down control showed higher amplitudes of the posterior contralateral negativity than individuals with poorer top-down control. Second, differences in spatial bias were reflected in asymmetries in earlier visual event-related lateralizations depending on the hemifield position of targets; specifically, individuals showed a positivity contralateral to targets presented in their prioritized hemifield and a negativity contralateral to targets presented in their nonprioritized hemifield. Thus, our findings demonstrate that two functionally different aspects of attentional weighting quantified in the respective TVA parameters are reflected in two different neurophysiological measures: The observer-dependent spatial bias influences selection by a bottom-up processing advantage of stimuli appearing in the prioritized hemifield. By contrast, task-related target selection governed by top-down control involves active enhancement of target, and/or suppression of distractor, processing. These results confirm basic assumptions of the TVA framework, complement the functional interpretation of event-related lateralization components in selective attention studies, and are of relevance for the development of neurocognitive attentional assessment procedures
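For reference, both parameters are conventionally estimated from the attentional weights fitted to partial-report performance in the TVA framework; the following is a minimal sketch of the usual definitions, assuming the notation w_T, w_D, w_left, and w_right for the fitted weights of target, distractor, and left- and right-hemifield stimuli (the study's exact estimation procedure is not reproduced here):

```latex
% Sketch of the conventional TVA weighting indices (notation assumed here):
% w_T, w_D        -- fitted attentional weights of target and distractor stimuli
% w_left, w_right -- summed weights of stimuli in the left and right hemifield
\[
  \alpha = \frac{w_D}{w_T},
  \qquad
  w_\lambda = \frac{w_{\mathrm{left}}}{w_{\mathrm{left}} + w_{\mathrm{right}}}
\]
% Smaller alpha values indicate stronger weighting of relevant over irrelevant
% stimuli (better top-down control); w_lambda = 0.5 indicates no spatial bias,
% with values above 0.5 marking a leftward and below 0.5 a rightward prioritization.
```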
Two independent frontal midline theta oscillations during conflict detection and adaptation in a Simon-type manual reaching task
One of the most firmly established factors determining the speed of human behavioral responses toward action-critical stimuli is the spatial correspondence between the stimulus and response locations. If both locations match, the time taken for response production is markedly reduced relative to when they mismatch, a phenomenon called the Simon effect. While there is a consensus that this stimulus-response (S-R) conflict is associated with brief (4-7 Hz) frontal midline theta (fmθ) complexes generated in medial frontal cortex, it remains controversial (1) whether there are multiple, simultaneously active theta generator areas in the medial frontal cortex that commonly give rise to conflict-related fmθ complexes; and if so, (2) whether they are all related to the resolution of conflicting task information. Here, we combined mental chronometry with high-density electroencephalographic measures during a Simon-type manual reaching task and used independent component analysis and time-frequency domain statistics on source-level activities to model fmθ sources. During target processing, our results revealed two independent fmθ generators simultaneously active in or near anterior cingulate cortex, only one of them reflecting the correspondence between current and previous S-R locations. However, this fmθ response is not exclusively linked to conflict but also to other, conflict-independent processes associated with response slowing. These results paint a detailed picture regarding the oscillatory correlates of conflict processing in Simon tasks, and challenge the prevalent notion that fmθ complexes induced by conflicting task information represent a unitary phenomenon related to cognitive control, which governs conflict processing across various types of response-override tasks
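Purely as an illustration of the analysis logic sketched above (unmixing the EEG into independent components, then quantifying theta-band activity per component), and not a reproduction of the authors' source-level pipeline, a minimal simulation-based sketch could look as follows; the sampling rate, channel count, component count, and filter settings are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 500                                    # sampling rate in Hz (assumed)
n_channels, n_samples = 64, 10 * fs         # simulated 10-s, 64-channel EEG segment
eeg = rng.standard_normal((n_samples, n_channels))  # placeholder for real data

# Unmix the sensor signal into statistically independent components.
ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(eeg)            # shape: (n_samples, n_components)

# Theta band-pass (4-7 Hz) plus Hilbert envelope -> time-resolved theta power.
b, a = butter(4, [4, 7], btype="bandpass", fs=fs)
theta = filtfilt(b, a, sources, axis=0)
theta_power = np.abs(hilbert(theta, axis=0)) ** 2

# Rank components by mean theta power as candidate frontal-midline theta generators.
print(np.argsort(theta_power.mean(axis=0))[::-1][:3])
```

In the actual study, such component activities would additionally be source-localized and compared statistically across correspondence conditions in the time-frequency domain, as described in the abstract.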
Attentional capture in visual search: capture and post-capture dynamics revealed by EEG
Sometimes, salient-but-irrelevant objects (distractors) presented concurrently with a search target cannot be ignored and attention is involuntarily allocated towards the distractor first. Several studies have provided electrophysiological evidence for involuntary misallocations of attention towards a distractor, but much less is known about the mechanisms that are needed to overcome a misallocation and re-allocate attention towards the concurrently presented target. In our study, electrophysiological markers of attentional mechanisms indicate that (i) the distractor captures attention before the target is attended, (ii) a misallocation of attention is terminated actively (instead of attention fading passively), and (iii) the misallocation of attention towards a distractor delays the attention allocation towards the target (rather than just delaying some post-attentive process involved in response selection). This provides the most complete demonstration, to date, of the chain of attentional mechanisms that are evoked when attention is misguided and recovers from capture within a search display
The control of attentional target selection in a colour/colour conjunction task
To investigate the time course of attentional object selection processes in visual search tasks where targets are defined by a combination of features from the same dimension, we measured the N2pc component as an electrophysiological marker of attentional object selection during colour/colour conjunction search. In Experiment 1, participants searched for targets defined by a combination of two colours, while ignoring distractor objects that matched only one of these colours. Reliable N2pc components were triggered by targets and also by partially matching distractors, even when these distractors were accompanied by a target in the same display. The target N2pc was initially equal in size to the sum of the two N2pc components to the two different types of partially matching distractors, and became superadditive from about 250 ms after search display onset. Experiment 2 demonstrated that the superadditivity of the target N2pc was not due to a selective disengagement of attention from task-irrelevant partially matching distractors. These results indicate that attention was initially deployed separately and in parallel to all target-matching colours, before attentional allocation processes became sensitive to the presence of both matching colours within the same object. They suggest that attention can be controlled simultaneously and independently by multiple features from the same dimension, and that feature-guided attentional selection processes operate in parallel for different target-matching objects in the visual field
Salience-based selection: attentional capture by distractors less salient than the target
Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient than the target in order to capture attention. We tested this prediction using combined empirical and modeling approaches based on the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to when it was absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated the first selection in the distractor paradigm using behavioral measures of salience and taking into account the time course of selection, including noise. We were able to replicate the result pattern obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability depending on relative salience
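The stochastic account summarized in the last sentences lends itself to a toy simulation (a sketch under assumed parameter values, not the fitted model reported in the study): each item's selection time is drawn from a noisy distribution whose mean decreases with its salience, and capture is counted on trials in which the distractor's sample happens to be the faster one.

```python
import numpy as np

rng = np.random.default_rng(1)

def selection_times(salience, n_trials, base=300.0, gain=50.0, noise_sd=40.0):
    """Toy model: mean selection time shrinks with salience, plus Gaussian noise."""
    return base - gain * salience + noise_sd * rng.standard_normal(n_trials)

n_trials = 100_000
t_target = selection_times(salience=1.2, n_trials=n_trials)      # more salient target
t_distractor = selection_times(salience=1.0, n_trials=n_trials)  # less salient distractor

# Capture probability: fraction of trials on which the distractor is selected first.
p_capture = np.mean(t_distractor < t_target)
print(f"P(capture by less salient distractor) = {p_capture:.2f}")
```

Because the two selection-time distributions overlap, the simulated capture probability stays well above zero even though the distractor is the less salient item, which is the qualitative pattern described above.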
Saliency maps for finding changes in visual scenes?
Sudden changes in the environment reliably summon attention. This rapid change detection appears to operate in a fashion similar to pop-out in visual search, the phenomenon that very salient stimuli are directly attended, independently of the number of distracting objects. Pop-out is usually explained by the workings of saliency maps, i.e., map-like representations that code for the conspicuity at each location of the visual field. While past research emphasized similarities between pop-out search and change detection, our study highlights differences between the saliency computations in the two tasks: in contrast to pop-out search, saliency computation in change detection (i) operates independently across different stimulus properties (e.g., color and orientation), and (ii) is little influenced by trial history. These deviations from pop-out search are not due to idiosyncrasies of the stimuli or task design, as evidenced by a replication of standard findings in a comparable visual-search design. To explain these results, we outline a model of change detection involving the computation of feature-difference maps, which explains the known similarities and differences with visual search
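A minimal sketch of the feature-difference-map idea outlined in the last sentence (toy map sizes, noise level, and the threshold read-out are illustrative assumptions): each feature dimension contributes its own difference map between the pre- and post-change displays, and the maps are evaluated independently rather than being summed into a single overall map.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 16, 16          # toy display resolution (assumed)
noise_sd = 0.05        # assumed sensory noise between the two frames

# Hypothetical per-dimension feature maps for the pre-change display.
pre = {"color": rng.random((h, w)), "orientation": rng.random((h, w))}

# Post-change display: same scene plus noise, with a single colour change at (4, 7).
post = {dim: m + noise_sd * rng.standard_normal((h, w)) for dim, m in pre.items()}
post["color"][4, 7] += 0.8

# One feature-difference map per dimension, evaluated independently
# (no summation into an overall master map, unlike pop-out search models).
diff_maps = {dim: np.abs(post[dim] - pre[dim]) for dim in pre}
for dim, diff in diff_maps.items():
    loc = np.unravel_index(np.argmax(diff), diff.shape)
    detected = diff[loc] > 5 * noise_sd     # crude threshold for "change detected"
    print(dim, loc, detected)               # the colour map flags (4, 7); orientation does not
```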
Stimulus saliency modulates pre-attentive processing speed in human visual cortex
The notion of a saliency-based processing architecture [1] underlying human vision is central to a number of current theories of visual selective attention [e.g., 2]. On this view, focal attention is guided by an overall-saliency map of the scene, which integrates (sums) signals from pre-attentive sensory feature-contrast computations (e.g., for color, motion, etc.). By linking the Posterior Contralateral Negativity (PCN) component to reaction time (RT) performance, we tested one specific prediction of such salience summation models: expedited shifts of focal attention to targets with low, as compared to high, target-distractor similarity. For two feature dimensions (color and orientation), we observed decreasing RTs with increasing target saliency. Importantly, this pattern was systematically mirrored by the timing, as well as the amplitude, of the PCN. This pattern demonstrates that visual saliency is a key determinant of the time it takes for focal attention to be engaged onto the target item, even when it is just a feature singleton
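The salience-summation architecture referred to above can likewise be sketched with toy maps (the map values and the linear salience-to-latency mapping are illustrative assumptions, not the model tested in the study): dimension-specific feature-contrast signals are summed into an overall-saliency map, and the time to engage focal attention on the winning location shrinks as its summed salience grows, mirroring the shorter PCN latencies and RTs for more salient targets.

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 16, 16  # toy display resolution (assumed)

# Hypothetical dimension-specific feature-contrast maps (e.g., color, orientation).
contrast = {"color": 0.1 * rng.random((h, w)), "orientation": 0.1 * rng.random((h, w))}
contrast["color"][8, 3] = 0.9   # a highly salient target (low target-distractor similarity)

# Overall-saliency map: a simple sum of the dimension-specific contrast signals.
overall = sum(contrast.values())
target_loc = np.unravel_index(np.argmax(overall), overall.shape)

# Toy salience-to-latency mapping: higher summed salience -> earlier focal-attention shift.
latency_ms = 250.0 - 100.0 * overall[target_loc]
print(target_loc, round(latency_ms, 1))
```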
Searching for targets in visual working memory: investigating a dimensional feature bundle (DFB) model
The human visual working memory (WM) system enables us to store a limited amount of task-relevant visual information temporarily in mind. One actively debated issue in cognitive neuroscience centers on the question of how this WM information is maintained. The currently dominant views advocated by prominent WM models hold that the units of memory are configured either as independent feature representations, as integrated bound objects, or as a combination of both. Here, we approached this issue by measuring lateralized brain electrical activity during a retro-cue paradigm, in order to track people's ability to access WM representations as a function of the dimensional relation between WM items and task settings. Both factors were revealed to selectively influence WM access: whereas cross-dimensional relative to intra-dimensional WM targets gave rise to enhanced contralateral delay activity (CDA) amplitudes, localization relative to identification task demands yielded speeded CDA and manual response times. As these dimension-based findings are not reconcilable with contemporary feature- and/or object-based accounts, an alternative view based on the hierarchical feature-bundle model is proposed. We argue that WM units may consist of three hierarchically structured levels of representations, with an intermediate, dimensionally organized level that mediates between top-level object and lower-level feature representations
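To make the proposed three-level hierarchy concrete, a simple data-structure sketch is given below (field names and example values are purely illustrative and not part of the model's formal specification): an object-level node groups dimension-level bundles, which in turn group individual feature values, so that dimension-based access operates on the intermediate level.

```python
# Illustrative sketch of a three-level dimensional feature bundle (DFB) hierarchy:
# object level -> dimension level -> feature level (all names/values hypothetical).
wm_item = {
    "object": "item_1",                      # top level: the bound object token
    "dimensions": {                          # intermediate level: dimensional bundles
        "colour":   {"hue": "red"},          # lower level: feature values
        "shape":    {"form": "square"},
        "location": {"hemifield": "left", "position": (3, 1)},
    },
}

# Dimension-based access (e.g., a localization task probing the location bundle):
print(wm_item["dimensions"]["location"]["hemifield"])
```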
Contralateral delay activity reveals dimension-based attentional orienting to locations in visual working memory
In research on visual working memory (WM), a contentiously debated issue concerns whether or not items are stored independently of one another in WM. Here we addressed this issue by exploring the role that the physical context surrounding a given item in the memory display plays in the formation of WM representations. In particular, we employed bilateral memory displays that contained two or three lateralized singleton items (together with six or five distractor items), defined either within the same or in different visual feature dimensions. After a variable interval, a retro-cue was presented centrally, requiring participants to discern the presence (vs. the absence) of the cued item in the previously shown memory array. Our results show that search for targets in visual WM is determined interactively by dimensional context and set size: for larger, but not smaller, set sizes, memory search slowed down when targets were defined across rather than within dimensions. This dimension-specific cost manifested in a stronger contralateral delay activity component, an established neural marker of access to WM representations. Overall, our findings provide electrophysiological evidence for the hierarchically structured nature of WM representations, and they appear inconsistent with the view that WM items are encoded in isolation
- …