
    Salience-based selection: attentional capture by distractors less salient than the target

    Current accounts of attentional capture predict that the most salient stimulus is invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient than the target in order to capture attention. We tested this prediction using empirical and modeling approaches based on the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability depending on relative salience.
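
    The modeling logic described in this abstract can be illustrated with a minimal Monte Carlo sketch (not the authors' actual model; all parameter values and the Gaussian form below are illustrative assumptions): each stimulus's selection time is drawn from a noisy distribution whose mean decreases with salience, and the distractor "captures" attention on trials where its sampled selection time precedes the target's.

```python
# Minimal sketch of salience-dependent first selection under noise.
# Assumed, illustrative model: selection times are Gaussian with a mean that
# decreases as salience increases; none of the parameters come from the paper.
import numpy as np

rng = np.random.default_rng(0)

def selection_times(salience, n, base=400.0, gain=150.0, noise_sd=60.0):
    """Sample selection times (ms): higher salience -> earlier mean selection."""
    mean = base - gain * salience          # salience expressed in [0, 1]
    return rng.normal(mean, noise_sd, n)

def capture_probability(target_salience, distractor_salience, n=100_000):
    """Probability that the distractor is selected before the target."""
    t_target = selection_times(target_salience, n)
    t_distractor = selection_times(distractor_salience, n)
    return np.mean(t_distractor < t_target)

# Because the two selection-time distributions overlap, even a distractor less
# salient than the target is selected first on a fraction of trials.
print(capture_probability(target_salience=0.8, distractor_salience=0.6))
```

    With these illustrative parameters, the less salient distractor is still selected first on roughly a third of trials, mirroring the graded, probability-based capture the abstract describes.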

    Reflexive and preparatory selection and suppression of salient information in the right and left posterior parietal cortex

    Attentional cues can trigger activity in the parietal cortex in anticipation of visual displays, and this activity may, in turn, induce changes in other areas of the visual cortex, thereby implementing attentional selection. In a recent TMS study [Mevorach, C., Humphreys, G. W., & Shalev, L. Opposite biases in salience-based selection for the left and right posterior parietal cortex. Nature Neuroscience, 9, 740-742, 2006b], it was shown that the posterior parietal cortex (PPC) can utilize the relative saliency (a nonspatial property) of a target and a distractor to bias visual selection. Furthermore, selection was lateralized, so that the right PPC is engaged when salient information must be selected and the left PPC when salient information must be ignored. However, it is not clear how the PPC implements these complementary forms of selection. Here we used online triple-pulse TMS over the right or left PPC prior to or after the onset of global/local displays. When delivered after the onset of the display, TMS to the right PPC disrupted selection of the more salient aspect of the hierarchical letter. In contrast, left PPC TMS delivered prior to the onset of the stimulus disrupted responses to the lower-saliency stimulus. These findings suggest that selection and suppression of saliency, rather than being "two sides of the same coin," are fundamentally different processes. Selection of saliency seems to operate reflexively, whereas suppression of saliency relies on a preparatory phase that "sets up" the system in order to effectively ignore saliency.

    N2pc and attentional capture by colour and orientation-singletons in pure and mixed visual search tasks

    The capture of attention by singleton stimuli in visual search is a matter of contention. Some authors propose that singletons capture attention in a bottom-up fashion if they are salient. Others propose that capture is contingent upon whether or not the stimuli share task-relevant attributes with the target. This study assessed the N2pc elicited by colour and orientation singletons in a mixed task (the singleton defined as the target changed block-to-block) and a pure task (the target was the same across the whole task). Both singletons elicited an N2pc when acting as targets; when acting as non-targets, orientation singletons elicited an N2pc only in the mixed task. The results suggest that the singletons were not salient enough to engage attention in a purely bottom-up fashion. Elicitation of the N2pc by non-targets in the mixed task should be attributed to top-down processes associated with the current task: stimuli that act as targets in some blocks do not become completely irrelevant when they appear as non-targets. This research was supported by a grant from Spain's Ministry of Education and Science (SEJ 2007-61397) at the University of Santiago de Compostela.

    The role of multisensory integration in the bottom-up and top-down control of attentional object selection

    Selective spatial attention and multisensory integration have been traditionally considered as separate domains in psychology and cognitive neuroscience. However, theoretical and methodological advancements in the last two decades have paved the way for studying different types of interactions between spatial attention and multisensory integration. In the present thesis, two types of such interactions are investigated. In the first part of the thesis, the role of audiovisual synchrony as a source of bottom-up bias in visual selection was investigated. In six out of seven experiments, a variant of the spatial cueing paradigm was used to compare attentional capture by visual and audiovisual distractors. In another experiment, single-frame search arrays were presented to investigate whether multisensory integration can bias spatial selection via salience-based mechanisms. Behavioural and electrophysiological results demonstrated that the ability of visual objects to capture attention was enhanced when they were accompanied by noninformative auditory signals. They also showed evidence for the bottom-up nature of these audiovisual enhancements of attentional capture by revealing that these enhancements occurred irrespective of the task-relevance of visual objects. In the second part of this thesis, four experiments are reported that investigated the spatial selection of audiovisual relative to visual objects and the guidance of their selection by bimodal object templates. Behavioural and ERP results demonstrated that the ability of task-irrelevant target-matching visual objects to capture attention was reduced during search for audiovisual as compared to purely visual targets, suggesting that bimodal search is guided by integrated audiovisual templates. However, the observation that unimodal target-matching visual events retained some ability to capture attention indicates that bimodal search is controlled to some extent by modality-specific representations of task-relevant information. In summary, the present thesis has contributed to our knowledge of how attention is controlled in real-life environments by demonstrating that spatial selective attention can be biased towards bimodal objects via salience-driven as well as goal-based mechanisms.

    Visual marking and facial affect: can an emotional face be ignored?

    Previewing a set of distractors allows them to be ignored in a subsequent visual search task (Watson & Humphreys, 1997). Seven experiments investigated whether this preview benefit can be obtained with emotional faces, and whether negative and positive facial expressions differ in the extent to which they can be ignored. Experiments 1–5 examined the preview benefit with neutral, negative, and positive previewed faces. These results showed that a partial preview benefit occurs with face stimuli, but that the valence of the previewed faces has little impact. Experiments 6 and 7 examined the time course of the preview benefit with valenced faces. These showed that negative faces were more difficult to ignore than positive faces, but only at short preview durations. Furthermore, a full preview benefit was not obtained with face stimuli even when the preview duration was extended up to 3 s. The findings are discussed in terms of the processes underlying the preview benefit, their ecological sensitivity, and the role of emotional valence in attentional capture and guidance.

    Distracted by your mind? Individual differences in distractibility predict mind wandering

    Attention may be distracted from its intended focus both by stimuli in the external environment and by internally generated task-unrelated thoughts during mind wandering. However, previous attention research has focused almost exclusively on distraction by external stimuli, and the extent to which mind wandering relates to external distraction is as yet unclear. In the present study, the authors examined the relationship between individual differences in mind wandering and in the magnitude of distraction by either response-competing distractors or salient response-unrelated and task-irrelevant distractors. Self-reported susceptibility to mind wandering was found to correlate positively with task-irrelevant distraction but not with response-competition interference. These results reveal mind wandering as a manifestation of susceptibility to task-irrelevant distraction and establish a laboratory measure of general susceptibility to irrelevant distraction, including both internal and external sources.

    Biasing Allocations of Attention via Selective Weighting of Saliency Signals: Behavioral and Neuroimaging Evidence for the Dimension-Weighting Account

    Objects that stand out from the environment tend to be of behavioral relevance, and the visual system is tuned to preferentially process these salient objects by allocating focused attention to them. However, attention is not just passively (bottom-up) driven by stimulus features; previous experiences and task goals exert strong biases toward attending to or actively ignoring salient objects. The core and eponymous assumption of the dimension-weighting account (DWA) is that these top-down biases are not as flexible as one would like them to be; rather, they are subject to dimensional constraints. In particular, the DWA assumes that people often cannot search for objects that have a particular feature, but only for objects that stand out from the environment (i.e., that are salient) in a particular feature dimension. We review behavioral and neuroimaging evidence for such dimensional constraints in three areas: search history, voluntary target enhancement, and distractor handling. The first two have been the focus of research on the DWA since its inception, and the last has been the subject of our more recent research. Additionally, we discuss various challenges to the DWA and its relation to other prominent theories of top-down influences in visual search.
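
    For readers unfamiliar with the dimension-weighting idea, the toy sketch below illustrates the core computation in schematic form (our illustration, not the authors' implementation; the maps, weights, and values are invented for the example): dimension-specific saliency maps are combined into a master map with a single modifiable weight per dimension, so a top-down bias can favour "whatever is salient in colour" but not a specific colour value.

```python
# Schematic sketch of dimension weighting: a weight applies to a whole feature
# dimension (e.g., colour), not to individual feature values. Values are toy data.
import numpy as np

def master_saliency(dimension_maps, weights):
    """Weighted sum of dimension-specific saliency maps, normalized by total weight."""
    total = sum(weights[d] * m for d, m in dimension_maps.items())
    return total / sum(weights[d] for d in dimension_maps)

# Toy 1-D "display": feature contrast at each of 5 locations, per dimension.
dimension_maps = {
    "colour":      np.array([0.1, 0.9, 0.1, 0.1, 0.1]),   # colour singleton at index 1
    "orientation": np.array([0.1, 0.1, 0.1, 0.8, 0.1]),   # orientation singleton at index 3
}

# Up-weighting the colour dimension biases selection toward the colour singleton.
weights = {"colour": 1.5, "orientation": 0.5}
print(np.argmax(master_saliency(dimension_maps, weights)))  # -> 1 (colour singleton wins)
```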