8 research outputs found

    Salience-based selection: attentional capture by distractors less salient than the target

    Current accounts of attentional capture predict that the most salient stimulus is invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (rather than the most salient) stimulus in the field. Yet capture by less salient distractors has not been reported, and salience-based selection accounts claim that a distractor must be more salient than the target in order to capture attention. We tested this prediction with combined empirical and modeling approaches based on the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection, including noise. We were able to replicate the result pattern obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability depending on relative salience.
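    The core of the modeling argument can be illustrated with a minimal Monte Carlo sketch. This is not the authors' model: the inverse mapping from salience to mean selection time, the Gaussian trial noise, and all parameter values below are illustrative assumptions. The sketch only shows the qualitative claim that overlapping selection-time distributions make capture by a less salient distractor occur with some nonzero probability.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def selection_times(salience, n=100_000, base=300.0, noise_sd=40.0):
        # Hypothetical mapping: higher salience -> earlier mean selection
        # time, with Gaussian noise added on each simulated trial.
        mean = base / salience
        return rng.normal(mean, noise_sd, size=n)

    def capture_probability(target_salience, distractor_salience):
        # The distractor "captures" attention on trials where its noisy
        # selection time beats the target's.
        t = selection_times(target_salience)
        d = selection_times(distractor_salience)
        return float(np.mean(d < t))

    # Even a distractor LESS salient than the target wins on some trials,
    # because the two selection-time distributions overlap.
    p = capture_probability(target_salience=2.0, distractor_salience=1.5)
    ```

    With these illustrative parameters the less salient distractor is still selected first on a minority of trials; shrinking the noise or widening the salience gap drives that probability toward zero, which mirrors the paper's conclusion that capture depends probabilistically on relative salience.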

    Dissociating the effects of similarity, salience, and top-down processes in search for linearly separable size targets.

    In two experiments, we explored the role of foreknowledge in visual search for targets defined along the size continuum. Targets were of large, medium, or small size and of high or low similarity relative to the distractors. In Experiment 1, we compared search for known and unknown singleton feature targets as a function of their size and similarity to the distractors. When distractor similarity was high, target foreknowledge benefited targets at the ends of the size continuum (i.e., large and small) relatively more than targets of medium size. In Experiment 2, participants were given foreknowledge of what the target was not. The beneficial effect of foreknowledge for endpoint targets was reduced. The data indicate a role for top-down templates in search, which can be "tuned" more effectively for targets at the ends of feature dimensions.

    Simply shapely: relative, not absolute shapes are primed in pop-out search

    Visual search is typically faster when the target from the previous trial is repeated than when it changes. This priming effect is commonly attributed to a selection bias for the target feature value, or against the nontarget feature value, that carries over to the next trial. By contrast, according to a relational account, what is primed in visual search is the target-nontarget relationship, namely the feature that the target has in relation to the features in the nontarget context (e.g., larger, darker, redder), and switch costs occur only when the target-nontarget relations reverse across trials. Here, the relational account was tested against current feature-based views in three eye movement experiments that used different shape search tasks (e.g., geometrical figures varying in the number of corners). For all tested shapes, reversing the target-nontarget relationships produced switch costs of the same magnitude as directly switching the target and nontarget features across trials ("full switch"). In particular, changing only the nontargets produced large switch costs, even when the target feature was always repeated across trials. By contrast, no switch costs were observed when both the target and nontarget features changed such that the coarse target-nontarget relations remained constant across trials. These results support the relational account over feature-based accounts of priming and indicate that a target's shape can be encoded relative to the shapes in the nontarget context.

    Contingent capture in cueing: the role of color search templates and cue-target color relations

    Visual search studies have shown that attention can be top-down biased to a specific target color, so that only items with this color, or a similar color, can capture attention. According to some theories of attention, colors from different categories (i.e., red, green, blue, yellow) are represented independently. Other accounts, however, have proposed that these categories are related, either because color is filtered through broad overlapping channels (4-channel view) or because colors are represented in one continuous feature space (e.g., CIE space) and search is governed by specific principles (e.g., linear separability between colors, or top-down tuning to relative colors). The present study tested these different views using a cueing experiment (N = 96) in which observers had to select one target color (e.g., red) and ignore two or four differently colored distractors that were presented prior to the target (cues). The results showed clear evidence for top-down contingent capture by colors, as a target-colored cue captured attention more strongly than differently colored cues. However, the results failed to support any of the proposed views on how different color categories are related to one another, whether by overlapping channels, linear separability, or relational guidance.