57 research outputs found

    Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression

    Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To address this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was presented either at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location are equivalent to those following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.

    Feature-based interference from unattended visual field during attentional tracking in younger and older adults

    The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields, as well as motion speed and the number of to-be-tracked objects, were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow-speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.

    Ensemble perception of faces within the focus of attention is biased towards unattended and task-irrelevant faces


    Target-distractor similarity predicts visual search efficiency but only for highly similar features

    A major constraining factor for attentional selection is the similarity between targets and distractors. When similarity is low, target items can be identified quickly and efficiently, while high similarity can incur large costs on processing speed. Models of visual search contrast a fast, efficient parallel stage with a slow serial processing stage in which search times are strongly modulated by the number of distractors in the display. In particular, recent work has argued that the magnitude of search slopes should be inversely proportional to target-distractor similarity. Here, we assessed the relationship between target-distractor similarity and search slopes. In our visual search tasks, participants detected an oddball color target among distractors (Experiments 1 & 2) or discriminated the direction of a triangle in the oddball color (Experiment 3). We systematically varied the similarity between target and distractor colors (along a circular CIELab color wheel) and the number of distractors in the search array, finding that search times increased logarithmically with the number of items in the array, with slopes inversely proportional to target-distractor similarity. Surprisingly, we also found that searches were highly efficient (i.e., near-zero slopes) for targets and distractors that were extremely similar (≤20° in color space). These findings indicate that visual search is systematically influenced by target-distractor similarity across different processing stages. Importantly, we found that search can be highly efficient and entirely unaffected by the number of distractors despite high perceptual similarity, in contrast to the general assumption that high similarity must lead to slow and serial search behavior.
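    The quantities in this abstract can be sketched as a toy model: reaction time grows logarithmically with set size, the log-slope shrinks as the circular color-wheel distance between target and distractor hues grows, and searches with extremely similar colors (≤20°) are treated as flat and efficient. All parameter values and function names below are illustrative assumptions, not taken from the paper.

    ```python
    import math

    def circular_color_distance(a_deg, b_deg):
        """Shortest angular distance between two hues on a 360-degree color wheel."""
        d = abs(a_deg - b_deg) % 360
        return min(d, 360 - d)

    def predicted_rt(n_items, distance_deg, base_rt=500.0, k=2000.0):
        """Toy logarithmic search model: RT = base + slope * log2(N).

        The log-slope is inversely proportional to target-distractor
        distance in color space; distances of 20 degrees or less yield a
        near-zero slope (efficient search). base_rt and k are arbitrary
        illustrative parameters in milliseconds.
        """
        if distance_deg <= 20:
            slope = 0.0  # extremely similar colors: efficient, flat search
        else:
            slope = k / distance_deg  # slope shrinks as similarity decreases
        return base_rt + slope * math.log2(n_items)
    ```

    Under this sketch, larger displays cost more time only at intermediate similarity: `predicted_rt(8, 40)` exceeds `predicted_rt(8, 120)`, while `predicted_rt(8, 10)` stays at the base rate.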

    Feature-based attention is not confined by object boundaries: spatially global enhancement of irrelevant features

    Theories of visual attention differ in what they define as the core unit of selection. Feature-based theories emphasize the importance of visual features (e.g., color, size, motion), demonstrated through enhancement of attended features across the visual field, while object-based theories propose that attention enhances all features belonging to the same object. Here we test how within-object enhancement of features interacts with spatially global effects of feature-based attention. Participants attended a set of colored dots (moving coherently upwards or downwards) to detect brief luminance decreases, while simultaneously detecting speed changes in another set of dots in the opposite visual field. Participants had higher speed-detection rates for the dot array that matched the motion direction of the attended color array, although motion direction was entirely task-irrelevant. This effect persisted even when it was detrimental to task performance. Overall, these results indicate that task-irrelevant object features are enhanced globally, spreading across object boundaries.

    Feature similarity is non-linearly related to attentional selection


    Efficient tuning of attention to narrow and broad ranges of task-relevant feature values

    The accepted manuscript is published in Visual Cognition: https://doi.org/10.1080/13506285.2023.219299

    Representational structures as a unifying framework for attention

    Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to also assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.

    The role of meaning in visual working memory: Real-world objects, but not simple features, benefit from deeper processing

    Visual working memory is a capacity-limited cognitive system used to actively store and manipulate visual information. Visual working memory capacity is not fixed, but varies by stimulus type: stimuli that are more meaningful are better remembered. In the current work, we investigate which conditions lead to the strongest benefits for meaningful stimuli. We propose that in some situations participants may try to encode the entire display holistically (i.e., in a quick ‘snapshot’), treating objects simply as meaningless colored ‘blobs’ rather than processing them individually and in a high-level way, which could reduce benefits for meaningful stimuli. In a series of experiments, we directly test whether real-world objects, colors, perceptually matched less-meaningful objects, and fully scrambled objects benefit from deeper processing. We systematically vary the presentation format of stimuli at encoding: items are presented either simultaneously, encouraging a parallel, take-a-quick-snapshot strategy, or sequentially, promoting a serial, item-by-item strategy. We find large advantages for meaningful objects in all conditions, but also find that real-world objects, and to a lesser degree lightly scrambled but still meaningful versions of those objects, benefit from sequential encoding and thus from deeper processing focused on individual items, while colors do not. Our results suggest that single-feature objects may be an outlier in their affordance of parallel, quick processing, and that in more realistic memory situations, visual working memory likely relies on representations resulting from in-depth processing of objects (e.g., in higher-level visual areas) rather than solely on their low-level features.