    Activation of new attentional templates for real-world objects in visual search

    Visual search is controlled by representations of target objects (attentional templates). Such templates are often activated in response to verbal descriptions of search targets, but it is unclear whether search can be guided effectively by such verbal cues. We measured ERPs to track the activation of attentional templates for new target objects defined by word cues. On each trial run, a word cue was followed by three search displays that contained the cued target object among three distractors. Targets were detected more slowly in the first display of each trial run, and the N2pc component (an ERP marker of attentional target selection) was attenuated and delayed for the first relative to the two subsequent presentations of a particular target object, demonstrating limitations in the ability of word cues to activate effective attentional templates. N2pc components to target objects in the first display were strongly affected by differences in object imageability (i.e., the ability of word cues to activate a target-matching visual representation). These differences were no longer present for the second presentation of the same target objects, indicating that a single perceptual encounter is sufficient to activate a precise attentional template. Our results demonstrate the superiority of visual over verbal target specifications in the control of visual search, highlight the fact that verbal descriptions are more effective for some objects than others, and suggest that the attentional templates that guide search for particular real-world target objects are analog visual representations.

    The role of color in search templates for real-world target objects

    During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the event-related potential (ERP) as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., “apple”) was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, while selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays contained either objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster and target N2pc components emerged earlier for the 2nd and 3rd display of each trial run relative to the 1st display, demonstrating that picture cues are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the 2nd and 3rd display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of non-colored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known, but seems to be less important when search targets are specified by word cues.

    The neural basis of attentional control in visual search

    How do we localise and identify target objects among distractors in visual scenes? The role of selective attention in visual search has been studied for decades, and the outlines of a general processing model are now beginning to emerge. Attentional processes unfold in real time, and this review describes four temporally and functionally dissociable stages of attention in visual search (preparation, guidance, selection, and identification). Insights from neuroscientific studies of visual attention suggest that our ability to find target objects in visual search is based on processes that operate at each of these four stages, in close association with working memory and recurrent feedback mechanisms.

    Object-based target templates guide attention during visual search

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (SPCN component) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms post-stimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time.

    Constructing the Search Template: Episodic and Semantic Influences on Categorical Template Formation

    Search efficiency is usually improved by presenting observers with highly detailed target cues (e.g., pictures). However, in the absence of accurate target cues, observers must rely only on categorical information to find targets. Models of visual search suggest that guidance in a categorical search results from matching categorically diagnostic target features in the search display to a top-down attentional set (i.e., the search template), but the mechanisms by which such an attentional set is constructed have not been specified. The present investigation examined the influences of both semantic and episodic memory on search template formation. More precisely, the present study tested whether observers incorporated a recent experience with a target-category exemplar into their search template, instead of relying on long-term learned regularities about object categories (Experiment 1) or on the semantic context of the search display (Experiment 2). In both experiments, participants completed a categorical search task (75% of trials) in conjunction with a dot-probe response task (25% of trials). The dot-probe response task assessed the contents of the search template by capturing spatial attention if the dot-probe was presented at an inconsistent location relative to objects matching the search template. Experiment 1 showed that observers include recently encoded objects in their search templates when given the opportunity to do so. Experiment 2, however, showed that observers rely on context semantics to construct categorical search templates, and that they continue to do so in the presence of repeated target cues related to different contexts. These results suggest that observers can, and will, rely on episodic representations to construct categorical search templates when such representations are available, but only if no external cues (i.e., scene semantics) are present to identify criterial target features.

    The guidance of spatial attention during visual search for colour combinations and colour configurations

    Representations of target-defining features (attentional templates) guide the selection of target objects in visual search. We used behavioural and electrophysiological measures to investigate how such search templates control the allocation of attention in search tasks where targets are defined by the combination of two colours or by a specific spatial configuration of these colours. Target displays were preceded by spatially uninformative cue displays that contained items in one or both target-defining colours. Experiments 1 and 2 demonstrated that, during search for colour combinations, attention is initially allocated independently and in parallel to all objects with target-matching colours, but is then rapidly withdrawn from objects that have only one of the two target colours. In Experiment 3, targets were defined by a particular spatial configuration of two colours, and could be accompanied by nontarget objects with a different configuration of the same colours. Attentional guidance processes were unable to distinguish between these two types of objects. Both attracted attention equally when they appeared in a cue display, and both received parallel focal-attentional processing and were encoded into working memory when they were presented in the same target display. Results demonstrate that attention can be guided simultaneously by multiple features from the same dimension, but that these guidance processes have no access to the spatial-configural properties of target objects. They suggest that attentional templates do not represent target objects in an integrated pictorial fashion, but instead contain separate representations of target-defining features.

    The role of multisensory integration in the bottom-up and top-down control of attentional object selection

    Selective spatial attention and multisensory integration have traditionally been considered separate domains in psychology and cognitive neuroscience. However, theoretical and methodological advancements in the last two decades have paved the way for studying different types of interactions between spatial attention and multisensory integration. In the present thesis, two types of such interactions are investigated. In the first part of the thesis, the role of audiovisual synchrony as a source of bottom-up bias in visual selection was investigated. In six out of seven experiments, a variant of the spatial cueing paradigm was used to compare attentional capture by visual and audiovisual distractors. In another experiment, single-frame search arrays were presented to investigate whether multisensory integration can bias spatial selection via salience-based mechanisms. Behavioural and electrophysiological results demonstrated that the ability of visual objects to capture attention was enhanced when they were accompanied by noninformative auditory signals. They also provided evidence for the bottom-up nature of these audiovisual enhancements of attentional capture by revealing that the enhancements occurred irrespective of the task-relevance of visual objects. In the second part of this thesis, four experiments are reported that investigated the spatial selection of audiovisual relative to visual objects and the guidance of their selection by bimodal object templates. Behavioural and ERP results demonstrated that the ability of task-irrelevant target-matching visual objects to capture attention was reduced during search for audiovisual as compared to purely visual targets, suggesting that bimodal search is guided by integrated audiovisual templates. However, the observation that unimodal target-matching visual events retained some ability to capture attention indicates that bimodal search is controlled to some extent by modality-specific representations of task-relevant information. In summary, the present thesis has contributed to our knowledge of how attention is controlled in real-life environments by demonstrating that spatial selective attention can be biased towards bimodal objects via salience-driven as well as goal-based mechanisms.