Activation of new attentional templates for real-world objects in visual search
Visual search is controlled by representations of target objects (attentional templates). Such templates are often activated in response to verbal descriptions of search targets, but it is unclear whether search can be guided effectively by such verbal cues. We measured ERPs to track the activation of attentional templates for new target objects defined by word cues. On each trial run, a word cue was followed by three search displays that contained the cued target object among three distractors. Targets were detected more slowly in the first display of each trial run, and the N2pc component (an ERP marker of attentional target selection) was attenuated and delayed for the first relative to the two successive presentations of a particular target object, demonstrating limitations in the ability of word cues to activate effective attentional templates. N2pc components to target objects in the first display were strongly affected by differences in object imageability (i.e., the ability of word cues to activate a target-matching visual representation). These differences were no longer present for the second presentation of the same target objects, indicating that a single perceptual encounter is sufficient to activate a precise attentional template. Our results demonstrate the superiority of visual over verbal target specifications in the control of visual search, highlight the fact that verbal descriptions are more effective for some objects than others, and suggest that the attentional templates that guide search for particular real-world target objects are analog visual representations.
The role of color in search templates for real-world target objects
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the event-related potential (ERP) as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., “apple”) was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, while selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster and target N2pc components emerged earlier for the 2nd and 3rd display of each trial run relative to the 1st display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the 2nd and 3rd display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of non-colored targets. 
Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known, but seems to be less important when search targets are specified by word cues.
The neural basis of attentional control in visual search
How do we localise and identify target objects among distractors in visual scenes? The role of selective attention in visual search has been studied for decades, and the outlines of a general processing model are now beginning to emerge. Attentional processes unfold in real time, and this review describes four temporally and functionally dissociable stages of attention in visual search (preparation, guidance, selection, and identification). Insights from neuroscientific studies of visual attention suggest that our ability to find target objects in visual search is based on processes that operate at each of these four stages, in close association with working memory and recurrent feedback mechanisms.
Object-based target templates guide attention during visual search
During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms post-stimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time.
Constructing the Search Template: Episodic and Semantic Influences on Categorical Template Formation
Search efficiency is usually improved by presenting observers with highly detailed target cues (e.g., pictures). However, in the absence of accurate target cues, observers must rely only on categorical information to find targets. Models of visual search suggest that guidance in categorical search results from matching categorically-diagnostic target features in the search display to a top-down attentional set (i.e., the search template), but the mechanisms by which such an attentional set is constructed have not been specified. The present investigation examined the influences of both semantic and episodic memory on search template formation. More precisely, the present study tested whether observers incorporated a recent experience with a target-category exemplar into their search template, instead of relying on long-term learned regularities about object categories (Experiment 1) or on the semantic context of the search display (Experiment 2). In both experiments participants completed a categorical search task (75% of trials) in conjunction with a dot-probe response task (25% of trials). The dot-probe response task assessed the contents of the search template by capturing spatial attention if the dot-probe was presented at an inconsistent location relative to objects matching the search template. In Experiment 1 it was shown that observers include recently encoded objects into their search templates when given the opportunity to do so. Experiment 2, however, showed that observers rely on context semantics to construct categorical search templates, and they continue to do so in the presence of repeated target cues related to different contexts. These results suggest that observers can, and will, rely on episodic representations to construct categorical search templates when such representations are available, but only if no external cues (i.e., scene semantics) are present to identify criterial target features.
The guidance of spatial attention during visual search for colour combinations and colour configurations
Representations of target-defining features (attentional templates) guide the selection of target objects in visual search. We used behavioural and electrophysiological measures to investigate how such search templates control the allocation of attention in search tasks where targets are defined by the combination of two colours or by a specific spatial configuration of these colours. Target displays were preceded by spatially uninformative cue displays that contained items in one or both target-defining colours. Experiments 1 and 2 demonstrated that, during search for colour combinations, attention is initially allocated independently and in parallel to all objects with target-matching colours, but is then rapidly withdrawn from objects that only have one of the two target colours. In Experiment 3, targets were defined by a particular spatial configuration of two colours, and could be accompanied by nontarget objects with a different configuration of the same colours. Attentional guidance processes were unable to distinguish between these two types of objects. Both attracted attention equally when they appeared in a cue display, and both received parallel focal-attentional processing and were encoded into working memory when they were presented in the same target display. Results demonstrate that attention can be guided simultaneously by multiple features from the same dimension, but that these guidance processes have no access to the spatial-configural properties of target objects. They suggest that attentional templates do not represent target objects in an integrated pictorial fashion, but contain separate representations of target-defining features.
I expect, therefore I see: individual differences in visual awareness
Predictive processing theories posit that awareness of the visual world emerges as the brain engages in predictive inference about the causes of its sensory input. At each level of the processing hierarchy top-down predictions are corrected by bottom-up sensory prediction error to form behaviourally optimal inferences about the state of the visual world. Research suggests there may be individual differences in predictive processing mechanisms such that some individuals are more reliant on prior knowledge, whereas others assign more weight to sensory evidence. Predictive processing biases are thought to manifest in a range of typical and atypical perceptual experiences including proneness to perceptual illusions, sensory sensitivity in autism, and hallucinations in psychosis. The overarching aim of this thesis was to investigate whether in the general population predictive processing biases predict individual differences in visual awareness. Change blindness was selected as the central paradigm of investigation, as it can be conceptualised as a failure to incorporate a novel change into the current prediction about the state of the visual world.
The empirical work in Chapter 2 aimed to characterise individual differences in visual change detection using naturalistic scenes and to identify the perceptual and cognitive measures that predict noticing ability. There were reliable individual differences in change detection that generalised to ecologically valid displays. The ability to notice visual changes was predicted by the strength and stability of perceptual predictions, as measured by the accuracy of visual short-term memory and attentional control in the face of distractors.
In Chapter 3 I used voxel-based morphometry to investigate whether inter-individual variability in brain structure predicts individual differences in visual awareness. The latter was assessed by the change blindness task as well as its strongest predictor measures (visual short-term memory, attentional capture, and perceptual rivalry). Regions of interest (ROIs) were selected in the parietal and visual cortices based on previous evidence that these areas are causally involved in the awareness of visual stimuli. This study aimed to discover whether the average grey matter density in the ROIs predicts susceptibility to change blindness. The ROI-based analyses revealed that the average grey matter density in left posterior parietal cortex predicted visual short-term memory accuracy, but none of the other hypothesised relationships were significant.
Chapter 4 aimed to measure individual differences in the reliance on prior knowledge by employing the Mooney face detection task. In this task participants disambiguated faces in two-tone degraded images before and after the presentation of the original versions of the images. Better change detection was predicted by Mooney face detection without any prior knowledge of the images, a measure of ‘perceptual closure’, or the ability to generate a gestalt of a scene. The attention-to-detail subscale of the autism spectrum also predicted superior change detection. Reliance on prior knowledge in visual perception (assessed by improvement in Mooney face detection after seeing original images) did not consistently predict atypical perceptual experiences associated with the autism spectrum or schizotypy.
Chapter 5 was an investigation into, firstly, whether there is a general predictive processing bias, which manifests across different methods of inducing prior knowledge, or whether such a bias is paradigm-specific and, secondly, whether reliance on priors predicts perceptual experiences and traits. All prior manipulations in this study led to an increased tendency to see the expected stimulus in a binocular rivalry display, except adaptation, which led to a suppression of visual awareness. Attentional control, perceptual priming, expectancy, and imagery loaded onto a common factor, suggesting that the strength of selective attention is closely linked with the facilitatory effect of expectation. The strength of adaptation predicted superior change detection and perceptual priming predicted the propensity to experience perceptual illusions.
Taken together, these findings suggest that there are reliable individual differences in visual change detection, and these are predicted by the strength of visual short-term memory representations, attentional control, perceptual closure ability, as well as the strength of low-level adaptation. Possessing expectations facilitates the entry of the corresponding percept into awareness, irrespective of the method of prior induction. The facilitatory effect that priors exert on visual awareness across different methods is closely linked with the ability to exert attentional control. This suggests that the effects of expectations on awareness may be attentional. However, predictive processing biases were method-specific in that a facilitatory effect obtained with one prior-induction method did not necessarily predict the magnitude of the effect obtained with a different method. Some prior effects (e.g., perceptual priming, imagery, and adaptation) yielded correlations with perceptual experiences and traits in the general population. As the research in this thesis is correlational, future studies will need to delineate the effects of expectation, attention, and adaptation on visual awareness and explore the neural representations of these mechanisms.
Representational dynamics across multiple timescales in human cortical networks
Human cognition occurs at multiple timescales, including immediate processing of ongoing experiences and slowly drifting higher-level thoughts. To understand how the brain selects and represents these various types of information to guide behavior, this thesis examined representational content within sensory regions, the multiple demand (MD) network, and the default mode network (DMN). Chapter 1 provides a background review of the current literature. It begins by reviewing experimental investigations of component visual processes that unfold over time. Next, the MD network is introduced as a collection of frontal and parietal regions involved in implementing cognitive control by assembling the required operations for task-relevant behavior. Finally, the DMN is introduced in the context of temporal processing hierarchies, with a focus on its representation of situation models summarizing interactions among entities and the environment. The first experiment, presented in Chapter 2, used EEG/MEG to track multiple component processes of selective attention. Five distinct processing operations with different time-courses were quantified, including representation of visual display properties, target location, target identity, behavioral significance, and finally, possible reactivation of the attentional template. Chapter 3 used fMRI to examine neural representations of task episodes, which are temporally organized sequences of steps that occur within a given context. It was found that MD and visual regions showed sensitivity to the fine structure of the contents within a task. DMN regions showed gradual change throughout the entire task, with increased activation at the offset of the entire episode. Chapter 4 analyzed activation profiles of DMN regions using six diverse tasks to examine their functional convergence during social, episodic, and self-referential thought. Results supported proposals of separate subsystems, yet also suggest integration within the DMN.
The final chapter, Chapter 5, provides an extended discussion of theoretical concepts related to the three experiments and proposes possible avenues for further research.
The role of multisensory integration in the bottom-up and top-down control of attentional object selection
Selective spatial attention and multisensory integration have been traditionally considered as separate domains in psychology and cognitive neuroscience. However, theoretical and methodological advancements in the last two decades have paved the way for studying different types of interactions between spatial attention and multisensory integration. In the present thesis, two types of such interactions are investigated.
In the first part of the thesis, the role of audiovisual synchrony as a source of bottom-up bias in visual selection was investigated. In six out of seven experiments, a variant of the spatial cueing paradigm was used to compare attentional capture by visual and audiovisual distractors. In another experiment, single-frame search arrays were presented to investigate whether multisensory integration can bias spatial selection via salience-based mechanisms. Behavioural and electrophysiological results demonstrated that the ability of visual objects to capture attention was enhanced when they were accompanied by noninformative auditory signals. They also showed evidence for the bottom-up nature of these audiovisual enhancements of attentional capture by revealing that these enhancements occurred irrespective of the task-relevance of visual objects.
In the second part of this thesis, four experiments are reported that investigated the spatial selection of audiovisual relative to visual objects and the guidance of their selection by bimodal object templates. Behavioural and ERP results demonstrated that the ability of task-irrelevant target-matching visual objects to capture attention was reduced during search for audiovisual as compared to purely visual targets, suggesting that bimodal search is guided by integrated audiovisual templates. However, the observation that unimodal target-matching visual events retained some ability to capture attention indicates that bimodal search is controlled to some extent by modality-specific representations of task-relevant information.
In summary, the present thesis has contributed to our knowledge of how attention is controlled in real-life environments by demonstrating that spatial selective attention can be biased towards bimodal objects via salience-driven as well as goal-based mechanisms.