
    Baseline Shifts do not Predict Attentional Modulation of Target Processing During Feature-Based Visual Attention

    Cues that direct selective attention to a spatial location have been observed to increase baseline neural activity in visual areas that represent a to-be-attended stimulus location. Analogous attention-related baseline shifts have also been observed in response to attention-directing cues for non-spatial stimulus features. It has been proposed that baseline shifts with preparatory attention may serve as the mechanism by which attention modulates the responses to subsequent visual targets that match the attended location or feature. Using functional MRI, we localized color- and motion-sensitive visual areas in individual subjects and investigated the relationship between cue-induced baseline shifts and the subsequent attentional modulation of task-relevant target stimuli. Although attention-directing cues often led to increased background neural activity in feature-specific visual areas, these increases were not correlated with either behavior in the task or subsequent attentional modulation of the visual targets. These findings cast doubt on the hypothesis that attention-related shifts in baseline neural activity result in selective sensory processing of visual targets during feature-based selective attention.
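
    The key test in this abstract is a simple across-subject relationship: does the size of the cue-evoked baseline shift in a feature-selective region predict the size of the attentional modulation of the target response? Below is a minimal, hypothetical sketch of such a correlation test; the variable names and synthetic values are assumptions for illustration, not the authors' data or code.

```python
# Minimal sketch (not the authors' code) of the kind of across-subject test used:
# does the cue-evoked baseline shift in a feature-selective ROI predict how much
# attention modulates the response to the subsequent target? All names and the
# synthetic data below are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 20

# Per-subject ROI estimates (e.g., GLM beta weights), in arbitrary units:
# cue period, attend-feature vs. neutral cue -> baseline shift
baseline_shift = rng.normal(0.3, 0.2, n_subjects)
# target period, attended vs. unattended feature -> attentional modulation
target_modulation = rng.normal(0.5, 0.25, n_subjects)

r, p = pearsonr(baseline_shift, target_modulation)
print(f"baseline shift vs. target modulation: r = {r:.2f}, p = {p:.3f}")
# The paper's conclusion corresponds to r being indistinguishable from zero.
```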

    Beliefs about the Minds of Others Influence How We Process Sensory Information

    Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from the operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy, in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing.
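
    The central measure here is the P1 validity effect, i.e., the difference in early visual response amplitude between validly and invalidly cued targets, computed separately for each belief condition. The sketch below shows one plausible way to quantify that effect from epoched EEG; the time window, channel selection, and synthetic data are assumptions, not the study's pipeline.

```python
# Illustrative sketch of a P1 "sensory gain" measure: mean amplitude in an early
# post-target window, compared for validly vs. invalidly cued targets within one
# belief condition. Window, channels, and data are assumptions.
import numpy as np

def p1_amplitude(epochs, times, window=(0.08, 0.13)):
    """Per-trial mean amplitude in the P1 time window, averaged over channels.

    epochs: array (n_trials, n_channels, n_times); times: array (n_times,) in s.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, :, mask].mean(axis=(1, 2))

rng = np.random.default_rng(1)
times = np.linspace(-0.2, 0.5, 351)
# Synthetic occipital epochs for one belief condition (e.g., "mind" instruction);
# a validity boost is built in purely for demonstration.
valid = rng.normal(0, 1, (100, 4, times.size)) + 1.5
invalid = rng.normal(0, 1, (100, 4, times.size)) + 1.0

validity_effect = p1_amplitude(valid, times).mean() - p1_amplitude(invalid, times).mean()
print(f"P1 validity effect (valid - invalid): {validity_effect:.2f} (arbitrary units)")
# In the study, this difference was reliable only when the gaze was believed
# to be controlled by a human, not by a machine.
```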

    Express Attentional Re-Engagement but Delayed Entry into Consciousness Following Invalid Spatial Cues in Visual Search

    Background: In predictive spatial cueing studies, reaction times (RT) are shorter for targets appearing at cued locations (valid trials) than at other locations (invalid trials). An increase in the amplitude of early P1 and/or N1 event-related potential (ERP) components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects. Methodology/Principal Findings: Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN), to evaluate whether visual selection (as indexed by the N2pc) and visual short-term memory processes (as indexed by the SPCN) are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to the target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated with the RT effect. Conclusions/Significance: Results show that latency differences that could explain the RT cueing effects must occur after the visual selection processes giving rise to the N2pc, but at or before transfer into visual short-term memory, as reflected by the SPCN.
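
    The N2pc and SPCN are lateralized components, so they are obtained from contralateral-minus-ipsilateral difference waves, and the question of interest is whether their onset latency differs between valid and invalid trials. The following sketch shows a generic way to compute such a difference wave and a simple 50%-of-peak onset estimate; the channel choices, search window, and criterion are assumptions rather than the authors' exact method (which could equally use, e.g., jackknife-based latency estimation).

```python
# Sketch of how lateralized components such as the N2pc/SPCN are typically
# quantified: a contralateral-minus-ipsilateral difference wave, plus a simple
# onset-latency estimate (50%-of-peak criterion). Details are assumptions, not
# the authors' exact pipeline.
import numpy as np

def contra_ipsi(erp_left_target, erp_right_target, left_chan, right_chan):
    """Average contra-minus-ipsi difference wave from two lateral channels.

    erp_*: arrays (n_channels, n_times) of condition-average ERPs.
    left_chan/right_chan: channel indices over left/right posterior scalp.
    """
    contra = (erp_left_target[right_chan] + erp_right_target[left_chan]) / 2
    ipsi = (erp_left_target[left_chan] + erp_right_target[right_chan]) / 2
    return contra - ipsi

def onset_latency(diff_wave, times, search=(0.15, 0.35), frac=0.5):
    """Time at which the (negative-going) difference wave first reaches frac * peak."""
    mask = (times >= search[0]) & (times <= search[1])
    seg, t = diff_wave[mask], times[mask]
    peak = seg.min()                      # N2pc/SPCN are negative-going
    idx = np.argmax(seg <= frac * peak)   # first sample crossing the criterion
    return t[idx]

# Usage idea: compute onset_latency separately for valid- and invalid-trial
# difference waves and compare the two estimates across participants.
```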

    Pre-Stimulus Activity Predicts the Winner of Top-Down vs. Bottom-Up Attentional Selection

    Our ability to process visual information is fundamentally limited. This leads to competition between sensory information that is relevant for top-down goals and sensory information that is perceptually salient, but task-irrelevant. The aim of the present study was to identify, from EEG recordings, pre-stimulus and pre-saccadic neural activity that could predict whether top-down or bottom-up processes would win the competition for attention on a trial-by-trial basis. We employed a visual search paradigm in which a lateralized low-contrast target appeared alone, or with a low-contrast (i.e., non-salient) or high-contrast (i.e., salient) distractor. Trials with a salient distractor were of primary interest due to the strong competition between top-down knowledge and bottom-up attentional capture. Our results demonstrated that 1) in the 1-s pre-stimulus interval, frontal alpha (8–12 Hz) activity was higher on trials where the salient distractor captured attention and drew the first saccade (bottom-up win); and 2) there was a transient pre-saccadic increase in posterior-parietal alpha (7–8 Hz) activity on trials where the first saccade went to the target (top-down win). We propose that the high frontal alpha reflects a disengagement of attentional control, whereas the transient posterior alpha time-locked to the saccade indicates sensory inhibition of the salient distractor and suppression of bottom-up oculomotor capture.
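
    The trial-by-trial logic is: estimate pre-stimulus alpha power on each trial and ask whether it predicts which process wins the competition (first saccade to the distractor vs. to the target). A minimal sketch of that analysis is given below, assuming a Welch power estimate over frontal channels and a simple logistic classifier; all names, band limits, and data are illustrative.

```python
# Minimal sketch of a single-trial analysis of this kind: pre-stimulus frontal
# alpha power as a predictor of whether the first saccade goes to the salient
# distractor (bottom-up win) or to the target (top-down win). Band limits,
# channels, and data are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

def alpha_power(prestim, fs, band=(8.0, 12.0)):
    """Mean alpha-band power per trial.

    prestim: array (n_trials, n_channels, n_times) of the 1-s pre-stimulus EEG.
    """
    freqs, psd = welch(prestim, fs=fs, nperseg=prestim.shape[-1], axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, :, mask].mean(axis=(1, 2))

rng = np.random.default_rng(2)
fs, n_trials = 256, 200
prestim = rng.normal(0, 1, (n_trials, 6, fs))    # 1 s of synthetic frontal EEG
bottom_up_win = rng.integers(0, 2, n_trials)     # 1 = first saccade to distractor

x = np.log(alpha_power(prestim, fs)).reshape(-1, 1)
model = LogisticRegression().fit(x, bottom_up_win)
print("trial-by-trial classification accuracy:", model.score(x, bottom_up_win))
```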

    Enhanced response inhibition during intensive meditation training predicts improvements in self-reported adaptive socioemotional functioning.

    We examined the impact of training-induced improvements in self-regulation, operationalized in terms of response inhibition, on longitudinal changes in self-reported adaptive socioemotional functioning. Data were collected from participants undergoing 3 months of intensive meditation training in an isolated retreat setting (Retreat 1) and from a wait-list control group that later underwent identical training (Retreat 2). A 32-min response inhibition task (RIT) was designed to assess sustained self-regulatory control. Adaptive functioning (AF) was operationalized as a single latent factor underlying self-report measures of anxious and avoidant attachment, mindfulness, ego resilience, empathy, the five major personality traits (extroversion, agreeableness, conscientiousness, neuroticism, and openness to experience), difficulties in emotion regulation, depression, anxiety, and psychological well-being. Participants in Retreat 1 improved in RIT performance and AF over time, whereas the controls did not. The control participants later also improved on both dimensions during their own retreat (Retreat 2). These improved levels of RIT performance and AF were sustained in follow-up assessments conducted approximately 5 months after the training. Longitudinal dynamic models with combined data from both retreats showed that improvement in RIT performance during training influenced the change in AF over time, which is consistent with a key claim in the Buddhist literature that enhanced capacity for self-regulation is an important precursor of changes in emotional well-being.
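
    One way to picture the analysis logic, much simplified, is: reduce the self-report battery to a single latent adaptive-functioning score, then relate change in response-inhibition performance to change in that score. The sketch below is only an assumption-laden illustration of that idea; the study itself used longitudinal dynamic models, not a change-score regression, and all data here are synthetic.

```python
# Rough, simplified sketch of one piece of the analysis logic: collapse the
# self-report battery into a single latent adaptive-functioning (AF) score and
# relate training-related change in response-inhibition (RIT) performance to
# change in AF. The real analysis used longitudinal dynamic models; this
# change-score regression is only an illustration on synthetic data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.stats import linregress

rng = np.random.default_rng(3)
n = 60                                   # participants
n_scales = 12                            # questionnaire scales per assessment

# Synthetic standardized questionnaire data at pre- and post-training
pre_scales = rng.normal(0, 1, (n, n_scales))
post_scales = pre_scales + rng.normal(0.3, 0.5, (n, n_scales))

fa = FactorAnalysis(n_components=1).fit(np.vstack([pre_scales, post_scales]))
af_pre = fa.transform(pre_scales).ravel()
af_post = fa.transform(post_scales).ravel()

rit_change = rng.normal(0.4, 0.3, n)     # assumed improvement in RIT accuracy
result = linregress(rit_change, af_post - af_pre)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.3f}")
```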

    Distracting the Mind Improves Performance: An ERP Study

    When a second target (T2) is presented in close succession to a first target (T1), people often fail to identify T2, a phenomenon known as the attentional blink (AB). However, the AB can be reduced substantially when participants are distracted during the task, for instance by a concurrent task, without a cost to T1 performance. The goal of the current study was to investigate the electrophysiological correlates of this paradoxical effect. Participants successively performed three tasks while EEG was recorded. The first task (standard AB) consisted of identifying two target letters in a sequential stream of distractor digits. The second task (grey dots task) was similar to the first, with the addition of an irrelevant grey dot moving in the periphery, concurrent with the central stimulus stream. The third task (red dot task) was similar to the second, except that detection of an occasional brief color change in the moving grey dot was required. AB magnitude in the latter task was significantly smaller, whereas behavioral performance in the standard and grey dots tasks did not differ. Using mixed-effects models, electrophysiological activity was compared during trials in the grey dots and red dot tasks that differed in task instruction but not in perceptual input. In the red dot task, both target-related parietal brain activity associated with working memory updating (P3) and distractor-related occipital activity were significantly reduced. The results support the idea that the AB might (at least partly) arise from an overinvestment of attentional resources or an overexertion of attentional control, which is reduced when a distracting secondary task is carried out. The present findings bring us a step closer to understanding why and how an AB occurs, and how these temporal restrictions in selective attention can be overcome.
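
    The comparison of interest is the single-trial P3 amplitude between the grey dots and red dot tasks, modeled with participants as a random effect. A rough sketch of such a mixed-effects comparison is shown below; the column names, trial counts, and effect sizes are invented for illustration.

```python
# Sketch of the kind of mixed-effects comparison described: single-trial P3
# amplitudes modeled with task (grey dots vs. red dot) as a fixed effect and
# participant as a random effect. Column names and data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for subj in range(20):
    subj_offset = rng.normal(0, 1)                  # random subject intercept
    for task, task_effect in [("grey_dots", 0.0), ("red_dot", -0.8)]:
        for _ in range(80):                          # trials per task
            rows.append({
                "subject": subj,
                "task": task,
                "p3": 4.0 + subj_offset + task_effect + rng.normal(0, 2),
            })
data = pd.DataFrame(rows)

model = smf.mixedlm("p3 ~ task", data, groups=data["subject"]).fit()
print(model.summary())
# A negative coefficient for the red-dot task mirrors the reported reduction
# in target-related P3 amplitude when the secondary task is performed.
```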

    Stay Tuned: What Is Special About Not Shifting Attention?

    Background: When studying attentional orienting processes, brain activity elicited by a symbolic cue is usually compared to a neutral condition in which no information is provided about the upcoming target location. It is generally assumed that when a neutral cue is provided, participants do not shift their attention. The present study sought to validate this assumption. We further investigated whether anticipated task demands had an impact on brain activity related to processing symbolic cues. Methodology/Principal Findings: Two experiments were conducted, during which event-related potentials were elicited by symbolic cues that instructed participants to shift their attention to a particular location on a computer screen. In Experiment 1, attention shift-inducing cues were compared to non-informative cues, while in both conditions participants were required to detect target stimuli that were subsequently presented at peripheral locations. In Experiment 2, a non-ambiguous "stay-central" cue that explicitly required participants not to shift their attention was used instead. In the latter case, target stimuli that followed a stay-central cue were also presented at a central location. Both experiments revealed enlarged early-latency contralateral ERP components to shift-inducing cues compared to those elicited by either non-informative (Exp. 1) or stay-central cues (Exp. 2). In addition, cueing effects were modulated by the anticipated difficulty of the upcoming target, particularly so in Experiment 2. A positive difference, predominantly over the posterior contralateral scalp areas, could be observed for stay-central cues, especially for those predicting that the upcoming target would be easy. This effect was not present for non-informative cues. Conclusions/Significance: We interpret our results in terms of a more rapid engagement of attention in the presence of a more predictive instruction (i.e., a stay-central cue predicting an easy target). Our results indicate that the human brain is capable of very rapidly identifying the difference between different types of instructions.
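
    The reported cueing effect amounts to a within-subject contrast of cue-locked amplitude over posterior contralateral sites between cue types (for example, stay-central cues predicting easy vs. hard targets). The snippet below sketches that kind of paired comparison; the time window, sites, and numbers are assumptions.

```python
# Small sketch of the within-subject contrast behind the reported cueing effect:
# per-subject mean cue-locked amplitude over posterior contralateral sites,
# compared across cue types with a paired test. Units and data are assumed.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
n_subjects = 16
# Per-subject mean amplitude (arbitrary units) in a posterior contralateral window
stay_central_easy = rng.normal(1.2, 0.8, n_subjects)   # cues predicting easy targets
stay_central_hard = rng.normal(0.6, 0.8, n_subjects)   # cues predicting hard targets

t, p = ttest_rel(stay_central_easy, stay_central_hard)
print(f"easy vs. hard stay-central cues: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```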

    Visual search performance is predicted by both prestimulus and poststimulus electrical brain activity

    An individual's performance on cognitive and perceptual tasks varies considerably across time and circumstances. We investigated the neural mechanisms underlying such performance variability using regression-based analyses to examine trial-by-trial relationships between response times (RTs) and different facets of electrical brain activity. Thirteen participants trained for five days on a color-popout visual-search task, with EEG recorded on days one and five. The task was to find a color-popout target ellipse in a briefly presented array of ellipses and discriminate its orientation. Later within a session, better preparatory attention (reflected by less prestimulus alpha-band oscillatory activity) and better poststimulus early visual responses (reflected by larger sensory N1 waves) correlated with faster RTs. However, N1 amplitudes decreased by half over the course of each session, suggesting adoption of a more efficient search strategy within a session. Additionally, fast RTs were preceded by earlier and larger lateralized N2pc waves, reflecting faster and stronger attentional orienting to the targets. Finally, SPCN waves associated with target-orientation discrimination were smaller for fast RTs in the first but not the fifth session, suggesting optimization with practice. Collectively, these results delineate variations in visual search processes that change over an experimental session, while also pointing to cortical mechanisms underlying performance in visual search.
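
    The core method is a regression-based single-trial analysis: trial-wise RT is modeled as a function of several EEG measures at once (pre-stimulus alpha power, N1, N2pc, SPCN). The sketch below illustrates that approach on synthetic placeholders; the predictor names and effect directions are assumptions, not results from the study.

```python
# Sketch of a regression-based single-trial approach of this kind: trial-wise RT
# modeled as a function of several EEG measures (pre-stimulus alpha power, N1,
# N2pc, and SPCN amplitudes). Predictors are synthetic placeholders with assumed
# effect directions, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_trials = 500
trials = pd.DataFrame({
    "prestim_alpha": rng.normal(0, 1, n_trials),
    "n1_amp": rng.normal(0, 1, n_trials),
    "n2pc_amp": rng.normal(0, 1, n_trials),
    "spcn_amp": rng.normal(0, 1, n_trials),
})
# Assumed directions: more alpha -> slower RTs, larger N1/N2pc -> faster RTs
trials["rt"] = (600 + 20 * trials.prestim_alpha - 15 * trials.n1_amp
                - 10 * trials.n2pc_amp + rng.normal(0, 40, n_trials))

fit = smf.ols("rt ~ prestim_alpha + n1_amp + n2pc_amp + spcn_amp", trials).fit()
print(fit.params)
```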

    Field of Attention for Instantaneous Object Recognition

    BACKGROUND: Instantaneous object discrimination and categorization are fundamental cognitive capacities performed with the guidance of visual attention. Visual attention enables selection of a salient object within a limited area of the visual field, which we refer to as the "field of attention" (FA). Though there is some evidence concerning the spatial extent of object recognition, the following questions remain open: (a) how large is the FA for rapid object categorization, (b) how is the accuracy of attention distributed over the FA, and (c) how fast can complex objects be categorized when presented against backgrounds formed by natural scenes. METHODOLOGY/PRINCIPAL FINDINGS: To answer these questions, we used a visual perceptual task in which subjects were asked to focus their attention on a fixation point while categorizing briefly flashed (20 ms) photographs of natural scenes by indicating whether or not these contained an animal. By measuring categorization accuracy at different eccentricities from the fixation point, we were able to determine the spatial extent of the FA and the distribution of accuracy over it, as well as the speed of object categorization, by varying the stimulus onset asynchrony (SOA). Our results revealed that subjects are able to rapidly categorize complex natural images within about 0.1 s without eye movements, that the FA for instantaneous image categorization covers a visual field extending about 20° × 24°, and that accuracy was highest (>90%) at the center of the FA and declined with increasing eccentricity. CONCLUSIONS/SIGNIFICANCE: In conclusion, human beings are able to categorize complex natural images at a glance over a large extent of the visual field, without eye movements.
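
    Characterizing the FA comes down to measuring categorization accuracy as a function of eccentricity and reading off the extent over which accuracy stays above some criterion. The sketch below fits a smooth falloff to hypothetical accuracy data and extracts such a radius; the functional form, criterion, and numbers are illustrative assumptions, not the study's data.

```python
# Sketch of how a field of attention (FA) could be characterized: categorization
# accuracy at several eccentricities, fit with a smooth falloff, and the FA radius
# read off at a chosen accuracy criterion. Functional form, criterion, and numbers
# are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit, brentq

def accuracy_model(ecc, peak, sigma, floor=0.5):
    """Gaussian falloff from a central peak toward chance level (floor)."""
    return floor + (peak - floor) * np.exp(-(ecc ** 2) / (2 * sigma ** 2))

# Hypothetical group data: eccentricity (deg) vs. proportion correct
ecc = np.array([0, 3, 6, 9, 12, 15])
acc = np.array([0.95, 0.93, 0.88, 0.80, 0.68, 0.58])

(peak, sigma), _ = curve_fit(accuracy_model, ecc, acc, p0=(0.95, 8.0))
criterion = 0.75
fa_radius = brentq(lambda e: accuracy_model(e, peak, sigma) - criterion, 0, 30)
print(f"estimated FA radius at {criterion:.0%} correct: {fa_radius:.1f} deg")
```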