
    A computational approach to the covert and overt deployment of spatial attention

    Popular computational models of visual attention tend to neglect the influence of saccadic eye movements, even though primates have been shown to perform on average three saccades per second, and the neural substrates for the deployment of attention and the execution of an eye movement may overlap considerably. Here we propose a computational model in which the deployment of attention, with or without a subsequent eye movement, emerges from local, distributed and numerical computations.

    The cost of space independence in P300-BCI spellers.

    Background: Though non-invasive EEG-based Brain-Computer Interfaces (BCIs) have been researched extensively over the last two decades, most designs require control of spatial attention and/or gaze on the part of the user. Methods: In healthy adults, we compared the offline performance of a space-independent P300-based BCI for spelling words using Rapid Serial Visual Presentation (RSVP) to that of the well-known space-dependent Matrix P300 speller. Results: EEG classifiability with the RSVP speller was as good as with the Matrix speller. While the Matrix speller’s performance was significantly reliant on early, gaze-dependent Visual Evoked Potentials (VEPs), the RSVP speller depended only on the space-independent P300b. However, there was a cost to true spatial independence: the RSVP speller was less efficient in terms of spelling speed. Conclusions: The advantage of space independence in the RSVP speller came at the cost of a marked reduction in spelling efficiency. Nevertheless, with key improvements to the RSVP design, truly space-independent BCIs could approach efficiencies on par with the Matrix speller. With sufficiently high letter spelling rates fused with predictive language modelling, they would be viable for potential applications with patients unable to direct overt visual gaze or covert attentional focus.
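
    The offline comparison above hinges on how well target and non-target EEG epochs can be classified. Below is a minimal sketch of such an analysis, assuming already-epoched data and using windowed-mean features with shrinkage LDA, a common pipeline for P300 classification; all array names, dimensions and the synthetic data are illustrative, not the study's.

        # Sketch: offline target-vs-nontarget classification of P300 epochs.
        # `epochs` is (n_trials, n_channels, n_samples) EEG, band-pass
        # filtered and time-locked to stimulus onset; `labels` marks targets.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_channels, n_samples = 200, 8, 64
        epochs = rng.normal(size=(n_trials, n_channels, n_samples))
        labels = rng.integers(0, 2, size=n_trials)
        epochs[labels == 1, :, 30:45] += 0.5  # crude P300-like deflection

        # Average each epoch within short time windows to get a compact
        # feature vector per trial.
        windows = np.array_split(np.arange(n_samples), 8)
        features = np.stack(
            [epochs[:, :, w].mean(axis=2) for w in windows], axis=2
        ).reshape(n_trials, -1)

        # Shrinkage LDA is a standard small-sample choice for ERP features.
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        scores = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
        print(f"mean AUC: {scores.mean():.2f}")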

    Attention bias dynamics and symptom severity during and following CBT for social anxiety disorder

    Objective: Threat-related attention bias figures prominently in contemporary accounts of the maintenance of anxiety disorders, yet longitudinal intervention research relating attention bias to anxiety symptom severity is limited. Capitalizing on recent advances in the conceptualization and measurement of attention bias, we aimed to examine the relation between attention bias, indexed using trial-level bias scores (TLBSs) that quantify the temporal dynamics of dysregulated attentional processing of threat (as opposed to aggregated mean bias scores), and social anxiety symptom severity over the course of cognitive-behavioral therapy (CBT) and 1-month follow-up. Method: Adults with social anxiety disorder (N = 39) assigned to either yohimbine- or placebo-augmented CBT completed measures of attention bias and social anxiety symptom severity weekly throughout CBT (5 sessions) and at 1-week and 1-month posttreatment. Results: TLBSs of attention bias temporal dynamics showed stronger psychometric properties than aggregated mean scores and were highly interrelated, consistent with within-subject attention fluctuating over time between attentional overengagement with threat and strategic avoidance of threat. Attention bias toward threat and temporal variability in attention bias (i.e., attentional dysregulation), but not attention bias away from threat, reduced significantly over the course of CBT. Cross-lag analyses revealed no evidence that reductions in attentional dysregulation led to reductions in symptom severity, or vice versa. Observed relations did not vary as a function of time. Conclusions: We found no evidence for attentional dysregulation as a causal mechanism of symptom reduction in CBT for social anxiety disorder. Implications for future research are discussed.
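
    For concreteness, here is a hedged sketch of how trial-level bias scores can be computed from dot-probe reaction times, in the spirit of Zvielli et al. (2015): each congruent trial is paired with the temporally nearest incongruent trial, and the RT difference (incongruent minus congruent) gives a per-pair bias estimate, from which bias-toward, bias-away and variability indices follow. The pairing window and all inputs below are illustrative assumptions.

        # Sketch: trial-level bias scores (TLBS) from dot-probe RTs.
        # `congruent` marks trials where the probe replaced the threat cue.
        import numpy as np

        def tlbs(rts, congruent, max_distance=5):
            """Per-pair bias scores: RT(incongruent) - RT(congruent)."""
            rts = np.asarray(rts, dtype=float)
            congruent = np.asarray(congruent, dtype=bool)
            incongruent_idx = np.flatnonzero(~congruent)
            scores = []
            for i in np.flatnonzero(congruent):
                # Pair with the temporally nearest incongruent trial.
                j = incongruent_idx[np.argmin(np.abs(incongruent_idx - i))]
                if abs(j - i) <= max_distance:
                    scores.append(rts[j] - rts[i])
            return np.array(scores)

        rng = np.random.default_rng(1)
        rts = rng.normal(0.55, 0.08, size=100)        # RTs in seconds
        congruent = rng.integers(0, 2, size=100).astype(bool)
        scores = tlbs(rts, congruent)

        bias_toward = scores[scores > 0].mean()       # overengagement
        bias_away = scores[scores < 0].mean()         # avoidance
        variability = np.abs(np.diff(scores)).mean()  # dysregulation index
        print(bias_toward, bias_away, variability)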

    Overt orienting of spatial attention and corticospinal excitability during action observation are unrelated

    Observing moving body parts can automatically activate topographically corresponding motor representations in the primary motor cortex (M1), so-called direct matching. Novel neurophysiological findings from social contexts are nonetheless proving that this process is not as automatic as previously thought. The motor system can flexibly shift from imitative to incongruent motor preparation when requested by a social gesture. In the present study we aim to extend this literature by assessing whether and how diverting overt spatial attention might affect motor preparation in contexts requiring interactive responses from the onlooker. Experiment 1 shows that overt attention, although anchored to an observed biological movement, can be captured by a target object as soon as a social request for it becomes evident. Experiment 2 reveals that the appearance of a short-lasting red dot in the contralateral space can divert attention from the target, but not from the biological movement. Nevertheless, transcranial magnetic stimulation (TMS) over M1 combined with electromyography (EMG) recordings (Experiment 3) indicates that attentional interference reduces corticospinal excitability related to the observed movement, but not motor preparation for a complementary action on the target. This work provides evidence that social motor preparation is impermeable to attentional interference and that a double dissociation exists between overt orienting of spatial attention and neurophysiological markers of action observation.
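
    As a point of reference, corticospinal excitability in TMS-EMG studies of this kind is typically indexed as the peak-to-peak amplitude of the motor-evoked potential (MEP) in a short window after the pulse. The sketch below illustrates that computation on synthetic data; the window bounds, sampling rate and variable names are assumptions, not the study's parameters.

        # Sketch: peak-to-peak MEP amplitude from a single EMG trial.
        import numpy as np

        def mep_amplitude(emg, fs, t_pulse, window=(0.015, 0.050)):
            """Peak-to-peak EMG amplitude in `window` seconds post-pulse."""
            start = int((t_pulse + window[0]) * fs)
            stop = int((t_pulse + window[1]) * fs)
            segment = emg[start:stop]
            return segment.max() - segment.min()

        fs = 5000                       # Hz, a typical EMG sampling rate
        rng = np.random.default_rng(2)
        t = np.arange(int(0.3 * fs)) / fs
        # Baseline noise plus an MEP-like biphasic wave ~25 ms after a
        # TMS pulse delivered at t = 0.1 s.
        emg = rng.normal(0, 5e-6, size=t.size)
        emg += 3e-4 * np.exp(-((t - 0.125) / 0.004) ** 2) \
               * np.sin(2 * np.pi * 120 * (t - 0.125))

        amp = mep_amplitude(emg, fs, t_pulse=0.1)
        print(f"MEP amplitude: {amp * 1e6:.1f} uV")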

    Probabilistic modeling of eye movement data during conjunction search via feature-based attention

    Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. Here we engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: Color seems to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to the average data of multiple subjects or to individual subjects, and small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings on V4 and frontal eye field (FEF) neurons and predicts the gain modulation of these cells.
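
    The central quantity here is the conditional probability that a fixated item shares a given feature with the target. A minimal sketch of how such probabilities can be estimated from fixation data follows; the data structure and feature names are illustrative assumptions, not the paper's model.

        # Sketch: estimating P(fixated item shares feature f with target)
        # per feature dimension, to infer which features top-down
        # attention biases most strongly.
        from collections import Counter

        # Each fixated item records which target features it shares.
        fixated_items = [
            {"color": True,  "size": False, "orientation": False},
            {"color": True,  "size": True,  "orientation": False},
            {"color": False, "size": True,  "orientation": False},
            {"color": True,  "size": False, "orientation": True},
        ]

        counts = Counter()
        for item in fixated_items:
            for feature, shared in item.items():
                counts[feature] += shared

        n = len(fixated_items)
        probs = {f: counts[f] / n for f in ("color", "size", "orientation")}
        # In practice each probability would be compared against the
        # chance rate implied by the display's distractor composition.
        print(probs)  # higher probability -> stronger top-down bias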

    Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli

    In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven (“bottom-up”) and task-dependent (“top-down”) factors remains controversial: Can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye movements in human observers while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free-viewing, observers' eye-positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability to either side of the stimulus. When the target always occurred in the low-contrast side, observers' eye-positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands not only override sensory-driven saliency but also actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Whereas the bias was less persistent in free viewing, the overriding and reversal took longer to deploy. Hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast (“oddity”) instead of the bull's-eye (“template”). In contrast to the other conditions, a slight sensory-driven free-viewing bias prevailed in this condition. In a fourth experiment, we demonstrate that at known locations template targets are detected faster than oddity targets, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency.

    A dynamic neural field approach to the covert and overt deployment of spatial attention

    The visual exploration of a scene involves the interplay of several competing processes (for example to select the next saccade or to keep fixation) and the integration of bottom-up (e.g. contrast) and top-down information (the target of a visual search task). Identifying the neural mechanisms involved in these processes and in the integration of this information remains a challenging question. Visual attention refers to all these processes, both when the eyes remain fixed (covert attention) and when they are moving (overt attention). Popular computational models of visual attention assume that the visual information remains fixed when attention is deployed, whereas primates execute around three saccadic eye movements per second, each abruptly changing that information. We present in this paper a model relying on neural fields, a paradigm for distributed, asynchronous and numerical computations, and show that covert and overt attention can emerge from such a substratum. We identify and propose a possible interaction of four elementary mechanisms for selecting the next locus of attention, memorizing the previously attended locations, anticipating the consequences of eye movements and integrating bottom-up and top-down information in order to perform a visual search task with saccadic eye movements.
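
    To make the neural-field idea concrete, here is a minimal one-dimensional sketch of an Amari-type dynamic neural field: local excitation and broader inhibition let a single activity bump emerge at the most salient input, the kind of selection mechanism such models build on. All parameter values are illustrative, not those of the paper.

        # Sketch: 1-D dynamic neural field, Euler-integrated:
        #   tau * du/dt = -u + W f(u) + input
        # with a difference-of-Gaussians lateral kernel on a ring.
        import numpy as np

        n, dt, tau = 100, 1.0, 10.0
        x = np.arange(n)

        def gaussian(d, sigma):
            return np.exp(-d**2 / (2 * sigma**2))

        # Circular distances, then local excitation minus wide inhibition.
        d = np.abs(x[:, None] - x[None, :])
        d = np.minimum(d, n - d)
        w = 1.0 * gaussian(d, 3.0) - 0.6 * gaussian(d, 10.0)

        # Two bottom-up inputs of unequal salience.
        stimulus = 1.0 * gaussian(x - 30.0, 3.0) + 0.8 * gaussian(x - 70.0, 3.0)

        u = np.zeros(n)
        f = lambda v: 1.0 / (1.0 + np.exp(-5.0 * (v - 0.5)))  # firing rate
        for _ in range(500):
            u += (dt / tau) * (-u + w @ f(u) / n + stimulus)

        # With suitable parameters the field settles into one bump at the
        # stronger stimulus: a covert attention/selection decision.
        print("selected location:", int(np.argmax(u)))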

    A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of “surprise” in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
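
    The “surprise” measure referenced above is, in the tradition of Itti and Baldi, the KL divergence between a Bayesian observer's posterior and prior over a local feature model, updated frame by frame. A hedged sketch with a Poisson-Gamma model and an exponential forgetting factor follows; the decay constant and all values are illustrative assumptions.

        # Sketch: Bayesian "surprise" for a stream of feature responses.
        # Gamma prior over a Poisson rate; surprise on each frame is
        # KL(posterior || prior) after the conjugate update.
        import numpy as np
        from scipy.special import digamma, gammaln

        def kl_gamma(a_p, b_p, a_q, b_q):
            """KL( Gamma(a_p, b_p) || Gamma(a_q, b_q) ), shape-rate form."""
            return ((a_p - a_q) * digamma(a_p) - gammaln(a_p) + gammaln(a_q)
                    + a_q * (np.log(b_p) - np.log(b_q))
                    + a_p * (b_q - b_p) / b_p)

        def surprise_stream(observations, a=1.0, b=1.0, decay=0.7):
            out = []
            for x in observations:
                a_post = decay * a + x    # conjugate Poisson-Gamma update
                b_post = decay * b + 1.0  # with exponential forgetting
                out.append(kl_gamma(a_post, b_post, a, b))
                a, b = a_post, b_post
            return np.array(out)

        # A steady stream with one abrupt change: the outlier frame (and
        # the return to baseline) should stand out as surprising.
        frames = [5, 5, 6, 5, 5, 20, 5, 5]
        print(np.round(surprise_stream(frames), 2))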

    The time course of exogenous and endogenous control of covert attention

    Studies of eye movements and manual responses have established that rapid overt selection is largely exogenously driven toward salient stimuli, whereas slower selection is largely endogenously driven to relevant objects. We use the N2pc, an event-related potential index of covert attention, to demonstrate that this time course reflects an underlying pattern in the deployment of covert attention. We find that shifts of attention that occur soon after the onset of a visual search array are directed toward salient, task-irrelevant visual stimuli and are associated with slow responses to the target. In contrast, slower shifts are target-directed and are associated with fast responses. The time course of exogenous and endogenous control provides a framework in which some inconsistent results in the capture literature might be reconciled; capture may occur when attention is rapidly deployed.
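
    For readers unfamiliar with the N2pc: it is conventionally computed as the contralateral-minus-ipsilateral difference wave at lateral posterior electrodes (e.g., PO7/PO8), averaged roughly 200-300 ms after stimulus onset. A minimal sketch on synthetic data follows; the electrode pair, time window and array shapes are standard conventions assumed here, not taken from the study.

        # Sketch: N2pc as the contralateral-minus-ipsilateral ERP at
        # PO7/PO8, relative to the side of the attended item.
        import numpy as np

        fs = 250                                  # Hz
        times = np.arange(-0.1, 0.5, 1 / fs)      # epoch, seconds
        rng = np.random.default_rng(3)

        n_trials = 120
        target_left = rng.integers(0, 2, size=n_trials).astype(bool)
        po7 = rng.normal(0, 2e-6, size=(n_trials, times.size))  # left site
        po8 = rng.normal(0, 2e-6, size=(n_trials, times.size))  # right site
        # Inject a small negativity contralateral to the target (~250 ms).
        bump = -1.5e-6 * np.exp(-((times - 0.25) / 0.03) ** 2)
        po8[target_left] += bump       # right site, left target
        po7[~target_left] += bump      # left site, right target

        contra = np.concatenate([po8[target_left], po7[~target_left]]).mean(axis=0)
        ipsi = np.concatenate([po7[target_left], po8[~target_left]]).mean(axis=0)
        n2pc = contra - ipsi

        window = (times >= 0.2) & (times <= 0.3)
        print(f"mean N2pc: {n2pc[window].mean() * 1e6:.2f} uV")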