Modeling the neural circuitry underlying the behavioral and EEG correlates of attentional capture
The Reactive-Convergent Gradient Field model (R-CGF) is a unique approach to modeling spatial attention in that it links neural mechanisms to event-related potentials (ERPs) from scalp EEG. The model was developed with the aim of explaining different, sometimes conflicting, findings in the attention literature. Specifically, it addresses conflicting findings showing both simultaneous and serial deployment of attention. Another debate addressed by the model is whether attention to a location invokes suppression of the spatial surround or the selective inhibition of distractors. With the R-CGF, we have found that these results are not as incompatible as they appear, but rather can both result from a common set of mechanisms operating in different kinds of experiments.
The model has three main neural sheets, early vision (EV), late vision (LV), and a master attention map (AM), connected spatiotopically. The LV layers are specialized for different features (e.g., shape or color), with connections to the AM that are modulated according to task requirements. The AM implements a reactive inhibitory circuit through gating neurons that suppress attention selectively at the locations of distractors proximal to the target.
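The reactive gating idea can be illustrated with a toy sketch. This is a hypothetical, minimal illustration, not the published R-CGF implementation: all parameter values (the gate threshold, the proximity window, the input drives) are assumptions chosen only to show how gating neurons could yield distractor-specific rather than blanket surround suppression.

```python
# Toy sketch of a reactive inhibitory gating circuit on a 1-D attention
# map (AM). Hypothetical parameters; not the published R-CGF model.
n_loc = 12
target_loc, distractor_loc = 5, 7

# Bottom-up drive to the AM: a target and a nearby, weaker distractor.
am_input = [0.0] * n_loc
am_input[target_loc] = 1.0
am_input[distractor_loc] = 0.6

# Gating neurons fire only for non-peak locations whose drive exceeds a
# threshold AND that lie close to the attended peak, so suppression is
# selective to proximal distractors rather than the whole surround.
peak = am_input.index(max(am_input))
gate_threshold, proximity = 0.3, 3
am_output = []
for loc, drive in enumerate(am_input):
    gated = (loc != peak
             and drive > gate_threshold
             and abs(loc - peak) <= proximity)
    am_output.append(0.0 if gated else drive)

print(am_output)  # the distractor location is zeroed; the target survives
```

In this sketch, a weak or distant distractor would pass through ungated, while the proximal one is reactively inhibited, which is the qualitative pattern the abstract describes.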
Understanding visual attention with RAGNAROC: A Reflexive Attention Gradient through Neural AttRactOr Competition
A quintessential challenge for any perceptual system is the need to focus on task-relevant information without being blindsided by unexpected, yet important, information. The human visual system incorporates several solutions to this challenge, one of which is a reflexive covert attention system that is rapidly responsive to both the physical salience and the task-relevance of new information. This paper presents a model that simulates behavioral and neural correlates of reflexive attention as the product of brief neural attractor states that are formed across the visual hierarchy when attention is engaged. Such attractors emerge from an attentional gradient distributed over a population of topographically organized neurons and serve to focus processing at one or more locations in the visual field, while inhibiting the processing of lower-priority information. The model moves toward a resolution of key debates about the nature of reflexive attention, such as whether it is parallel or serial, and whether suppression effects are distributed in a spatial surround or applied selectively at the locations of distractors. Most importantly, the model develops a framework for understanding the neural mechanisms of visual attention as a spatiotopic decision process within a hierarchy and links them to observable correlates such as accuracy, reaction time, and the N2pc and PD components of the EEG. This last contribution is the most crucial for repairing the disconnect between our understanding of the behavioral and neural correlates of attention.
I tried a bunch of things: The dangers of unexpected overfitting in classification of brain data
Machine learning has enhanced the ability of neuroscientists to interpret information collected through EEG, fMRI, and MEG data. With these powerful techniques comes the danger of overfitting hyperparameters, which can render results invalid. We refer to this problem as 'overhyping' and show that it is pernicious despite commonly used precautions. Overhyping occurs when analysis decisions are made after observing analysis outcomes, and it can produce results that are partially or even completely spurious. It is commonly assumed that cross-validation is an effective protection against overfitting or overhyping, but this is not actually true. In this article, we show that spurious results can be obtained on random data by modifying hyperparameters in seemingly innocuous ways, despite the use of cross-validation. We recommend a number of techniques for limiting overhyping, such as lock boxes, blind analyses, pre-registrations, and nested cross-validation. These techniques are common in other fields that use machine learning, including computer science and physics. Adopting similar safeguards is critical for ensuring the robustness of machine-learning techniques in the neurosciences.
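The overhyping problem and the lock-box safeguard can be sketched in a few lines. This is a hypothetical, minimal demonstration (not the paper's actual analyses): labels and features are pure noise, the "classifier" is just a tunable threshold, and all sample sizes and candidate grids are assumptions. Repeatedly selecting the hyperparameter with the best cross-validated accuracy on the same data inflates the score even though no signal exists; a lock box that is never touched during tuning exposes the inflation.

```python
import random

random.seed(0)

# Pure-noise data: any above-chance "decoding" here is spurious.
n = 100
X = [random.gauss(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]

# Set aside a lock box that is never consulted while tuning.
X_tune, y_tune = X[:60], y[:60]
X_lock, y_lock = X[60:], y[60:]

def accuracy(threshold, sign, xs, ys):
    """Threshold classifier: predict 1 if x > threshold (or the reverse)."""
    preds = [int((x > threshold) == bool(sign)) for x in xs]
    return sum(p == t for p, t in zip(preds, ys)) / len(ys)

def cv_accuracy(threshold, sign, xs, ys, k=5):
    """Mean accuracy over k held-out folds for a fixed hyperparameter."""
    fold = len(xs) // k
    accs = [accuracy(threshold, sign,
                     xs[i * fold:(i + 1) * fold],
                     ys[i * fold:(i + 1) * fold]) for i in range(k)]
    return sum(accs) / k

# Overhyping: sweep many hyperparameter settings and keep the one with
# the best cross-validated score -- on noise, the maximum is inflated.
candidates = [(t / 10, s) for t in range(-20, 21) for s in (0, 1)]
best_t, best_s = max(candidates,
                     key=lambda c: cv_accuracy(c[0], c[1], X_tune, y_tune))
best_cv = cv_accuracy(best_t, best_s, X_tune, y_tune)

# The lock box gives an honest estimate for the chosen setting.
lockbox = accuracy(best_t, best_s, X_lock, y_lock)
print(f"tuned CV accuracy: {best_cv:.2f}, lock-box accuracy: {lockbox:.2f}")
```

Nested cross-validation achieves the same protection by wrapping the entire tuning loop inside an outer evaluation fold, so the data that scores the final model never informed any hyperparameter choice.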
Dissociating between the N2pc and attentional shifting: an attentional blink study
The N2pc is routinely used as an electrophysiological index of attentional shifting, and its absence is thus taken as evidence that no shift of attention occurred. We provide evidence against this notion using a variant of the attentional blink (AB) paradigm. Two target letters, embedded in two streams of distractor letters and defined by their color, were separated by either 300 or 800 ms. The second target was preceded by a distractor frame of the same color (the cue). As expected, identification of the second target was poorer at the short lag than at the long lag (the AB effect). The AB did not affect attentional capture by the cue, but it suppressed and delayed the N2pc associated with it. This result suggests that the N2pc does not reflect attentional shifting. Instead, we conclude that the N2pc indexes the transient enhancement that occurs at the spatial focus of attention and promotes high-level processing such as identification. This conclusion calls for a reinterpretation of findings from the attentional capture literature that relied on the N2pc as an index of attentional shifting. Our results also inform contemporary models of the AB.