    Building on a Solid Baseline: Anticipatory Biases in Attention.

    A brain-imaging paper by Kastner and colleagues in 1999 was the first to demonstrate that merely focusing attention at a spatial location changed the baseline activity level in various regions of human visual cortex, even before any stimuli appeared. The study provided a touchstone for investigating cognitive-sensory interactions and for understanding the proactive endogenous signals that shape perception.

    Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices.

    Computational theories propose that attention modulates the topographical landscape of spatial 'priority' maps in regions of the visual cortex so that the location of an important object is associated with higher activation levels. Although studies of single-unit recordings have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here we used functional magnetic resonance imaging and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size.
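
    As a sketch of the encoding-model reconstruction described above, the minimal Python example below expresses simulated voxel patterns as weighted sums of idealized tuning channels, estimates the weights by least squares, and inverts them to reconstruct channel-space representations for held-out trials. The one-dimensional circular stimulus space, basis shape, and all sizes are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

# Minimal inverted encoding model (IEM) sketch; all shapes and the
# half-rectified sinusoidal basis are illustrative assumptions.

def channel_responses(stim_pos, centers, power=7):
    """Idealized channel tuning: half-rectified sinusoids raised to a power."""
    theta = np.deg2rad(stim_pos[:, None] - centers[None, :])
    return np.maximum(np.cos(theta), 0) ** power          # trials x channels

def train_iem(B_train, C_train):
    """Least-squares channel-to-voxel weights W (voxels x channels)."""
    X, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)  # solves C @ X = B
    return X.T

def invert_iem(W, B_test):
    """Reconstruct channel responses for new data: C_hat = B W (W'W)^-1."""
    return B_test @ W @ np.linalg.inv(W.T @ W)

# Toy usage: 200 trials, 50 voxels, 8 equally spaced spatial channels.
rng = np.random.default_rng(0)
centers = np.arange(0.0, 360.0, 45.0)
pos = rng.uniform(0, 360, 200)
C = channel_responses(pos, centers)
W_true = rng.normal(size=(50, 8))
B = C @ W_true.T + 0.5 * rng.normal(size=(200, 50))       # simulated patterns
W = train_iem(B[:150], C[:150])
C_hat = invert_iem(W, B[150:])   # reconstructed stimulus representations
```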

    Fluctuations in instantaneous frequency predict alpha amplitude during visual perception.

    Rhythmic neural activity in the alpha band (8-13 Hz) is thought to have an important role in the selective processing of visual information. Typically, modulations in alpha amplitude and instantaneous frequency are thought to reflect independent mechanisms impacting dissociable aspects of visual information processing. However, in complex systems with interacting oscillators such as the brain, amplitude and frequency are mathematically dependent. Here, we record electroencephalography in human subjects and show that both alpha amplitude and instantaneous frequency predict behavioral performance in the same visual discrimination task. Consistent with a model of coupled oscillators, we show that fluctuations in instantaneous frequency predict alpha amplitude on a single-trial basis, empirically demonstrating that these metrics are not independent. This interdependence suggests that changes in amplitude and instantaneous frequency reflect a common change in the excitatory and inhibitory neural activity that regulates alpha oscillations and visual information processing.
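
    Because amplitude and instantaneous frequency are both derived from the same analytic signal, their interdependence is easy to see in code. Below is a minimal Python sketch of the standard estimation route (band-pass filter, then Hilbert transform); the 8-13 Hz band, sampling rate, and filter order are assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                        # sampling rate (assumed)
b, a = butter(3, [8 / (fs / 2), 13 / (fs / 2)], btype="band")

def alpha_metrics(eeg_trial):
    """Mean alpha amplitude and mean instantaneous frequency (Hz) per trial."""
    alpha = filtfilt(b, a, eeg_trial)              # band-passed alpha signal
    analytic = hilbert(alpha)                      # complex analytic signal
    amplitude = np.abs(analytic)                   # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase (rad)
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # phase derivative -> Hz
    return amplitude.mean(), inst_freq.mean()

# Sanity check: a pure 10 Hz sinusoid gives ~10 Hz and constant amplitude.
t = np.arange(0, 2, 1 / fs)
amp, freq = alpha_metrics(np.sin(2 * np.pi * 10 * t))
```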

    The positional-specificity effect reveals a passive-trace contribution to visual short-term memory.

    The positional-specificity effect refers to enhanced performance in visual short-term memory (VSTM) when the recognition probe is presented at the same location as the sample had been, even though location is irrelevant to the match/nonmatch decision. We investigated the mechanisms underlying this effect with behavioral and fMRI studies of object change-detection performance. To test whether the positional-specificity effect is a direct consequence of active storage in VSTM, we varied memory load, reasoning that it should be observed for all objects presented in a sub-span array of items. The results, however, indicated that although robust with a memory load of 1, the positional-specificity effect was restricted to the second of two sequentially presented sample stimuli in a load-of-2 experiment. An additional behavioral experiment showed that this disruption was not due to the increased load per se, because actively processing a second object (in the absence of a storage requirement) also eliminated the effect. These behavioral findings suggest that, during tests of object memory, position-related information is not actively stored in VSTM, but may be retained in a passive tag that marks the most recent site of selection. The fMRI data were consistent with this interpretation, failing to find a location-specific bias in sustained delay-period activity, but revealing an enhanced response to recognition probes that matched the location of that trial's sample stimulus.

    Exploring the relationship between perceptual learning and top-down attentional control

    Here, we review the role of top-down attention in both the acquisition and the expression of perceptual learning, as well as the role of learning in more efficiently guiding attentional modulations. Although attention often mediates learning at the outset of training, many of the characteristic behavioral and neural changes associated with learning can be observed even when stimuli are task-irrelevant and ignored. However, depending on task demands, attention can override the effects of perceptual learning, suggesting that even if top-down factors are not strictly necessary to observe learning, they play a critical role in determining how learning-related changes in behavior and neural activity are ultimately expressed. In turn, training may also act to optimize the effectiveness of top-down attentional control by improving the efficiency of sensory gain modulations, regulating intrinsic noise, and altering the read-out of sensory information.

    Attention improves transfer of motion information between V1 and MT

    Selective attention modulates activity within individual visual areas; however, the role of attention in mediating the transfer of information between areas is not well understood. Here, we used fMRI to assess attention-related changes in coupled BOLD activation in two key areas of human visual cortex that are involved in motion processing: V1 and MT. To examine attention-related changes in cross-area coupling, multivoxel patterns in each visual area were decomposed to estimate the trial-by-trial response amplitude in a set of direction-selective "channels." In both V1 and MT, BOLD responses increase in direction-selective channels tuned to the attended direction of motion and decrease in channels tuned away from the attended direction. Furthermore, the modulation of cross-area correlations between similarly tuned populations is inversely related to the modulation of their mean responses, an observation that can be explained via a feedforward motion computation in MT and a modulation of local noise correlations in V1. More importantly, these modulations accompany an increase in the cross-area mutual information between direction-selective response patterns in V1 and MT, suggesting that attention improves the transfer of sensory information between cortical areas that cooperate to support perception. Finally, our model suggests that divisive normalization of neural activity in V1 before its integration by MT is critical to cross-area information coupling, both in terms of cross-area correlation as well as cross-area mutual information.
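
    The divisive-normalization step invoked by the model can be stated compactly. The sketch below implements a canonical (Heeger-style) normalization equation followed by a linear MT-style readout; the exponent, semisaturation constant, and readout weights are illustrative values, not the paper's fitted parameters.

```python
import numpy as np

def divisive_normalization(drive, n=2.0, sigma=0.5):
    """Canonical form: R_i = drive_i**n / (sigma**n + sum_j drive_j**n)."""
    e = np.asarray(drive, dtype=float) ** n
    return e / (sigma ** n + e.sum())

# Toy usage: V1 direction-selective channel drives for one stimulus are
# normalized before being combined by a linear MT readout.
v1_drive = np.array([0.2, 1.0, 0.4, 0.1])
v1_out = divisive_normalization(v1_drive)
mt_response = v1_out @ np.array([1.0, 0.5, -0.5, -1.0])   # assumed weights
```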

    A computer vision model for visual-object-based attention and eye movements

    This paper presents a new computational framework for modelling visual-object-based attention and attention-driven eye movements within an integrated system, using a biologically inspired approach. Attention operates at multiple levels of visual selection (by space, feature, object, and group) depending on the nature of targets and visual tasks. Attentional shifts and gaze shifts are built upon common processing circuits and control mechanisms but are also distinguished by their different functional roles, working together to fulfil flexible visual selection tasks in complicated visual environments. The framework integrates the important aspects of human visual attention and eye movements, resulting in sophisticated performance in complicated natural scenes. The proposed approach aims to provide a useful visual selection system for computer vision, especially for use in cluttered natural visual environments.
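
    For a flavor of the selection machinery such frameworks build on, here is a generic winner-take-all loop over a 2D saliency/priority map with inhibition of return. This is a textbook saliency-map mechanism, not the authors' framework; the map, inhibition radius, and fixation count are illustrative.

```python
import numpy as np

def select_fixations(saliency, n_fixations=3, ior_radius=2):
    """Pick successive gaze targets from a 2D saliency/priority map."""
    s = saliency.astype(float).copy()
    ys, xs = np.indices(s.shape)
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)    # winner-take-all
        fixations.append((y, x))
        inhibit = (ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2
        s[inhibit] = -np.inf                              # inhibition of return
    return fixations

# Toy usage on a random map.
rng = np.random.default_rng(1)
scanpath = select_fixations(rng.random((16, 16)))
```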

    Reading the mind's eye: Decoding category information during mental imagery

    Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral-temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral-temporal cortex in both conditions, but from retinotopic areas only during actual viewing. Interestingly, in temporal cortex, when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to that within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of "diagnostic voxels" (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral-temporal cortex than in retinotopic cortex. These results suggest that, in the absence of any bottom-up input, cortical back-projections can selectively re-activate specific patterns of neural activity.
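
    The cross-condition decoding logic (train on viewing, test on imagery, or vice versa) is straightforward to sketch. The toy Python example below uses simulated voxel patterns and a logistic-regression classifier; the paper's actual classifier, voxel selection, and data are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate category-specific voxel patterns: imagery reuses the viewing
# "prototypes" but with more noise. All sizes and noise levels are assumed.
rng = np.random.default_rng(0)
n_trials, n_voxels, n_cats = 80, 100, 4    # e.g. food, tools, faces, buildings
labels = np.repeat(np.arange(n_cats), n_trials // n_cats)
prototypes = rng.normal(size=(n_cats, n_voxels))
viewed = prototypes[labels] + rng.normal(scale=1.0, size=(n_trials, n_voxels))
imagined = prototypes[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Train on the viewed condition, test on imagery (cross-decoding).
clf = LogisticRegression(max_iter=1000).fit(viewed, labels)
cross_acc = clf.score(imagined, labels)
print(f"cross-decoding accuracy: {cross_acc:.2f} (chance = {1 / n_cats:.2f})")
```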

    Learned Value Magnifies Salience-Based Attentional Capture

    Visual attention is captured by physically salient stimuli (termed salience-based attentional capture), and by otherwise task-irrelevant stimuli that contain goal-related features (termed contingent attentional capture). Recently, we reported that physically nonsalient stimuli associated with value through reward learning also capture attention involuntarily (Anderson, Laurent, & Yantis, PNAS, 2011). Although it is known that physical salience and goal-relatedness both influence attentional priority, it is unknown whether or how attentional capture by a salient stimulus is modulated by its associated value. Here we show that a physically salient, task-irrelevant distractor previously associated with a large reward slows visual search more than an equally salient distractor previously associated with a smaller reward. This magnification of salience-based attentional capture by learned value extinguishes over several hundred trials. These findings reveal a broad influence of learned value on involuntary attentional capture.

    Electrophysiological Correlates of Learning-Induced Modulation of Visual Motion Processing in Humans

    Training on a visual task leads to increased perceptual and neural responses to visual features that were attended during training, as well as decreased responses to neglected distractor features. However, the time course of these attention-based modulations of neural sensitivity for visual features has not previously been investigated. Here we measured event-related potentials (ERPs) in response to motion stimuli with different coherence levels before and after training on a speed discrimination task requiring object-based attentional selection of one of two competing motion stimuli. We found that two peaks on the ERP waveform were modulated by the strength of the coherent motion signal; the response amplitude associated with motion directions that were neglected during training was smaller than the response amplitude associated with motion directions that were attended during training. The first peak of motion coherence-dependent modulation of the ERP responses occurred at 300 ms after stimulus onset and was most pronounced over the occipitotemporal cortex. The second peak occurred around 500 ms and was focused over the parietal cortex. A control experiment suggests that the earlier motion coherence-related response modulation reflects the extraction of the coherent motion signal, whereas the later peak might index accumulation and readout of motion signals by parietal decision mechanisms. These findings suggest that attention-based learning affects neural responses at both the sensory and decision processing stages.
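
    The basic measurement implied above (average epochs per condition, then read out amplitude in latency windows around the two reported peaks) can be sketched as follows; the sampling rate, epoch limits, and window widths are assumptions.

```python
import numpy as np

fs = 500.0                                   # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / fs)         # epoch from -200 to 800 ms

def mean_amplitude(epochs, window):
    """Mean ERP amplitude in a latency window; epochs is trials x samples."""
    erp = epochs.mean(axis=0)                # average over trials
    sel = (times >= window[0]) & (times <= window[1])
    return erp[sel].mean()

# Toy usage with simulated epochs for one condition.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, times.size))
p300 = mean_amplitude(epochs, (0.25, 0.35))  # ~300 ms occipitotemporal peak
p500 = mean_amplitude(epochs, (0.45, 0.55))  # ~500 ms parietal peak
```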