Spatiotemporal dynamics of feature-based attention spread: evidence from combined electroencephalographic and magnetoencephalographic recordings
Attentional selection on the basis of nonspatial stimulus features induces a sensory gain enhancement by increasing the firing rate of individual neurons tuned to the attended feature, while responses of neurons tuned to opposite feature values are suppressed. Here we recorded event-related potentials (ERPs) and event-related magnetic fields (ERMFs) in human observers to investigate the underlying neural correlates of feature-based attention at the population level. During the task, subjects attended to a moving transparent surface presented in the left visual field, while task-irrelevant probe stimuli executing brief movements in varying directions were presented in the opposite visual field. ERP and ERMF amplitudes elicited by the unattended, task-irrelevant probes were modulated as a function of the similarity between their movement direction and the task-relevant movement direction in the attended visual field. These activity modulations, reflecting globally enhanced processing of the attended feature, began no earlier than 200 ms post-stimulus and were localized to the motion-sensitive area hMT. The current results indicate that feature-based attention operates in a global manner but needs time to spread, and they provide strong support for the feature-similarity gain model.
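The feature-similarity gain principle invoked here admits a compact illustration: each neuron's response is scaled by how similar its preferred feature is to the attended one. The Python sketch below is a minimal toy, assuming a cosine similarity profile; `base_rate` and `gain` are illustrative values, not parameters from the study.

```python
import numpy as np

def feature_similarity_gain(preferred_dirs, attended_dir, base_rate=10.0, gain=0.4):
    # Signed similarity in [-1, 1]: cosine of the angular difference between
    # each neuron's preferred motion direction and the attended direction.
    similarity = np.cos(np.deg2rad(np.asarray(preferred_dirs) - attended_dir))
    # Multiplicative modulation: >1 for similar features (enhancement),
    # <1 for opposite features (suppression), as in feature-similarity gain.
    return base_rate * (1.0 + gain * similarity)

# Population of direction-tuned neurons while 90-degree motion is attended.
rates = feature_similarity_gain(np.arange(0, 360, 45), attended_dir=90)
```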
Analysis of the visual spatiotemporal properties of American Sign Language.
Careful measurements of the temporal dynamics of speech have provided important insights into the phonetic properties of spoken languages, which in turn inform our understanding of auditory perception. By contrast, analytic quantification of the visual properties of signed languages is still largely uncharted. Exposure to sign language is a unique experience that could shape and modify low-level visual processing for those who use it regularly (what we refer to as the Enhanced Exposure Hypothesis). The purpose of the current study was to characterize the visual spatiotemporal properties of American Sign Language (ASL) so that future studies can test the Enhanced Exposure Hypothesis in signers, with the prediction that altered vision should be observed within, more so than outside, the range of properties found in ASL. Using an ultrasonic motion-tracking system, we recorded hand position in three-dimensional space over time during the production of signs, sentences, and narratives. From these data, we calculated several metrics: hand position and eccentricity in space, and hand motion speed. For individual signs, we also measured the total distance travelled by the dominant hand and the total duration of each sign. These metrics were found to fall within a selective range, suggesting that exposure to signs is a specific and unique visual experience, one that might alter visual perceptual abilities in signers for visual information within the experienced range, even for non-language stimuli.
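As a rough illustration of how such metrics can be derived from motion-tracking data, the Python sketch below computes path length, duration, mean speed, and a simple eccentricity measure from sampled 3-D hand positions. The function name, the sample rate, the units, and the eccentricity definition (maximum distance from the mean position) are all illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def sign_kinematics(positions, sample_rate_hz=120.0):
    """positions: array-like of shape [n_samples, 3], tracked 3-D hand positions."""
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / sample_rate_hz
    steps = np.diff(positions, axis=0)             # frame-to-frame displacements
    step_len = np.linalg.norm(steps, axis=1)
    duration = (len(positions) - 1) * dt           # total duration of the sign
    centroid = positions.mean(axis=0)
    return {
        "total_distance": step_len.sum(),          # path length of the dominant hand
        "duration_s": duration,
        "mean_speed": step_len.sum() / duration,   # m/s if positions are in metres
        # Illustrative eccentricity: farthest excursion from the mean position.
        "eccentricity": np.linalg.norm(positions - centroid, axis=1).max(),
    }
```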
Spatiotemporal adaptation through corticothalamic loops: A hypothesis
The thalamus is the major gate to the cortex, and its control over cortical responses is well established. Cortical feedback to the thalamus is, in turn, the anatomically dominant input to relay cells, yet its influence on thalamic processing has been difficult to interpret. For an understanding of complex sensory processing, detailed concepts of the corticothalamic interplay have yet to be established. Drawing on various physiological and anatomical data, we elaborate the novel hypothesis that the visual cortex controls the spatiotemporal structure of cortical receptive fields via feedback to the lateral geniculate nucleus. Furthermore, we present and analyze a model of corticogeniculate loops that implements this control, and demonstrate its ability to perform object segmentation via statistical motion analysis in the visual field.
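A minimal rate-based caricature of such a loop is sketched below in Python, assuming a delayed cortical signal that multiplicatively gates LGN relay responses rather than driving them. All gains, the delay, and the gating form are illustrative assumptions, not the model analyzed in the paper.

```python
import numpy as np

def corticogeniculate_loop(retinal_drive, feedback_gain=0.5, delay_steps=3):
    """retinal_drive: 1-D array of feedforward input to LGN relay cells over time."""
    n = len(retinal_drive)
    relay = np.zeros(n)
    cortex = np.zeros(n)
    for t in range(n):
        # Delayed cortical feedback; zero before the loop has closed.
        fb = cortex[t - delay_steps] if t >= delay_steps else 0.0
        # Feedback modulates (rather than drives) the relay response.
        relay[t] = retinal_drive[t] * (1.0 + feedback_gain * fb)
        # Cortex modeled as a leaky integrator of relay output.
        cortex[t] = 0.8 * cortex[t - 1] + 0.2 * relay[t] if t > 0 else relay[t]
    return relay, cortex
```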
Efficient spiking neural network model of pattern motion selectivity in visual cortex
Simulating large-scale models of biological motion perception is challenging, due to the memory required to store the network structure and the computational power needed to solve the neuronal dynamics quickly. A low-cost yet high-performance approach to simulating large-scale neural network models in real time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and the analysis scripts is publicly available.
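The pooling stage that constructs PDS responses can be sketched in a few lines. The rate-based Python toy below assumes a von Mises weighting profile over CDS preferred directions; the parameter `kappa` and the absence of spiking dynamics and opponent inhibition are simplifications relative to the model described above.

```python
import numpy as np

def pds_response(cds_rates, preferred_dirs_deg, pds_dir_deg, kappa=2.0):
    """Pool component-direction-selective (CDS) rates into one
    pattern-direction-selective (PDS) response."""
    delta = np.deg2rad(np.asarray(preferred_dirs_deg) - pds_dir_deg)
    weights = np.exp(kappa * np.cos(delta))   # broad, direction-tuned pooling profile
    weights /= weights.sum()
    return float(np.dot(weights, np.asarray(cds_rates)))

# PDS cell preferring 0 degrees pools CDS cells spanning all directions.
r = pds_response(np.random.rand(8), np.arange(0, 360, 45), pds_dir_deg=0)
```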
Finding any Waldo: zero-shot invariant and efficient visual search
Searching for a target object in a cluttered scene constitutes a fundamental
challenge in daily vision. Visual search must be selective enough to
discriminate the target from distractors, invariant to changes in the
appearance of the target, efficient to avoid exhaustive exploration of the
image, and must generalize to locate novel target objects with zero-shot
training. Previous work has focused on searching for perfect matches of a
target after extensive category-specific training. Here we show for the first
time that humans can efficiently and invariantly search for natural objects in
complex scenes. To gain insight into the mechanisms that guide visual search,
we propose a biologically inspired computational model that can locate targets
without exhaustive sampling and generalize to novel objects. The model provides
an approximation to the mechanisms integrating bottom-up and top-down signals
during search in natural scenes.Comment: Number of figures: 6 Number of supplementary figures: 1
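One way to picture the top-down component of such a model is as a similarity map between a target feature descriptor and scene feature maps, with the next fixation proposed at the peak. The Python sketch below is a schematic stand-in under that assumption, not the published architecture; the function name and feature shapes are hypothetical.

```python
import numpy as np

def next_fixation(scene_feats, target_feats, eps=1e-8):
    """scene_feats: [H, W, C] feature maps; target_feats: [C] target descriptor."""
    # L2-normalize so the dot product becomes a cosine similarity.
    scene = scene_feats / (np.linalg.norm(scene_feats, axis=-1, keepdims=True) + eps)
    target = target_feats / (np.linalg.norm(target_feats) + eps)
    amap = np.tensordot(scene, target, axes=([-1], [0]))   # [H, W] similarity map
    return np.unravel_index(np.argmax(amap), amap.shape)   # peak = proposed fixation
```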