
    Attention to the Color of a Moving Stimulus Modulates Motion-Signal Processing in Macaque Area MT: Evidence for a Unified Attentional System

    Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or to the motion signal of a moving stimulus. We found that the effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar in magnitude even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.

    Cognitive Control Over Visual Motion Processing – Are Children With ADHD Especially Compromised? A Pilot Study of Flanker Task Event-Related Potentials

    Performance deficits and diminished brain activity during cognitive control and error processing are frequently reported in attention deficit/hyperactivity disorder (ADHD), indicating a “top-down” deficit in executive attention. So far, these findings are based almost exclusively on the processing of static visual forms, neglecting the importance of visual motion processing in everyday life as well as important attentional and neuroanatomical differences between processing static forms and visual motion. For the current study, we contrasted performance and electrophysiological parameters associated with cognitive control in two flanker tasks using static stimuli and moving random dot patterns. Behavioral data and event-related potentials were recorded from 16 boys with ADHD (combined type) and 26 controls (aged 8–15 years). The ADHD group showed lower accuracy, especially for moving stimuli, and prolonged response times for both stimulus types. Analyses of electrophysiological parameters of cognitive control revealed trends toward diminished N2 enhancements and smaller error negativities (medium effect sizes), as well as significantly lower error positivities (large effect sizes) compared to controls, similarly for both static and moving stimuli. Taken together, the study supports the evidence that motion processing is not fully developed in childhood and suggests that the cognitive control deficit in ADHD is of a higher order and independent of stimulus type.

    Revisiting motion repulsion: evidence for a general phenomenon?

    Previous studies have found large misperceptions when subjects report the perceived angle between two directions of motion moving transparently at an acute angle, the so-called motion repulsion. While these errors have been assumed to be caused by interactions between the two directions present, we reassessed these earlier measurements, taking into account recent findings about directional misperceptions affecting the perception of a single motion (reference repulsion). While our measurements confirm that errors in directional judgements of transparent motions can indeed be as large as 22°, we find that motion repulsion, i.e. the interaction between the two directions, contributes at most about 7° to these errors. This value is comparable to similar repulsion effects in orientation perception and stereoscopic depth perception, suggesting that they share a common neural basis. Our data further suggest that fast-time-scale adaptation and/or more general interactions between neurons contribute to motion repulsion, while tracking eye movements play little or no role. These findings should serve as important constraints for models of motion perception.
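
    A compact way to state the decomposition these numbers imply (notation mine, not the authors'): the raw directional error measured for one component of a transparent pair combines a single-motion bias with a genuine two-motion interaction,

    ```latex
    \varepsilon_{\mathrm{total}}(\theta_1,\theta_2)
      = \underbrace{\varepsilon_{\mathrm{ref}}(\theta_1)}_{\text{reference repulsion (single motion)}}
      + \underbrace{\varepsilon_{\mathrm{rep}}(\theta_1,\theta_2)}_{\text{motion repulsion},\ \lesssim\, 7^{\circ}}
      \;\lesssim\; 22^{\circ}
    ```

    so subtracting the error observed with a single motion isolates the interaction term that the study attributes to motion repulsion proper.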

    Feature-based attentional integration of color and visual motion

    In four variants of a speeded target detection task, we investigated the processing of color and motion signals in the human visual system. Participants were required to attend to both a particular color and a particular direction of motion in moving random dot patterns (RDPs) and to report the appearance of the designated targets. Throughout, reaction times (RTs) to simultaneous presentations of color and direction targets were too fast to be reconciled with models proposing separate and independent processing of these stimulus dimensions. Thus, the data provide behavioral evidence for an integration of color and motion signals. This integration occurred even across superimposed surfaces in a transparent motion stimulus and also across spatial locations, arguing against object- and location-based accounts of attentional selection in such a task. Overall, the pattern of results is best explained by feature-based mechanisms of visual attention.
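
    The abstract does not name the statistical benchmark behind "too fast", but the standard test for such redundant-target speed-ups is Miller's race-model inequality: if color and motion were processed by separate, independent channels, the redundant-target RT distribution could never exceed the summed single-target distributions. A minimal sketch, assuming arrays of per-trial RTs for each condition (function name and quantile grid are my own):

    ```python
    import numpy as np

    def race_model_violation(rt_color, rt_motion, rt_both, n_quantiles=19):
        """Test Miller's race-model inequality:
        P(RT <= t | both) <= P(RT <= t | color) + P(RT <= t | motion).
        Returns the violation at a grid of time points; any positive value
        means redundant-target RTs are faster than independent channels allow.
        """
        rt_color = np.asarray(rt_color, float)
        rt_motion = np.asarray(rt_motion, float)
        rt_both = np.asarray(rt_both, float)
        # evaluate at quantiles of the pooled RT distribution
        ts = np.quantile(np.concatenate([rt_color, rt_motion, rt_both]),
                         np.linspace(0.05, 0.95, n_quantiles))
        cdf = lambda rts, t: np.mean(rts <= t)  # empirical CDF
        bound = [min(1.0, cdf(rt_color, t) + cdf(rt_motion, t)) for t in ts]
        observed = [cdf(rt_both, t) for t in ts]
        return np.array(observed) - np.array(bound)
    ```

    Positive values at the early quantiles would indicate coactivation, i.e. the kind of integrated processing of color and motion that the study reports.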

    Neurons in Primate Visual Cortex Alternate between Responses to Multiple Stimuli in Their Receptive Field

    A fundamental question concerning the representation of the visual world in our brain is how a cortical cell responds when presented with more than a single stimulus. Traditionally, the firing rate has been assumed to be a weighted average of the firing rates to the individual stimuli (the response-averaging model; Bundesen et al., 2005). Here, we also evaluate a probability-mixing model (Bundesen et al., 2005), in which neurons temporally multiplex the responses to the individual stimuli. We find supportive evidence that most cells presented with a pair of stimuli respond predominantly to one stimulus at a time, rather than with a weighted average response. This provides a mechanism by which the representational identity of multiple stimuli in complex visual scenes can be maintained despite the large receptive fields in higher extrastriate visual cortex in primates. We compare the two models through analysis of data from single cells in the middle temporal visual area (MT) of rhesus monkeys presented with two separate stimuli inside their receptive field, with attention directed to one of the two stimuli or outside the receptive field. The spike trains were modeled by stochastic point processes, including memory effects of past spikes and attentional effects, and statistical model selection between the two models was performed using information-theoretic measures as well as the predictive accuracy of the models. As auxiliary measures, we also tested for uni- or multimodality in interspike interval distributions and performed a correlation analysis of simultaneously recorded pairs of neurons to evaluate population behavior.
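
    A minimal simulation contrasting the two hypotheses (an illustration under simplifying Poisson assumptions; the paper's actual analysis fits point-process models with spike-history and attention terms):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def averaging_model(r1, r2, w, dt=0.001, n_bins=1000):
        """Response averaging: spikes come from a single Poisson process
        whose rate is a weighted average of the two stimulus-driven rates."""
        rate = w * r1 + (1 - w) * r2
        return rng.poisson(rate * dt, n_bins)

    def mixing_model(r1, r2, p, dt=0.001, n_bins=1000, epoch=100):
        """Probability mixing: in each epoch the cell follows stimulus 1's
        rate with probability p, otherwise stimulus 2's rate
        (temporal multiplexing of the two responses)."""
        n_epochs = -(-n_bins // epoch)  # ceiling division
        epoch_rates = np.where(rng.random(n_epochs) < p, r1, r2)
        rate_per_bin = np.repeat(epoch_rates, epoch)[:n_bins]
        return rng.poisson(rate_per_bin * dt)

    # Both models give the same mean rate when p == w, but mixing produces
    # overdispersed spike counts and bimodal epoch-wise rates, which is the
    # kind of signature that point-process model selection can pick up.
    ```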

    Modeling Human Visual Search Performance on Realistic Webpages Using Analytical and Deep Learning Methods

    Modeling visual search not only offers an opportunity to predict the usability of an interface before actually testing it on real users, but also advances scientific understanding of human behavior. In this work, we first conduct a set of analyses on a large-scale dataset of visual search tasks on realistic webpages. We then present a deep neural network that learns to predict the scannability of webpage content, i.e., how easy it is for a user to find a specific target. Our model leverages both heuristic-based features, such as target size, and unstructured features, such as raw image pixels. This approach allows us to model complex interactions that might be involved in a realistic visual search task, which cannot be easily achieved by traditional analytical models. We analyze the model behavior to offer insights into how the salience map learned by the model aligns with human intuition and how the learned semantic representation of each target type relates to its visual search performance.
    Comment: The 2020 CHI Conference on Human Factors in Computing Systems
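
    A sketch of the fusion idea described in the abstract (a minimal PyTorch illustration; the class name, layer sizes, and the count of heuristic features are my own assumptions, not the paper's architecture):

    ```python
    import torch
    import torch.nn as nn

    class ScannabilityNet(nn.Module):
        """Hypothetical sketch: fuse hand-crafted features (e.g. target size,
        position) with features learned from raw page pixels to predict how
        hard a target is to find. All dimensions are illustrative."""
        def __init__(self, n_heuristic=8):
            super().__init__()
            self.encoder = nn.Sequential(   # crude salience encoder over the screenshot
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.head = nn.Sequential(      # fuse learned and heuristic streams
                nn.Linear(32 * 4 * 4 + n_heuristic, 64), nn.ReLU(),
                nn.Linear(64, 1),           # predicted scannability score
            )

        def forward(self, pixels, heuristics):
            z = self.encoder(pixels).flatten(1)
            return self.head(torch.cat([z, heuristics], dim=1))
    ```

    For example, ScannabilityNet()(torch.randn(8, 3, 128, 128), torch.randn(8, 8)) yields one score per page-target pair; the point of the design is that gradients flow through both the learned salience pathway and the hand-crafted feature pathway.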