    Developmental Changes in Natural Viewing Behavior: Bottom-Up and Top-Down Differences between Children, Young Adults and Older Adults

    Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as viewing guided by local image features (color, luminance contrast, etc.), might be prominent but later become overshadowed by top-down processing. Moreover, with the decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. The ability of local feature values to discriminate fixated from control locations dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adults' viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and older adults regarding the effects of active viewing on feature-related viewing: explorativeness correlated negatively with feature-related viewing in children and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing.
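
    The feature-related viewing analysis described above amounts to comparing local image-feature values at fixated locations with values at control locations. Below is a minimal sketch of that kind of fixation-versus-control analysis in Python; the RMS-contrast measure, patch size, and uniform control sampling are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def local_rms_contrast(image, x, y, radius=16):
    """Root-mean-square contrast of a square patch centred on (x, y).

    `image` is a 2-D luminance array; the patch radius is an illustrative
    choice, not a value taken from the study.
    """
    h, w = image.shape
    x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
    y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
    patch = image[y0:y1, x0:x1].astype(float)
    return patch.std() / (patch.mean() + 1e-9)

def fixation_feature_auc(image, fixations, n_controls=1000, seed=0):
    """How well local contrast discriminates fixated from random control
    locations (area under the ROC curve).

    `fixations` is a list of (x, y) pixel coordinates. An AUC of 0.5 means
    fixated locations carry no more contrast than chance; higher values
    indicate feature-related (bottom-up-like) viewing.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    fix_vals = np.array([local_rms_contrast(image, x, y) for x, y in fixations])
    ctrl_vals = np.array([local_rms_contrast(image, rng.integers(0, w),
                                             rng.integers(0, h))
                          for _ in range(n_controls)])
    # AUC computed via the Mann-Whitney U statistic.
    greater = (fix_vals[:, None] > ctrl_vals[None, :]).sum()
    ties = (fix_vals[:, None] == ctrl_vals[None, :]).sum()
    return (greater + 0.5 * ties) / (len(fix_vals) * len(ctrl_vals))
```

    Under this kind of measure, lower discrimination values in older groups would correspond to the age-related drop in feature-related viewing reported above.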

    Studies of visual attention in physics problem solving

    Doctor of Philosophy, Department of Physics, N. Sanjay Rebello
    The work described here represents an effort to understand and influence visual attention while solving physics problems containing a diagram. Our visual system is guided by two types of processes: top-down and bottom-up. The top-down processes are internal and determined by one's prior knowledge and goals. The bottom-up processes are external and determined by features of the visual stimuli, such as color and luminance contrast. When solving physics problems, both top-down and bottom-up processes are active, but to varying degrees. The existence of two types of processes opens several interesting questions for physics education. For example, how do bottom-up processes influence problem solvers in physics? Can we leverage these processes to draw attention to relevant diagram areas and improve problem-solving? In this dissertation we discuss three studies that investigate these open questions and rely on eye movements as a primary data source. We assume that eye movements reflect a person's moment-to-moment cognitive processes, providing a window into one's thinking. In our first study, we compared the way correct and incorrect solvers viewed relevant and novice-like elements in a physics problem diagram. We found that correct solvers spent more time attending to relevant areas, while incorrect solvers spent more time looking at novice-like areas. In our second study, we overlaid these problems with dynamic visual cues to help students redirect their attention. We found that in some cases these visual cues improved problem-solving performance and influenced visual attention. To determine more precisely how the perceptual salience of diagram elements influenced solvers' attention, we conducted a third study in which we manipulated the perceptual salience of the diagram elements via changes in luminance contrast. These changes did not influence participants' answers or visual attention. Instead, similar to our first study, the time spent looking at various areas of the diagram was related to the correctness of an answer. These results suggest that top-down processes dominate while solving physics problems. In sum, the study of visual attention, and of visual cueing in particular, shows that attention is an important component of physics problem-solving and can potentially be leveraged to improve student performance.
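
    The first-study comparison described above reduces to summing fixation durations inside areas of interest (AOIs). Here is a minimal Python sketch under assumed data structures: rectangular AOIs labeled "relevant" and "novice-like", and fixations recorded as (x, y, duration) triples. It is not the dissertation's actual analysis code.

```python
# Dwell-time analysis sketch: total fixation duration inside each AOI.
# AOI coordinates and the fixation record format are assumptions for
# illustration, not values from the dissertation.

aois = {
    "relevant":    (100, 200, 300, 320),   # (x_min, y_min, x_max, y_max) in px
    "novice_like": (400, 150, 560, 300),
}

# Fixations as (x, y, duration_ms) triples.
fixations = [(120, 250, 310), (450, 200, 180), (510, 220, 240), (200, 260, 400)]

def dwell_times(fixations, aois):
    """Total fixation duration (ms) falling inside each AOI."""
    totals = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
    return totals

print(dwell_times(fixations, aois))
```

    Comparing these totals (or their proportions of total viewing time) between correct and incorrect solvers gives the dwell-time contrast reported in the first study.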

    The Role of Dopamine in Anticipatory Pursuit Eye Movements: Insights from Genetic Polymorphisms in Healthy Adults

    There is a long history of eye movement research in patients with psychiatric diseases for which dysfunctions of neurotransmission are considered to be the major pathologic mechanism. However, the neuromodulation of oculomotor control is still poorly understood. We aimed to investigate in particular the impact of dopamine on smooth pursuit eye movements. Systematic variability in dopaminergic transmission due to genetic polymorphisms in healthy subjects offers a noninvasive opportunity to determine functional associations. We measured smooth pursuit in 110 healthy subjects genotyped for two well-documented polymorphisms, the COMT Val158Met polymorphism and the SLC6A3 3´-UTR-VNTR polymorphism. Pursuit paradigms were chosen to particularly assess the ability of the pursuit system to initiate tracking when target motion onset is blanked, reflecting the impact of extraretinal signals. In contrast, when following a fully visible target, sensory retinal signals are available. Our results highlight the crucial functional role of dopamine for anticipatory, but not for sensory-driven, pursuit processes. We found the COMT Val158Met polymorphism specifically associated with anticipatory pursuit parameters, emphasizing the dominant impact of prefrontal dopamine activity on complex oculomotor control. In contrast, modulation of striatal dopamine activity by the SLC6A3 3´-UTR-VNTR polymorphism had no significant functional effect. Though often neglected so far, individual differences in healthy subjects provide a promising approach to uncovering functional mechanisms and can be used as a bridge to understanding deficits in patients.
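
    Anticipatory pursuit of the kind studied here is typically quantified as eye velocity in a short window before the blanked target motion would become visible. Below is a minimal sketch of such a measure, assuming eye-position samples at a fixed rate; the window length and the simple derivative-based velocity estimate are illustrative choices, not the study's parameters.

```python
import numpy as np

def anticipatory_velocity(eye_x_deg, sample_rate_hz, onset_index, window_ms=100):
    """Mean horizontal eye velocity (deg/s) just before target motion onset.

    eye_x_deg   : 1-D array of horizontal eye position in degrees
    onset_index : sample index at which the (blanked) target starts to move
    window_ms   : length of the pre-onset window used for the estimate

    Nonzero velocity in this window reflects anticipatory (extraretinally
    driven) pursuit, since no retinal motion signal is yet available.
    """
    n = int(round(window_ms / 1000.0 * sample_rate_hz))
    start = max(onset_index - n, 1)
    velocity = np.gradient(eye_x_deg, 1.0 / sample_rate_hz)  # deg/s
    return float(velocity[start:onset_index].mean())
```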

    Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli

    In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven (“bottom-up”) and task-dependent (“top-down”) factors remains controversial: Can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye movements in human observers while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free viewing, observers' eye positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability on either side of the stimulus. When the target always occurred on the low-contrast side, observers' eye positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands not only override sensory-driven saliency but actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Whereas the bias was less persistent in free viewing, the overriding and reversal took longer to deploy. Hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast (“oddity”) instead of the bull's-eye (“template”). In contrast to the other conditions, a slight sensory-driven free-viewing bias persisted in this condition. In a fourth experiment, we demonstrate that at known locations template targets are detected faster than oddity targets, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency.
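
    The bias measured in these experiments boils down to the fraction of eye positions landing on the saliency-biased side, tracked over successive fixations after stimulus onset. A minimal sketch of that analysis follows; the trial data layout is an assumption, and the contrast gradient is assumed to favour the right half of the image.

```python
import numpy as np

def side_bias_by_fixation(trials, image_width, n_fixations=5):
    """Fraction of fixations on the high-contrast (here: right) half of the
    image, computed separately for the 1st, 2nd, ..., n-th fixation.

    `trials` is a list of trials, each a list of (x, y) fixation coordinates.
    A value of 0.5 means no bias; values above 0.5 indicate a bias toward
    the high-saliency side, values below 0.5 a reversal toward the
    low-saliency side.
    """
    bias = []
    for k in range(n_fixations):
        xs = [trial[k][0] for trial in trials if len(trial) > k]
        bias.append(float(np.mean([x > image_width / 2 for x in xs])) if xs else np.nan)
    return bias
```

    Computing this curve separately for free viewing and for each search condition gives the immediate override and reversal described above.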

    Separate representations of target and timing cue locations in the supplementary eye fields

    When different stimuli indicate where and when to make an eye movement, the brain areas involved in oculomotor control must selectively plan an eye movement to the stimulus that encodes the target position and also encode the information available from the timing cue. This could pose a challenge to the oculomotor system, since the representation of the timing stimulus location in one brain area might be interpreted by downstream neurons as a competing motor plan. Evidence from diverse sources has suggested that the supplementary eye fields (SEF) play an important role in behavioral timing, so we recorded single-unit activity from SEF to characterize how target and timing cues are encoded in this region. Two monkeys performed a variant of the memory-guided saccade task, in which a timing stimulus was presented at a randomly chosen eccentric location. Many spatially tuned SEF neurons encoded only the location of the target and not the timing stimulus, whereas several other SEF neurons encoded the location of the timing stimulus and not the target. The SEF population therefore encoded the location of each stimulus with largely distinct neuronal subpopulations. For comparison, we recorded a small population of lateral intraparietal (LIP) neurons in the same task. We found that most LIP neurons that encoded the location of the target also encoded the location of the timing stimulus after its presentation, but selectively encoded the intended eye movement plan in advance of saccade initiation. These results suggest that SEF, by conditionally encoding the location of instructional stimuli depending on their meaning, can help identify which movement plan represented in other oculomotor structures, such as LIP, should be selected for the next eye movement.
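
    Separating target-location from timing-cue-location neurons can be framed as asking, for each cell, how much of its spike-count variance is explained by the location of each stimulus. A hypothetical sketch of such a classification using a simple eta-squared tuning index and an arbitrary threshold; this is not the authors' statistical procedure.

```python
import numpy as np

def tuning_index(spike_counts, locations):
    """Fraction of spike-count variance explained by stimulus location
    (eta-squared from a one-way layout): 0 = untuned, 1 = perfectly tuned."""
    counts = np.asarray(spike_counts, dtype=float)
    locs = np.asarray(locations)
    grand_mean = counts.mean()
    ss_total = ((counts - grand_mean) ** 2).sum()
    ss_between = sum(
        (locs == loc).sum() * (counts[locs == loc].mean() - grand_mean) ** 2
        for loc in np.unique(locs)
    )
    return float(ss_between / ss_total) if ss_total > 0 else 0.0

def classify_neuron(target_counts, target_locs, timing_counts, timing_locs,
                    threshold=0.2):
    """Label a neuron by which stimulus location it is tuned to.

    The 0.2 cut-off is an arbitrary illustrative value, not one from the paper.
    """
    t = tuning_index(target_counts, target_locs)
    c = tuning_index(timing_counts, timing_locs)
    if t >= threshold and c < threshold:
        return "target-location"
    if c >= threshold and t < threshold:
        return "timing-cue-location"
    if t >= threshold and c >= threshold:
        return "both"
    return "untuned"
```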

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
    CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
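
    ARTSCENE Search itself is a full cortical model, but its functional core can be caricatured as modulating a bottom-up priority map by two learned contextual priors: a spatial prior from scene gist (where the target tends to occur in scenes like this) and an object prior from viewing history (how target-like each item is). A deliberately toy sketch of that combination follows; it is not the model's actual dynamics, and all maps are assumed inputs.

```python
import numpy as np

def contextual_priority(saliency, spatial_prior, object_match):
    """Toy combination of bottom-up saliency with learned contextual priors.

    saliency      : 2-D bottom-up saliency map of the scene
    spatial_prior : 2-D map of where the target has tended to occur in this
                    scene context (learned over past searches)
    object_match  : 2-D map of how target-like the object at each location is

    The three maps are combined multiplicatively and renormalised; the peak
    of the result would be the next candidate fixation.
    """
    priority = saliency * spatial_prior * object_match
    priority = priority / (priority.sum() + 1e-12)
    next_fixation = np.unravel_index(np.argmax(priority), priority.shape)
    return priority, next_fixation
```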

    Probabilistic modeling of eye movement data during conjunction search via feature-based attention

    Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. Here we engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: Color seems to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings on V4 and frontal eye field (FEF) neurons and predicts the gain modulation of these cells.
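
    A minimal sketch of the kind of feature-biased generative fixation model described above: each display item receives a weight from multiplicative gains on the feature dimensions it shares with the target, fixations are drawn from the resulting distribution, and search length is the number of draws until the target is hit. The gain values, the multiplicative combination, and the memoryless (with-replacement) sampling are illustrative assumptions rather than the fitted model, though the memoryless sampling echoes the finding that memory plays almost no role.

```python
import numpy as np

def fixation_probabilities(items, target, gains):
    """Probability of fixating each item in a conjunction-search display.

    items  : list of dicts with 'color', 'size', and 'orientation' entries
    target : dict with the same keys describing the search target
    gains  : per-dimension attentional gains, e.g.
             {"color": 3.0, "size": 1.5, "orientation": 1.1}
             (the ordering color > size > orientation mirrors the abstract's
             conclusion; the numbers themselves are made up)
    """
    weights = []
    for item in items:
        w = 1.0
        for dim, gain in gains.items():
            if item[dim] == target[dim]:
                w *= gain  # boost items sharing this feature with the target
        weights.append(w)
    weights = np.asarray(weights, dtype=float)
    return weights / weights.sum()

def fixations_to_find_target(items, target, gains, seed=0, max_fixations=100):
    """Number of memoryless fixations drawn until the target item is fixated."""
    rng = np.random.default_rng(seed)
    p = fixation_probabilities(items, target, gains)
    target_index = items.index(target)  # assumes the target item is present in the display
    for k in range(1, max_fixations + 1):
        if rng.choice(len(items), p=p) == target_index:
            return k
    return max_fixations
```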