
    Probabilistic modeling of eye movement data during conjunction search via feature-based attention

    Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. We here engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: Color seems to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings on V4 and frontal eye field (FEF) neurons and predicts the gain modulation of these cells.
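
    The conditional-probability analysis described above can be illustrated with a minimal sketch in Python, assuming a hypothetical representation in which each display item is a dict of its color, size and orientation values and each fixation is the index of the fixated item; the names below are illustrative, not taken from the paper:

        from collections import Counter

        def feature_fixation_probs(fixations, items, target, dims=("color", "size", "orientation")):
            """For each feature dimension, estimate the probability that a fixated item
            shares that feature with the target, pooled over all non-target fixations."""
            counts, total = Counter(), 0
            for idx in fixations:                  # each fixation is the index of the fixated item
                item = items[idx]
                if item is target:
                    continue                       # skip the final, target-directed fixation
                total += 1
                for dim in dims:
                    if item[dim] == target[dim]:
                        counts[dim] += 1
            return {dim: counts[dim] / total for dim in dims} if total else {}

        # Probabilities near 1.0 for "color" but near chance for "orientation" would
        # indicate that color, not orientation, is biased by top-down attention.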

    Age Differences in Gaze Following: Older Adults Follow Gaze More than Younger Adults When Free-Viewing Scenes

    Acknowledgements: We thank Teodor Nikolov, Igne Umbrasaite, Bianca Bianciardi, Sarah Kenny, and Vestina Sciaponaite for assistance with stimuli selection and data collection. Funding details: This research was supported by Grant RG14082 from the Economic and Social Research Council, awarded to Louise H. Phillips, Benjamin W. Tatler and Julie Henry.

    Eye movements in surgery: A literature review

    With recent advances in eye tracking technology, it is now possible to track surgeons’ eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices, developed techniques to assess surgical skill on the basis of eye movements, and examined the role of eye movements in surgical training. We here provide an overview of these studies with a focus on the methodological aspects. We conclude that the different studies of eye movements in surgery suggest that the recording of eye movements may be beneficial both for skill assessment and training purposes, although more research will be needed in this field.

    Eye fixation during multiple object attention is based on a representation of discrete spatial foci

    We often look at and attend to several objects at once. How the brain determines where to point our eyes when we do this is poorly understood. Here we devised a novel paradigm to discriminate between different models of spatial selection guiding fixation. In contrast to standard static attentional tasks where the eye remains fixed at a predefined location, observers selected their own preferred fixation position while they tracked static targets that were arranged in specific geometric configurations and which changed identity over time. Fixations were best predicted by a representation of discrete spatial foci, not a polygonal grouping, simple 2-foci division of attention or a circular spotlight. Moreover, attentional performance was incompatible with serial selection, suggesting that attentional selection and fixation share the same spatial representation. Together with previous findings on fixational microsaccades during covert attention, our results suggest a more nuanced definition of overt vs. covert attention.
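
    As a toy illustration of how such candidate fixation models can be compared, the Python sketch below defines a plain centroid predictor and a weighted combination of discrete per-target foci and scores each by its mean prediction error on observed fixations; both model definitions are simplified stand-ins, not the formulations used in the paper:

        import numpy as np

        def centroid_model(targets, w=None):
            """Single-region predictor (e.g. a grouping or spotlight account): centroid of all targets."""
            return targets.mean(axis=0)

        def discrete_foci_model(targets, w):
            """Discrete-foci predictor: weighted combination of separate per-target foci,
            with one free weight per target (w sums to 1)."""
            return (w[:, None] * targets).sum(axis=0)

        def mean_error(model, trials, w=None):
            """Mean distance between predicted and observed fixations.
            trials: list of (targets: (n, 2) array, fixation: (2,) array) pairs."""
            return float(np.mean([np.linalg.norm(model(t, w) - f) for t, f in trials]))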

    Seeing, sensing, and selection: modeling visual perception in complex environments

    The purpose of this thesis is to investigate human visual perception at the level of eye movements by describing the interaction between vision and action during natural, everyday tasks in a real-world environment. The results of the investigation provide motivation for the development of a biologically-based model of selective visual perception that relies on the relative perceptual conspicuity of certain regions within the field of view. Several experiments were designed and conducted that form the basis for the model. The experiments provide evidence that the visual system is not passive, nor is it general-purpose, but rather it is active and specific, tightly coupled to the requirements of planned behavior and action. The implication for an active and task-specific visual system is that an explicit representation of the environment can be eschewed in favor of a compact representation with large potential savings in computational efficiency. The compact representation is in the form of a topographic map of relative perceptual conspicuity values. Other recent attempts at compact scene representations have focused mainly on low-level maps that code certain salient features of the scene including color, edges, and luminance. This study has found that the low-level maps do not correlate well with subjects' fixation locations; therefore, a map of perceptual conspicuity is presented that incorporates high-level information. The high-level information is in the form of figure/ground segmentation, potential object detection, and task-specific location bias. The resulting model correlates well with the fixation densities of human viewers of natural scenes, and can be used as a pre-processing module for image understanding or intelligent surveillance applications.
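
    A minimal Python sketch of the map combination described here, with illustrative map names only (not the thesis implementation): each low-level and high-level feature map is normalised, the maps are combined with weights, and the result is correlated with an empirical fixation-density map:

        import numpy as np

        def conspicuity_map(low_level, high_level, weights=None):
            """Combine normalised feature maps into one topographic conspicuity map.
            low_level / high_level: dicts of equally sized 2-D arrays, e.g.
            {"color": ..., "edges": ..., "luminance": ...} and
            {"figure_ground": ..., "objects": ..., "task_bias": ...}."""
            maps = {**low_level, **high_level}
            weights = weights or {k: 1.0 for k in maps}

            def norm(m):                               # rescale each map to [0, 1]
                return (m - m.min()) / (m.max() - m.min() + 1e-9)

            combined = sum(weights[k] * norm(m) for k, m in maps.items())
            return combined / combined.max()

        def fixation_correlation(conspicuity, fixation_density):
            """Pearson correlation between the model map and an empirical fixation-density map."""
            return float(np.corrcoef(conspicuity.ravel(), fixation_density.ravel())[0, 1])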

    Sleep loss and change detection in driving scenes

    Driver sleepiness is a significant road safety problem. Sleep-related crashes occur on both urban and rural roads, yet to date driver-sleepiness research has focused on understanding impairment in rural and motorway driving. The ability to detect changes is an attention and awareness skill vital for everyday safe driving. Previous research has demonstrated that person states, such as age or motivation, influence susceptibility to change blindness (i.e., failure or delay in detecting changes). The current work considers whether sleepiness increases the likelihood of change blindness within urban and rural driving contexts. Twenty fully licensed drivers completed a change detection 'flicker' task twice in a counterbalanced design: once following a normal night of sleep (7-8 h) and once following sleep restriction (5 h). Change detection accuracy and response time were recorded while eye movements were continuously tracked. Accuracy was not significantly affected by sleep loss; however, following sleep loss there was some evidence of slowed change detection responses to urban images, but faster responses for rural images. Visual scanning across the images remained consistent between sleep conditions, resulting in no difference in the probability of fixating on the change target. Overall, the results suggest that sleep loss has minimal impact on change detection accuracy and visual scanning for changes in driving scenes. However, a subtle difference in response time to change detection between urban and rural images indicates that change blindness may have implications for sleep-related crashes in more visually complex urban environments. Further research is needed to confirm this finding.
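
    For readers unfamiliar with the paradigm, the Python sketch below shows a typical flicker-trial schedule and a within-subject comparison of response times between sleep conditions; the 240/80 ms timings and the variable names are illustrative assumptions, not values reported in the study:

        from itertools import cycle, islice
        from scipy.stats import ttest_rel

        def flicker_schedule(n_alternations, stim_ms=240, blank_ms=80):
            """Frame schedule for one flicker trial: original image A, blank, changed image
            A', blank, repeated until the observer responds or the trial times out."""
            pattern = [("A", stim_ms), ("blank", blank_ms), ("A_prime", stim_ms), ("blank", blank_ms)]
            return list(islice(cycle(pattern), n_alternations * len(pattern)))

        # Within-subject comparison of mean change-detection RTs (one value per driver
        # and sleep condition), e.g. for the urban images only:
        # t, p = ttest_rel(rt_rested_urban, rt_sleep_restricted_urban)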

    Gaze-based teleprosthetic enables intuitive continuous control of complex robot arm use: Writing & drawing

    Eye tracking is a powerful means of providing assistive technology for people with movement disorders or paralysis and for amputees. We present a highly intuitive eye tracking-controlled robot arm operating in 3-dimensional space, driven by the user's gaze target point, that enables tele-writing and drawing. Usability and intuitiveness were assessed in a “tele” writing experiment with 8 subjects who learned to operate the system within minutes of first-time use. These subjects were naive to the system and the task and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's endpoint. They were instructed to imagine they were writing text with the pen, to look where the pen should go, and to write the letters as fast and as accurately as possible, given a letter-size template. Subjects were able to perform the task with facility and accuracy, and movements of the arm did not interfere with subjects' ability to control their visual attention so as to enable smooth writing. Over five consecutive trials there was a significant decrease from the first to the second trial in the total time used and the total number of commands sent to move the robot arm, but no further improvement thereafter, suggesting that within writing 6 letters subjects had mastered the ability to control the system. Our work demonstrates that eye tracking is a powerful means to control robot arms in closed loop and in real time, outperforming other invasive and non-invasive approaches to Brain-Machine Interfaces in terms of calibration time (<2 minutes), training time (<10 minutes), and interface technology costs. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators, i.e. Brain-Robot Interfaces, and prove useful beyond paralysed and amputee users for the general teleoperation of robots and exoskeletons in human augmentation.
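
    A minimal Python sketch of what one closed-loop update in such a gaze-to-robot pipeline might look like, assuming a hypothetical pre-calibrated homography from camera pixels to whiteboard coordinates; the interface and parameter names are illustrative, not those of the system described above:

        import numpy as np

        def gaze_to_endpoint(gaze_px, homography, pen_offset_m=0.01):
            """Map a 2-D gaze point (pixels in the whiteboard camera image) to a 3-D
            end-effector target on the board plane via a pre-calibrated homography."""
            p = homography @ np.array([gaze_px[0], gaze_px[1], 1.0])
            x, y = p[:2] / p[2]
            return np.array([x, y, pen_offset_m])      # keep the pen tip at the board surface

        def control_step(current_pos, gaze_px, homography, max_step_m=0.01):
            """One closed-loop update: move the arm a bounded step towards the gaze target."""
            target = gaze_to_endpoint(gaze_px, homography)
            delta = target - current_pos
            dist = np.linalg.norm(delta)
            if dist > max_step_m:                      # limit per-update displacement for safety
                delta *= max_step_m / dist
            return current_pos + delta                 # next position command sent to the arm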