
    Eye movements in surgery: A literature review

    With recent advances in eye-tracking technology, it is now possible to track surgeons' eye movements while they are engaged in a surgical task or while surgical residents practice their surgical skills. Several studies have compared the eye movements of surgical experts and novices, developed techniques to assess surgical skill on the basis of eye movements, and examined the role of eye movements in surgical training. Here we provide an overview of these studies, with a focus on methodological aspects. We conclude that the various studies of eye movements in surgery suggest that recording eye movements may be beneficial for both skill assessment and training purposes, although more research is needed in this field.

    The Measurement of Eye Movements in Mild Traumatic Brain Injury: A Structured Review of an Emerging Area

    Mild traumatic brain injury (mTBI), or concussion, occurs following a direct or indirect force to the head that causes a change in brain function. Many neurological signs and symptoms of mTBI can be subtle and transient, and some, such as balance, cognitive or sensory disturbances, can persist beyond the usual recovery timeframe and may predispose to further injury. There is currently no accepted definition or set of diagnostic criteria for mTBI, and therefore no single assessment has been developed or accepted as being able to identify those with an mTBI. Eye-movement assessment may be useful, as specific eye movements and their metrics can be attributed to specific brain regions or functions, and eye-movement control involves a multitude of brain regions. Recently, research has focused on quantitative eye-movement assessments using eye-tracking technology for diagnosing and monitoring the symptoms of an mTBI. However, the approaches taken to objectively measure eye movements vary with respect to instrumentation, protocols and recognition of factors that may influence results, such as cognitive function or basic visual function. This review aimed to examine previous work that has measured eye movements in individuals with mTBI, in order to inform the development of robust and standardized testing protocols. The Medline/PubMed, CINAHL, PsycINFO and Scopus databases were searched. Twenty-two articles, which examined saccades, smooth pursuits, fixations and nystagmus in mTBI compared with controls, met the inclusion/exclusion criteria and were reviewed. Current methodologies for the collection, analysis and interpretation of eye-tracking data in individuals following an mTBI are discussed. In brief, a wide range of eye-movement instruments and outcome measures were reported, but the validity and reliability of devices and metrics were insufficiently reported across studies. Interpretation of outcomes was complicated by poor reporting of demographics and mTBI-related features (e.g., time since injury), and few studies considered the influence that cognitive or visual function may have on eye movements. The reviewed evidence suggests that eye movements are impaired in mTBI, but future research is required to establish these findings accurately and robustly. Standardization and reporting of eye-movement instruments, data collection procedures, processing algorithms and analysis methods are required. Recommendations also include comprehensive reporting of demographics, mTBI-related features and confounding variables.

    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart-glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Owing to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals from human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detecting human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one: by combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy in determining a player's chess expertise, whereas the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.