
    3D head motion, point-of-regard and encoded gaze fixations in real scenes: next-generation portable video-based monocular eye tracking

    Portable eye trackers allow us to see where a subject is looking when performing a natural task with free head and body movements. These eye trackers include headgear containing a camera directed at one of the subject's eyes (the eye camera) and another camera (the scene camera) positioned above the same eye and directed along the subject's line-of-sight. The output video includes the scene video with a crosshair depicting where the subject is looking -- the point-of-regard (POR) -- that is updated for each frame. This video may be the desired final result, or it may be further analyzed to obtain more specific information about the subject's visual strategies. A list of the calculated POR positions in the scene video can also be analyzed. The goals of this project are to expand the information that we can obtain from a portable video-based monocular eye tracker and to minimize the amount of user interaction required to obtain and analyze this information. This work includes offline processing of both the eye and scene videos to obtain robust 2D PORs in scene video frames, identify gaze fixations from these PORs, obtain 3D head motion and ray trace fixations through volumes-of-interest (VOIs) to determine what is being fixated, when and where (3D POR). To avoid the redundancy of ray tracing a 2D POR in every video frame and to group these POR data meaningfully, a fixation-identification algorithm is employed to simplify the long list of 2D POR data into gaze fixations. In order to ray trace these fixations, the 3D motion -- position and orientation over time -- of the scene camera is computed. This camera motion is determined via an iterative structure and motion recovery algorithm that requires a calibrated camera and knowledge of the 3D location of at least four points in the scene (which can be selected from premeasured VOI vertices). The subject's 3D head motion is obtained directly from this camera motion. For the final stage of the algorithm, the 3D locations and dimensions of VOIs in the scene are required. This VOI information in world coordinates is converted to camera coordinates for ray tracing. A representative 2D POR position for each fixation is converted from image coordinates to the same camera coordinate system. Then, a ray is traced from the camera center through this position to determine which (if any) VOI is being fixated and where it is being fixated -- the 3D POR in the world. Results are presented for various real scenes. Novel visualizations of portable eye tracker data created using the results of our algorithm are also presented.
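    The final ray-tracing step of this pipeline can be made concrete with a minimal sketch, assuming a pinhole camera with intrinsics K and a pose (R, t) recovered by structure-and-motion, and VOIs represented as axis-aligned boxes in world coordinates; the function and variable names are illustrative, not the thesis implementation. It back-projects a fixation's 2D POR into a world-space ray from the camera centre and intersects it with each VOI to obtain the 3D POR.

```python
import numpy as np

def fixation_ray_world(por_px, K, R, t):
    """Back-project a 2D POR (pixel coordinates) into a ray in world coordinates.

    Camera model: x_cam = R @ x_world + t, so the camera centre sits at
    -R.T @ t in the world and a camera-frame direction d maps to R.T @ d.
    """
    u, v = por_px
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    origin = -R.T @ t
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)

def ray_aabb(origin, direction, box_min, box_max):
    """Slab test: distance along the ray to an axis-aligned box, or None if missed."""
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (np.asarray(box_min) - origin) / direction
        t2 = (np.asarray(box_max) - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return max(t_near, 0.0) if t_near <= t_far and t_far >= 0 else None

def fixated_voi(por_px, K, R, t, vois):
    """Return (voi_name, 3D point-of-regard in world coordinates) for the nearest hit VOI."""
    origin, direction = fixation_ray_world(por_px, K, R, t)
    best_name, best_point, best_dist = None, None, np.inf
    for name, (box_min, box_max) in vois.items():
        dist = ray_aabb(origin, direction, box_min, box_max)
        if dist is not None and dist < best_dist:
            best_name, best_point, best_dist = name, origin + dist * direction, dist
    return best_name, best_point
```

    Intersecting in world coordinates (rather than converting the VOIs into camera coordinates, as the abstract describes) keeps the boxes axis-aligned and the test simple; the two formulations are equivalent up to the rigid camera transform.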

    Augmented reality and scene examination

    The research presented in this thesis explores the impact of Augmented Reality on human performance and compares this technology with Virtual Reality and with a head-mounted video-feed for a variety of tasks related to scene examination. The motivation for the work was the question of whether Augmented Reality could provide a vehicle for training in crime scene investigation. The Augmented Reality application was developed using fiducial markers in the Windows Presentation Foundation, running on a wearable computer platform; Virtual Reality was developed using the Crytek game engine to present a photo-realistic 3D environment; and the video-feed was provided through a head-mounted webcam. All media were presented through head-mounted displays of similar resolution to provide the sole source of visual information to participants in the experiments. The experiments were designed to progressively increase the amount of mobility required to conduct the search task, i.e., from rotation in the horizontal or vertical plane through to movement around a room. In each experiment, participants were required to find objects and subsequently recall their location. It is concluded that human performance is affected not merely by the medium through which the world is perceived but also by the constraints governing how movement in the world is controlled.

    Eye tracking in optometry: A systematic review

    This systematic review examines the use of eye-tracking devices in optometry, describing their main characteristics, areas of application and the metrics used. Using the PRISMA method, a systematic search was performed of three databases. The search strategy identified 141 reports relevant to this topic, indicating exponential growth in the use of eye trackers in optometry over the past ten years. Eye-tracking technology was applied in at least 12 areas of optometry and rehabilitation, the main ones being optometric device technology and the assessment, treatment, and analysis of ocular disorders. The main devices reported on were infrared light-based and had image capture frequencies of 60 Hz to 2000 Hz. The main metrics mentioned were fixations, saccadic movements, smooth pursuit, microsaccades, and pupil variables. Study quality was sometimes limited by incomplete reporting of the devices used, the study design, the methods used, participants' visual function, and the statistical treatment of data. While there is still a need for more research in this area, eye-tracking devices should be more actively incorporated as a useful tool with both clinical and research applications. This review highlights the robustness this technology offers for obtaining objective information about a person's vision in terms of optometry and visual function, with implications for improving visual health services and our understanding of the vision process.
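    As an illustration of how metrics such as fixations and saccades are typically derived from raw samples (not drawn from the review itself), a common velocity-threshold rule separates fixation samples from saccadic samples given the tracker's sampling rate; the 30 deg/s threshold below is a typical but assumed value.

```python
import numpy as np

def classify_ivt(x_deg, y_deg, fs_hz, velocity_threshold=30.0):
    """Label each gaze sample "fixation" or "saccade" with a velocity-threshold (I-VT) rule.

    x_deg, y_deg : gaze position in degrees of visual angle.
    fs_hz        : tracker sampling rate (the review reports devices from 60 Hz to 2000 Hz).
    velocity_threshold : deg/s above which a sample is treated as saccadic (assumed value).
    """
    vx = np.gradient(np.asarray(x_deg, dtype=float)) * fs_hz  # sample-to-sample velocity, deg/s
    vy = np.gradient(np.asarray(y_deg, dtype=float)) * fs_hz
    speed = np.hypot(vx, vy)
    return np.where(speed > velocity_threshold, "saccade", "fixation")
```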

    EyeSee3D 2.0: Model-based Real-time Analysis of Mobile Eye-Tracking in Static and Dynamic Three-Dimensional Scenes

    Pfeiffer T, Renner P, Pfeiffer-Leßmann N. EyeSee3D 2.0: Model-based Real-time Analysis of Mobile Eye-Tracking in Static and Dynamic Three-Dimensional Scenes. In: Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. New York, NY, USA: ACM Press; 2016: 189-196. With the launch of ultra-portable systems, mobile eye tracking finally has the potential to become mainstream. While eye movements on their own can already be used to identify human activities, such as reading or walking, linking eye movements to objects in the environment provides even deeper insights into human cognitive processing. We present a model-based approach for the identification of fixated objects in three-dimensional environments. For evaluation, we compare the automatic labelling of fixations with that performed by human annotators. In addition, we show how the approach can be extended to support moving targets, such as individual limbs or the faces of human interaction partners. The approach also scales to studies using multiple mobile eye-tracking systems in parallel. The developed system supports real-time attentive systems that make use of eye tracking as a means for indirect or direct human-computer interaction, as well as offline analysis for basic research purposes and usability studies.
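    The core idea of model-based fixation labelling can be sketched simply: cast the gaze ray against proxy volumes attached to the scene model and report the nearest hit. The sketch below assumes each labelled object, static or moving, is approximated by a sphere whose current centre can be queried per frame; the names and the sphere approximation are illustrative, not EyeSee3D's actual geometry representation.

```python
import numpy as np

def gaze_hits_sphere(gaze_origin, gaze_dir, centre, radius):
    """Closest-approach test of the gaze ray against a spherical proxy volume."""
    to_centre = centre - gaze_origin
    along = np.dot(to_centre, gaze_dir)
    if along < 0:                       # target is behind the observer
        return False
    miss = to_centre - along * gaze_dir
    return np.linalg.norm(miss) <= radius

def label_fixation(gaze_origin, gaze_dir, scene_objects):
    """Return the name of the nearest proxy volume hit by the gaze ray, or None.

    scene_objects maps a label to (centre_fn, radius), where centre_fn() returns
    the object's current position; moving targets such as a partner's face are
    handled simply by updating what centre_fn() reports each frame.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_name, best_dist = None, np.inf
    for name, (centre_fn, radius) in scene_objects.items():
        centre = np.asarray(centre_fn(), dtype=float)
        if gaze_hits_sphere(gaze_origin, gaze_dir, centre, radius):
            dist = np.linalg.norm(centre - gaze_origin)
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name
```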

    Development and evaluation of a virtual environment to assess cycling hazard perception skills

    Safe cycling requires situational awareness to identify and perceive hazards in the environment in order to react to and avoid dangerous situations. Concurrently, attending to external distractions leads to a failure to identify hazards or to respond appropriately in a time-constrained manner. Hazard perception training can enhance the ability to identify and react to potential dangers while cycling. Although cycling on the road in the presence of driving cars provides an excellent opportunity to develop and evaluate hazard perception skills, there are obvious ethical and practical risks, requiring extensive resources to facilitate safety, particularly when involving children. Therefore, we developed a Cycling and Hazard Perception virtual reality (VR) simulator (CHP-VR simulator) to create a safe environment in which hazard perception can be evaluated and/or trained in a real-time setting. The player interacts with the virtual environment through a stationary bike, where sensors on the bike transfer the player's position and actions (speed and road positioning) into the virtual environment. A VR headset provides a real-world experience for the player, and a procedural content generation (PCG) algorithm enables the generation of playable artifacts. Pilot data from experienced adult cyclists were collected to develop and evaluate the VR simulator by measuring gaze behavior both in VR and in situ. A comparable scene (cycling past a parked bus) was used in VR and in situ. In this scenario, cyclists fixated 20% longer on the bus in VR than in situ. However, agreement analysis showed that the mean differences fell within the 95% confidence intervals. The observed differences were likely attributable to a lower number of concurrently appearing elements (i.e., cars) in the VR environment compared with in situ. Future work will explore feasibility testing in young children by increasing assets and incorporating a game scoring system to direct attention to overt and covert hazards.
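    A rough sketch of how the stationary-bike sensor readings (speed and steering) could be turned into rider motion in the virtual environment is given below; it uses simple bicycle kinematics and hypothetical names, not the CHP-VR simulator's actual interface.

```python
import math

def update_rider_pose(x, z, heading_rad, wheel_speed_mps, steer_rad, dt):
    """Advance the virtual rider by one frame from stationary-bike sensor readings.

    A simple bicycle-kinematics step: wheel speed drives forward motion and the
    steering angle turns the heading. The names and the 1 m wheelbase are
    assumptions for illustration only.
    """
    wheelbase_m = 1.0
    heading_rad += (wheel_speed_mps / wheelbase_m) * math.tan(steer_rad) * dt
    x += wheel_speed_mps * math.cos(heading_rad) * dt
    z += wheel_speed_mps * math.sin(heading_rad) * dt
    return x, z, heading_rad
```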

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics, and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arise for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
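    One common way to turn gaze into a touchless selection signal, of the kind the gaze-guided scrub nurse scenario relies on, is dwell-time selection: an action is triggered once gaze has rested on the same target for a set period. The sketch below is illustrative only; the threshold, class name and callback are assumptions, not the framework's API.

```python
import time

class DwellSelector:
    """Trigger an action once gaze has rested on the same target for a dwell period.

    A minimal sketch of gaze-based, touchless selection; the 1 s threshold and
    the callback are illustrative, not the thesis framework's actual interface.
    """

    def __init__(self, dwell_s=1.0, on_select=print):
        self.dwell_s = dwell_s
        self.on_select = on_select
        self._target = None
        self._since = None

    def update(self, fixated_target, now=None):
        """Feed the currently fixated target (or None) once per gaze sample or frame."""
        now = time.monotonic() if now is None else now
        if fixated_target != self._target:
            self._target, self._since = fixated_target, now
            return
        if self._target is not None and now - self._since >= self.dwell_s:
            self.on_select(self._target)   # e.g. request delivery of this instrument
            self._since = now              # re-arm so the same target can be selected again
```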