79 research outputs found

    Models for gaze tracking systems

    One of the most confusing aspects encountered when entering gaze tracking technology is the wide variety of hardware configurations among systems that all address the same problem: determining the point the subject is looking at. The calibration process generally allows nonintrusive trackers, built on quite different hardware and image features, to be adjusted to the subject. The drawback of this simple procedure is that, while it lets the system work properly, it comes at the expense of control over the intrinsic behavior of the tracker. The objective of this article is to overcome this obstacle by exploring more deeply the elements of a video-oculographic system, that is, the eye, camera, lighting, and so forth, from a purely mathematical and geometrical point of view. The main contribution is to determine the minimum number of hardware elements and image features needed to compute the point the subject is looking at. A model based on the pupil contour and multiple lighting has been constructed and successfully tested with real subjects. In addition, the theoretical aspects of video-oculographic systems have been thoroughly reviewed in order to build a basis for further studies.
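A pupil-contour model of this kind typically starts by fitting an ellipse (a general conic) to the detected contour points and recovering its centre. The sketch below is an illustrative least-squares fit in NumPy, not the authors' implementation; the function name and the synthetic contour are assumptions.

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to contour points; returns the ellipse centre (where the gradient vanishes)."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)[0]
    # The centre solves  [2a b; b 2c] @ [x; y] = [-d; -e]
    M = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.solve(M, [-d, -e])

# Synthetic pupil contour: ellipse centred at (3, 2), semi-axes 2 and 1.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
cx, cy = fit_ellipse_center(3 + 2 * np.cos(t), 2 + np.sin(t))
```

The `= 1` normalisation is valid whenever the contour does not pass through the image origin, which holds for any realistic pupil position.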

    Geometry Issues of Gaze Estimation

    Estimation of a focused object using a corneal surface image for eye-based interaction

    Researchers are considering the use of eye tracking in head-mounted camera systems, such as Google’s Project Glass. Typical methods require detailed calibration in advance, but long periods of use disrupt the calibration between the eye and the scene camera. In addition, the focused object cannot always be identified even when the point-of-regard is estimated with a portable eye-tracker. Therefore, we propose a novel method for estimating the object a user is focused upon, in which an eye camera captures the reflection on the corneal surface. Eye and environment information can be extracted from the corneal surface image simultaneously. We use inverse ray tracing to rectify the reflected image and a scale-invariant feature transform to identify the object at which the point-of-regard is located. Unwarped images can also be generated continuously from corneal surface images. We consider that our proposed method could be applied to a guidance system, and we confirmed the feasibility of this application in experiments that estimated the focused object and the point-of-regard.
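The core geometric step in inverse ray tracing of this kind is reflecting a camera ray off a spherical corneal model to find which environment direction produced each pixel. A minimal NumPy sketch under that spherical-mirror assumption (the function name and the sphere parameters are illustrative, not taken from the paper):

```python
import numpy as np

def reflect_off_sphere(origin, direction, center, radius):
    """Trace a ray to a sphere (the corneal model); return the hit point
    and the mirror-reflected direction, or None if the ray misses."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2.0 * (d @ oc)
    c = oc @ oc - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the cornea
    t = (-b - np.sqrt(disc)) / 2.0       # nearer intersection
    hit = origin + t * d
    n = (hit - center) / radius          # outward surface normal
    refl = d - 2.0 * (d @ n) * n         # mirror-reflection law
    return hit, refl

# Camera at the origin looking head-on at a unit sphere centred at z = 5.
hit, refl = reflect_off_sphere(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                               np.array([0.0, 0.0, 5.0]), 1.0)
```

A head-on ray strikes the nearest point of the sphere and reflects straight back, which is a quick sanity check on the reflection law.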

    Using Priors to Improve Head-Mounted Eye Trackers in Sports

    Objective measurement of nine gaze-directions using an eye-tracking device

    Purpose: To investigate the usefulness and efficacy of a novel eye-tracking device that can objectively measure nine gaze-directions. Methods: We measured each of the nine gaze-directions subjectively, using a conventional Hess screen test, and objectively, using the nine gaze-direction measuring device, and determined the correlation, addition error, and proportional error. We obtained two consecutive measurements of the nine gaze-directions using the newly developed device in healthy young people with exophoria and investigated the reproducibility of the measurements. We further measured the nine gaze-directions using a Hess screen test and the newly developed device in three subjects with cover test-based strabismus and compared the results. Results: We observed that the objective measurements obtained with the newly developed gaze-direction measuring device had significant correlation and addition error compared to the conventional subjective method, and we found no proportional error. These measurements had good reproducibility. Conclusion: The novel device can be used to observe delayed eye movement associated with limited eye movement in the affected eye, as well as the associated excessive movement of the healthy eye in patients with strabismus, similar to the Hess screen test. This is a useful device that can provide objective measurements of nine gaze-directions.
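The agreement analysis described here (correlation, addition error, proportional error) can be sketched as a Bland-Altman-style comparison of two measurement methods. The function below is an illustrative NumPy version, not the study's actual analysis code; the variable names are assumptions.

```python
import numpy as np

def agreement_stats(ref, new):
    """Compare two measurement methods: Pearson correlation, fixed
    (addition) error = mean difference, and proportional error =
    slope of the differences regressed on the pairwise means."""
    ref, new = np.asarray(ref, float), np.asarray(new, float)
    r = np.corrcoef(ref, new)[0, 1]
    diff = new - ref
    fixed = diff.mean()                    # addition error
    means = (new + ref) / 2.0
    slope = np.polyfit(means, diff, 1)[0]  # proportional error
    return r, fixed, slope

# A device that reads a constant 2 degrees high shows perfect correlation,
# a fixed error of 2, and no proportional error.
ref = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
r, fixed, slope = agreement_stats(ref, ref + 2.0)
```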

    AUTOMATIC PERFORMANCE LEVEL ASSESSMENT IN MINIMALLY INVASIVE SURGERY USING COORDINATED SENSORS AND COMPOSITE METRICS

    Skills assessment in Minimally Invasive Surgery (MIS) has been a challenge for training centers for a long time. The emerging maturity of camera-based systems has the potential to transform problems into solutions in many different areas, including MIS. The current evaluation techniques for assessing the performance of surgeons and trainees are direct observation, global assessments, and checklists. These techniques are mostly subjective and can, therefore, involve a margin of bias. The current automated approaches are all implemented using mechanical or electromagnetic sensors, which suffer limitations and influence the surgeon’s motion. Thus, evaluating the skills of MIS surgeons and trainees objectively has become an increasing concern. In this work, we integrate and coordinate multiple camera sensors to assess the performance of MIS trainees and surgeons. This study aims at developing an objective data-driven assessment that takes advantage of multiple coordinated sensors. The technical framework for the study is a synchronized network of sensors that captures large sets of measures from the training environment. The measures are then processed to produce a reliable set of individual and composite metrics, coordinated in time, that suggest patterns of skill development. The sensors are non-invasive, real-time, and coordinated over many cues, such as eye movement, external views of the body and instruments, and internal views of the operative field. The platform is validated by a case study of 17 subjects and 70 sessions. The results show that the platform output is highly accurate and reliable in detecting patterns of skill development and predicting the skill level of the trainees.

    Just Gaze and Wave: Exploring the Use of Gaze and Gestures for Shoulder-surfing Resilient Authentication

    Eye-gaze and mid-air gestures are promising for resisting various types of side-channel attacks during authentication. However, to date, a comparison of the different authentication modalities is missing. We investigate multiple authentication mechanisms that leverage gestures, eye gaze, and a multimodal combination of them, and study their resilience to shoulder surfing. To this end, we report on our implementation of three schemes and results from usability and security evaluations in which we also experimented with fixed and randomized layouts. We found that the gaze-based approach outperforms the other schemes in terms of input time, error rate, perceived workload, and resistance to observation attacks, and that randomizing the layout does not improve observation resistance enough to warrant the reduced usability. Our work further underlines the significance of replicating previous eye tracking studies using today's sensors, as we show significant improvement over similar previously introduced gaze-based authentication systems.

    Eye tracking and avatar-mediated communication in immersive collaborative virtual environments

    The research presented in this thesis concerns the use of eye tracking to both enhance and understand avatar-mediated communication (AMC) performed by users of immersive collaborative virtual environment (ICVE) systems. AMC, in which users are embodied by graphical humanoids within a shared virtual environment (VE), is rapidly emerging as a prevalent and popular form of remote interaction. However, compared with video-mediated communication (VMC), which transmits interactants’ actual appearance and behaviour, AMC fails to capture, transmit, and display many channels of nonverbal communication (NVC). This is a significant hindrance to the medium’s ability to support rich interpersonal telecommunication. In particular, oculesics (the communicative properties of the eyes), including gaze, blinking, and pupil dilation, are central nonverbal cues during unmediated social interaction. This research explores the interactive and analytical application of eye tracking to drive the oculesic animation of avatars during real-time communication, and as the primary method of experimental data collection and analysis, respectively. Three distinct but interrelated questions are addressed. First, the thesis considers the degree to which quality of communication may be improved through the use of eye tracking, to increase the nonverbal, oculesic, information transmitted during AMC. Second, the research asks whether users engaged in AMC behave and respond in a socially realistic manner in comparison with VMC. Finally, the degree to which behavioural simulations of oculesics can both enhance the realism of virtual humanoids and complement tracked behaviour in AMC is considered. These research questions were addressed through a series of telecommunication experiments covering scenarios common to computer supported cooperative work (CSCW), and a further series of experiments on behavioural modelling for virtual humanoids.
The first, exploratory, telecommunication experiment compared AMC with VMC in a three-party conversational scenario. Results indicated that users employ gaze similarly when faced with avatar and video representations of fellow interactants, and demonstrated how interaction is influenced by the technical characteristics and limitations of a medium. The second telecommunication experiment investigated the impact of varying methods of avatar gaze control on quality of communication during object-focused multiparty AMC. The main finding of the experiment was that quality of communication is reduced when avatars demonstrate misleading gaze behaviour. The final telecommunication study investigated truthful and deceptive dyadic interaction in AMC and VMC over two closely-related experiments. Results from the first experiment indicated that users demonstrate similar oculesic behaviour and response in both AMC and VMC, but that psychological arousal is greater following video-based interaction. Results from the second experiment found that the use of eye tracking to drive the oculesic behaviour of avatars during AMC increased the richness of NVC to the extent that more accurate estimation of embodied users’ states of veracity was enabled. Rather than directly investigating AMC, the second series of experiments addressed behavioural modelling of oculesics for virtual humanoids. Results from these experiments indicated that oculesic characteristics are highly influential to the perceived realism of virtual humanoids, and that behavioural models are able to complement the use of eye tracking in AMC. The research presented in this thesis explores AMC and eye tracking over a range of collaborative and perceptual studies. The overall conclusion is that eye tracking is able to enhance AMC towards a richer medium for interpersonal telecommunication, and that users’ behaviour in AMC is no less socially ‘real’ than that demonstrated in VMC.
However, there are distinct differences between the two communication mediums, and matching the characteristics of a planned communication with those of the medium itself is critical.