4 research outputs found

    On high-precision chessboard detection on static scene videos from mobile eye-tracking devices

    Eye-tracking analysis requires annotating the scene video with information about the target of the gaze. We develop a technique for automatic high-precision scene annotation for mobile head-mounted eye-trackers. The solution combines computer-vision techniques for scene recognition, 3-D modeling of the recognized objects, and head-movement compensation using the gyroscope and accelerometer data provided by the eye-tracking device. In this paper we address the problem of recognizing a chessboard, although the approach may be applied to other situations with static scenes.
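    The scene-recognition step lends itself to a compact illustration. Below is a minimal sketch, in Python with OpenCV, of detecting a chessboard in frames of a scene video; the grid size, the video filename, and the sub-pixel refinement parameters are assumptions for illustration, not details taken from the paper.

        # Minimal chessboard-detection sketch (assumed parameters, not the paper's code).
        import cv2

        PATTERN = (7, 7)  # inner corners of a standard 8x8 board (assumed grid size)

        cap = cv2.VideoCapture("scene_video.mp4")  # hypothetical scene-camera recording
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                # Refine corner locations to sub-pixel precision before any 3-D model fitting.
                corners = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1),
                    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        cap.release()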

    Mobile gaze tracking system for outdoor walking behavioral studies

    Most gaze-tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMUs) to track gaze orientation relative to the heading direction during outdoor walking. Head movements relative to the body are measured as the difference in output between the IMUs on the head and on the body trunk. Using the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long gaze recordings (up to 18 min). Comparison with ground truth revealed an average error of 3.38° while walking along straight segments. The range of gaze scanning during walking is frequently about one order of magnitude larger than this estimation error. The proposed method was also tested on real cases of natural walking and was found suitable for evaluating gaze behavior in outdoor environments.
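    As an illustration of the IMU-pair idea, the sketch below (Python with NumPy and SciPy) computes the head orientation relative to the trunk as the relative rotation between the two IMU quaternions and adds a horizontal eye-in-head angle. The function name, the quaternion ordering, and the example values are assumptions for illustration, not the authors' implementation.

        # Relative head-vs-trunk orientation from two IMU quaternions (illustrative only).
        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def gaze_relative_to_heading(q_head, q_trunk, eye_yaw_deg):
            """Yaw of gaze relative to the trunk (walking) heading, in degrees."""
            head = R.from_quat(q_head)    # head-mounted IMU orientation, (x, y, z, w)
            trunk = R.from_quat(q_trunk)  # trunk-mounted IMU orientation, (x, y, z, w)
            head_in_trunk = trunk.inv() * head   # shared disturbances largely cancel here
            head_yaw = head_in_trunk.as_euler("zyx", degrees=True)[0]
            return head_yaw + eye_yaw_deg        # add the horizontal eye-in-head angle

        # Made-up readings: head yawed roughly 30 degrees relative to the trunk, eyes 5 degrees further.
        print(gaze_relative_to_heading([0, 0, 0.259, 0.966], [0, 0, 0, 1], 5.0))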

    Automatic Analysis of 3D Gaze Coordinates on Scene Objects Using Data From Eye-Tracking and Motion Capture Systems

    Essig K, Prinzhorn D, Maycock J, Dornbusch D, Ritter H, Schack T. Automatic Analysis of 3D Gaze Coordinates on Scene Objects Using Data From Eye-Tracking and Motion Capture Systems. In: Proceedings of the 2012 Symposium on Eye-Tracking Research and Applications (ETRA 2012), Santa Barbara, California, USA.

    We present a method that removes the need for manual annotation of eye-movement data. Our software produces object- and subject-specific results for various eye-tracking parameters in complex 3D scenes. We synchronized a monocular mobile eye-tracking system with a VICON motion-capture system. Combining the data of both systems, we calculate and visualize a 3D gaze vector within the VICON coordinate frame of reference. By placing markers on objects and subjects in the scene, we can automatically compute how many times and where fixations occurred. We evaluated our approach by comparing its output for a calibration task and a grasping task (with three objects: cup, stapler, sphere) against the average results of manual annotation. Preliminary data reveal that the program differs from the average manual annotation by only approximately 3 percent for the calibration procedure, where the gaze is directed in turn towards five different markers on a board, without jumps between them. For the more complicated grasping videos, the results depend on object size: for bigger objects (i.e., the sphere) the differences in the number of fixations are very small, and the cumulative fixation duration deviates by less than 16 percent (or 950 ms). For smaller objects, where more saccades land near object boundaries, the differences are larger. On the one hand, manual annotation inevitably becomes more subjective; on the other hand, the two methods analyze the 3D scene from slightly different perspectives (i.e., the center of the eyeball versus the position of the scene camera). Even then, the automatic results come close to those of manual annotation (the average differences are 984 ms and 399 ms for the object and the hand, respectively) and reflect the fixation distribution when interacting with objects in 3D scenes. Thus, eye-hand coordination experiments with various objects in complex 3D scenes, especially with bigger and moving objects, can now be realized quickly and effectively. Our approach allows recording eye, head, and grasping movements while subjects interact with objects or systems. This lets us study the relation between gaze and hand movements when people grasp and manipulate objects, as well as free movements in normal gaze behavior. The automatic analysis of gaze and movement data in complex 3D scenes can be applied to a variety of research domains, e.g., human-computer interaction, virtual reality, or grasping and gesture research.
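    One way to picture the marker-based assignment described above is to attribute each fixation to the object whose marker lies closest to the 3D gaze ray expressed in the motion-capture frame. The Python sketch below does that under assumed names, an assumed 5 cm distance cutoff, and made-up marker coordinates; it is not the authors' implementation.

        # Assign a fixation to the scene object whose marker is nearest to the gaze ray.
        import numpy as np

        def fixated_object(gaze_origin, gaze_dir, markers, max_dist=0.05):
            """Label of the marker closest to the gaze ray, or None (assumed 5 cm cutoff)."""
            d = gaze_dir / np.linalg.norm(gaze_dir)
            best_label, best_dist = None, max_dist
            for label, pos in markers.items():
                v = np.asarray(pos) - gaze_origin
                t = max(np.dot(v, d), 0.0)          # closest-point parameter along the ray
                dist = np.linalg.norm(v - t * d)    # perpendicular distance marker-to-ray
                if dist < best_dist:
                    best_label, best_dist = label, dist
            return best_label

        # Made-up marker positions (metres) in the motion-capture frame.
        markers = {"cup": [0.4, 0.1, 0.9], "stapler": [0.5, -0.2, 0.9], "sphere": [0.3, 0.3, 1.0]}
        print(fixated_object(np.array([0.0, 0.0, 1.5]), np.array([0.4, 0.1, -0.6]), markers))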

    Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios

    This thesis studies eye-based user interfaces that integrate information about the user's perceptual focus of attention into multimodal systems to enrich interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered across the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: automotive user interfaces and human-robot collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent-environment applications. These use cases cover a wide spectrum of applications, from spatial interaction with a rapidly changing environment from within a moving vehicle to mixed-reality interaction between teams of humans and robots.
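    One small building block implied by the peripheral-vision work can be sketched as follows: classifying whether a target direction falls in the foveal or peripheral field by its angular offset from the current gaze direction. The Python sketch below uses assumed 2° and 60° limits purely for illustration; the thesis' 3D model of peripheral vision is certainly more elaborate.

        # Classify a target direction as foveal, peripheral, or outside the visual field.
        import numpy as np

        def visual_field_region(gaze_dir, target_dir, foveal_deg=2.0, peripheral_deg=60.0):
            g = gaze_dir / np.linalg.norm(gaze_dir)
            t = target_dir / np.linalg.norm(target_dir)
            angle = np.degrees(np.arccos(np.clip(np.dot(g, t), -1.0, 1.0)))
            if angle <= foveal_deg:
                return "foveal"
            return "peripheral" if angle <= peripheral_deg else "outside"

        # A target about 17 degrees off the gaze axis lands in the peripheral region.
        print(visual_field_region(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.0, 1.0])))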