
    GazeDrone: Mobile Eye-Based Interaction in Public Space Without Augmenting the User

    Gaze interaction holds great promise for seamless human-computer interaction. At the same time, current wearable mobile eye trackers require user augmentation that negatively impacts natural user behavior, while remote trackers require users to position themselves within a confined tracking range. We present GazeDrone, the first system that combines a camera-equipped aerial drone with a computational method to detect sidelong glances for spontaneous (calibration-free) gaze-based interaction with surrounding pervasive systems (e.g., public displays). GazeDrone does not require augmenting each user with on-body sensors and allows interaction from arbitrary positions, even while moving. We demonstrate that drone-supported gaze interaction is feasible and accurate for certain movement types. It is well-perceived by users, in particular while interacting from a fixed position as well as while moving orthogonally or diagonally to a display. We present design implications and discuss opportunities and challenges for drone-supported gaze interaction in public space.
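    The abstract describes, but does not spell out, the sidelong-glance detector. As a rough illustration of the underlying geometry (not GazeDrone's actual method), a glance direction can be estimated from 2D eye landmarks by projecting the pupil center onto the axis between the eye corners; the landmark detector and the 0.2 threshold below are assumptions made for this sketch.

```python
import numpy as np

def glance_direction(pupil, corner_left, corner_right, threshold=0.2):
    """Classify a sidelong glance from 2D eye landmarks (pixel coordinates).

    corner_left / corner_right are the eye corners as seen in the image
    (e.g., from an off-the-shelf facial-landmark detector); returns
    'left', 'right', or 'center'.
    """
    cl = np.asarray(corner_left, dtype=float)
    cr = np.asarray(corner_right, dtype=float)
    p = np.asarray(pupil, dtype=float)
    axis = cr - cl
    width_sq = axis @ axis
    if width_sq < 1e-9:
        raise ValueError("degenerate eye landmarks")
    # Normalized pupil position along the eye axis: 0 at the left corner,
    # 1 at the right corner, 0.5 when the pupil is centered.
    t = (p - cl) @ axis / width_sq
    if t < 0.5 - threshold:
        return "left"
    if t > 0.5 + threshold:
        return "right"
    return "center"

# Pupil shifted toward the right corner -> classified as a rightward glance.
print(glance_direction(pupil=(64, 40), corner_left=(30, 42), corner_right=(70, 40)))
```

    Because the measure is a ratio of distances within the eye region, it does not depend on the drone's distance to the user, which fits the calibration-free interaction the paper targets.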

    Towards high-accuracy augmented reality GIS for architecture and geo-engineering

    Architecture and geo-engineering are application domains where professionals must make critical decisions and require high-precision tools to assist them in their daily work. Augmented Reality (AR) shows great potential for these professionals by making it easier to associate the abstract 2D drawings and 3D models representing the infrastructure under review with their perception of that infrastructure in reality. Visualization tools based on AR overlay the virtual models onto reality in the user's field of view. However, the architecture and geo-engineering context requires high-accuracy, real-time positioning from these AR systems. This is not a trivial task, especially in urban environments or on construction sites, where the surroundings may be crowded and highly dynamic. This project investigates the accuracy requirements of mobile AR GIS as well as the main challenges to address when tackling high-accuracy AR based on omnidirectional panoramas.
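    To make the accuracy requirement concrete: under a standard pinhole camera model (a generic illustration, not this project's positioning pipeline), even a 10 cm positioning error produces a clearly visible misregistration of the overlaid model. The focal length and distances below are illustrative assumptions.

```python
import numpy as np

def project(point_world, R, t, f, c):
    """Pinhole projection of a world point into the image (pixels).

    R (3x3 rotation) and t (3,) give the world-to-camera pose, f is the
    focal length in pixels, and c is the principal point.
    """
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:
        raise ValueError("point behind camera")
    return f * p_cam[:2] / p_cam[2] + c

R = np.eye(3)
t = np.zeros(3)
f, c = 1400.0, np.array([960.0, 540.0])
corner = np.array([2.0, 1.0, 20.0])   # a building corner 20 m away

exact = project(corner, R, t, f, c)
# A 10 cm lateral error in the estimated camera position...
shifted = project(corner, R, t + np.array([0.10, 0.0, 0.0]), f, c)
print(np.linalg.norm(exact - shifted))  # ~7 px of overlay drift at 20 m
```

    The pixel drift scales roughly as f * error / depth, so it grows as the annotated structure gets closer, which is why centimeter-level, real-time positioning matters on construction sites.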

    Towards End-to-end Video-based Eye-Tracking

    Estimating eye gaze from images alone is a challenging task, in large part due to unobservable person-specific factors. Achieving high accuracy typically requires labeled data from test users, which may not be attainable in real applications. We observe that there exists a strong relationship between what users are looking at and the appearance of the user's eyes. Based on this observation, we propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships. Our video dataset consists of time-synchronized screen recordings, user-facing camera views, and eye gaze data, which allows for new benchmarks in temporal gaze tracking as well as label-free refinement of gaze. Importantly, we demonstrate that fusing information from the visual stimuli with the eye images can achieve performance similar to literature-reported figures obtained through supervised personalization. Our final method yields significant performance improvements on our proposed EVE dataset, with up to a 28% improvement in point-of-gaze estimates (resulting in an angular error of 2.49 degrees), paving the way towards high-accuracy screen-based eye tracking purely from webcam sensors. The dataset and reference source code are available at https://ait.ethz.ch/projects/2020/EVE. Comment: Accepted at ECCV 2020.
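    Angular error, the metric quoted above, is conventionally the angle between the predicted and ground-truth gaze rays from the eye to the screen. The sketch below shows one common way to compute it; the pixels-per-millimetre value and eye position are made-up assumptions, and this is not the EVE reference code.

```python
import numpy as np

def angular_error_deg(gaze_pred, gaze_true):
    """Angle in degrees between two 3D gaze direction vectors."""
    a, b = np.asarray(gaze_pred, float), np.asarray(gaze_true, float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def pog_to_direction(pog_px, eye_pos_mm, px_per_mm=3.78):
    """Ray from the eye to an on-screen point of gaze (screen plane z = 0).

    pog_px is the (x, y) point of gaze in pixels; eye_pos_mm is the eye
    position in millimetres in screen coordinates (z > 0 in front).
    """
    pog_mm = np.array([pog_px[0] / px_per_mm, pog_px[1] / px_per_mm, 0.0])
    return pog_mm - np.asarray(eye_pos_mm, float)

eye = np.array([170.0, 120.0, 550.0])         # eye ~55 cm from the screen
true_dir = pog_to_direction((700, 400), eye)
pred_dir = pog_to_direction((760, 430), eye)  # slightly-off PoG estimate
print(f"{angular_error_deg(pred_dir, true_dir):.2f} deg")
```

    This is also why point-of-gaze error in pixels and angular error in degrees are interchangeable once the screen geometry and head distance are known.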

    Learning to Personalize in Appearance-Based Gaze Tracking

    Personal variations severely limit the performance of appearance-based gaze tracking. Adapting to these variations using standard neural network model adaptation methods is difficult. The problems range from overfitting, due to small amounts of training data, to underfitting, due to restrictive model architectures. We tackle these problems by introducing the SPatial Adaptive GaZe Estimator (SPAZE). By modeling personal variations as a low-dimensional latent parameter space, SPAZE provides just enough adaptability to capture the range of personal variations without being prone to overfitting. Calibrating SPAZE for a new person reduces to solving a small optimization problem. SPAZE achieves an error of 2.70 degrees with 9 calibration samples on MPIIGaze, improving on the state of the art by 14%. We contribute to gaze tracking research by empirically showing that personal variations are well modeled as a 3-dimensional latent parameter space for each eye. We show that this low dimensionality is expected by examining model-based approaches to gaze tracking. We also show that accurate head-pose-free gaze tracking is possible.
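    Although the paper's implementation is not reproduced here, the calibration idea can be sketched as follows: the gaze network's weights stay frozen, and only a low-dimensional personal latent vector (3 parameters per eye in the paper) is fitted to a handful of labeled samples. The tiny backbone below is a stand-in, not the actual SPAZE architecture.

```python
import torch

class GazeNet(torch.nn.Module):
    """Toy gaze estimator conditioned on a personal latent vector z."""

    def __init__(self, feat_dim=128, z_dim=3):
        super().__init__()
        self.backbone = torch.nn.Sequential(   # stand-in for a real CNN
            torch.nn.Flatten(),
            torch.nn.Linear(36 * 60, feat_dim),
            torch.nn.ReLU())
        self.head = torch.nn.Linear(feat_dim + z_dim, 2)  # yaw, pitch

    def forward(self, eye_images, z):
        feat = self.backbone(eye_images)
        z = z.expand(feat.shape[0], -1)        # share z across the batch
        return self.head(torch.cat([feat, z], dim=1))

def calibrate(model, images, gaze_targets, z_dim=3, steps=200, lr=0.05):
    """Fit only the personal latent z on a few calibration samples."""
    for p in model.parameters():               # freeze all network weights
        p.requires_grad_(False)
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(images, z), gaze_targets)
        loss.backward()
        opt.step()
    return z.detach()

model = GazeNet()
images = torch.randn(9, 1, 36, 60)   # e.g., 9 calibration eye patches
targets = torch.randn(9, 2)          # corresponding gaze angles
z_person = calibrate(model, images, targets)
```

    Because only z_dim parameters are optimized per eye, 9 samples are enough to constrain the fit, which matches the abstract's claim that calibration reduces to a small optimization problem.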