611 research outputs found

    Multi-user Gaze-based Interaction Techniques on Collaborative Touchscreens

    Eye-gaze enables implicit, fast, and hands-free input for a variety of use cases, yet the majority of techniques focus on single-user contexts. In this work, we present an exploration of gaze techniques for users interacting together on the same surface. We explore interaction concepts that exploit two states in an interactive system: 1) users visually attending to the same object in the UI, or 2) users focusing on separate targets. With the increasing availability of eye tracking, interfaces can exploit these states, for example, to dynamically personalise content on the UI for each user, and to provide a merged or compromised view of an object when both users' gaze falls upon it. These concepts are explored with a prototype horizontal interface that tracks the gaze of two users facing each other. We build three applications that illustrate different mappings of gaze to multi-user support: an indoor map with gaze-highlighted information, an interactive tree-of-life visualisation that dynamically expands on users' gaze, and a world map application with gaze-aware fisheye zooming. We conclude with insights from a public deployment of this system, pointing toward the engaging and seamless ways in which eye-based input integrates into collaborative interaction.
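
    As a hedged illustration of the two-state concept above, the sketch below classifies two users' gaze samples as shared attention or separate targets; the names (GazePoint, UIObject, classify_gaze_state) and the rectangular hit test are assumptions for illustration, not the paper's implementation.

        # Illustrative sketch (not the paper's code): classify two users' gaze into
        # "shared attention" vs. "separate targets" so the UI can merge or personalise.
        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class GazePoint:
            x: float
            y: float

        @dataclass
        class UIObject:
            name: str
            x: float   # top-left corner
            y: float
            w: float   # width and height of the object's bounds
            h: float

            def contains(self, g: GazePoint) -> bool:
                return self.x <= g.x <= self.x + self.w and self.y <= g.y <= self.y + self.h

        def hit_test(gaze: GazePoint, objects: List[UIObject]) -> Optional[UIObject]:
            """Return the first UI object the gaze point falls on, if any."""
            for obj in objects:
                if obj.contains(gaze):
                    return obj
            return None

        def classify_gaze_state(gaze_a: GazePoint, gaze_b: GazePoint,
                                objects: List[UIObject]) -> str:
            """'shared'   -> both users attend to the same object (merged view)
               'separate' -> users focus on different targets (personalised content)"""
            target_a = hit_test(gaze_a, objects)
            target_b = hit_test(gaze_b, objects)
            return "shared" if target_a is not None and target_a is target_b else "separate"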

    Gaze-based interaction for effective tutoring with social robots

    Gaze-based Interaction for Virtual Environments

    We present an alternative interface that allows users to perceive new sensations in virtual environments. Gaze-based interaction in virtual environments creates the feeling of controlling objects with the mind, arguably translating into a more intense sensation of immersion. Additionally, it is free of some of the most cumbersome aspects of interacting in virtual worlds. By incorporating a real-time physics engine, the sensation of moving something real is further accentuated. We also describe various simple yet effective techniques that allow eye-tracking devices to enhance the three-dimensional visualization capabilities of current displays. Some of these techniques have the additional advantage of freeing the mouse from most navigation tasks. This work focuses on the study of existing techniques, a detailed description of the implemented interface, and an evaluation (both objective and subjective) of the interface. Given that appropriate filtering of the data from the eye tracker is a key aspect of the correct functioning of the interface, we also discuss that aspect in depth.
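
    Since the abstract singles out filtering of the eye-tracker data as key, the following is a minimal sketch of one common approach, exponential smoothing of 2D gaze samples; the class name and parameter values are assumptions, and the paper's actual filter may well differ.

        # Minimal sketch: exponential smoothing of noisy 2D gaze samples.
        # One common filtering approach; the paper's actual filter may differ.
        class GazeSmoother:
            def __init__(self, alpha: float = 0.3):
                # alpha in (0, 1]: higher is more responsive, lower is smoother.
                self.alpha = alpha
                self._x = None
                self._y = None

            def update(self, x: float, y: float):
                """Feed one raw gaze sample, return the filtered (x, y) estimate."""
                if self._x is None:
                    self._x, self._y = x, y
                else:
                    self._x += self.alpha * (x - self._x)
                    self._y += self.alpha * (y - self._y)
                return self._x, self._y

        # Usage: feed raw samples from the eye tracker once per frame.
        smoother = GazeSmoother(alpha=0.25)
        for raw in [(100.0, 200.0), (104.0, 198.0), (180.0, 205.0)]:
            print(smoother.update(*raw))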

    Interaction par le regard : évaluation du retour d’information progressif

    In monomodal approaches, eye tracking for gaze-based interaction suffers from a tight coupling between perception and action: in the general case, it is nearly impossible to distinguish a user's action from their perception of information. This paper proposes the concept of progressive feedback to loosen this coupling. First experiments confirm that gaze-based interaction can be a credible interaction technique in some contexts of use, and suggest that progressive feedback is potentially valuable.
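
    A minimal sketch of what progressive feedback could look like, assuming a dwell-style activation whose visual feedback grows while gaze stays on a target and decays when it leaves; the thresholds and names are illustrative, not taken from the paper.

        # Illustrative sketch: progressive feedback for gaze dwell on a target.
        # The feedback level grows while gaze stays on the target and decays when
        # it leaves, so a brief glance (perception) does not trigger the action.
        class ProgressiveFeedback:
            def __init__(self, activation_time: float = 1.0, decay_rate: float = 2.0):
                self.activation_time = activation_time  # seconds of sustained gaze to act
                self.decay_rate = decay_rate            # how fast feedback fades off-target
                self.level = 0.0                        # 0.0 = no feedback, 1.0 = activate

            def update(self, gaze_on_target: bool, dt: float) -> bool:
                """Advance by dt seconds; return True when the action should fire."""
                if gaze_on_target:
                    self.level = min(1.0, self.level + dt / self.activation_time)
                else:
                    self.level = max(0.0, self.level - self.decay_rate * dt)
                return self.level >= 1.0

        # self.level can drive the on-screen feedback (e.g. a filling ring), making the
        # system's interpretation of the gaze visible before anything is triggered.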

    An investigation into gaze-based interaction techniques for people with motor impairments

    The use of eye movements to interact with computers offers opportunities for people with impaired motor ability to overcome the difficulties they often face using hand-held input devices. Computer games have become a major form of entertainment and also provide opportunities for social interaction in multi-player environments. Games are also being used increasingly in education to motivate and engage young people. It is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research conducted over a 20-year period starting in the early 1990s that has investigated interaction techniques based on gaze position, intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed. The work divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction. Of these, overcoming the inherent inaccuracy of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and online virtual worlds. Different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase of the work then studied the applicability of the research findings thus far to groups of people with motor impairments and, in particular, the means of adapting the interaction techniques to individual abilities. In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built as an effective means of completing tasks in specific types of games, and how these can be adapted to the differing abilities of individuals with motor impairments.
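
    One widely used way of coping with the pointing inaccuracy mentioned above is to snap the reported gaze point to the nearest selectable target within a tolerance radius; the sketch below illustrates that general idea under assumed names and values, not the specific techniques developed in the thesis.

        # Illustrative sketch: snap an inaccurate gaze point to the nearest on-screen
        # target centre within a tolerance radius.
        import math
        from typing import List, Optional, Tuple

        Target = Tuple[str, float, float]  # (name, centre_x, centre_y) in pixels

        def snap_to_target(gaze_x: float, gaze_y: float,
                           targets: List[Target],
                           max_radius: float = 60.0) -> Optional[str]:
            """Return the name of the closest target within max_radius pixels, if any."""
            best_name, best_dist = None, max_radius
            for name, tx, ty in targets:
                dist = math.hypot(gaze_x - tx, gaze_y - ty)
                if dist <= best_dist:
                    best_name, best_dist = name, dist
            return best_name

        buttons = [("Open", 100.0, 50.0), ("Save", 220.0, 50.0), ("Close", 340.0, 50.0)]
        print(snap_to_target(212.0, 64.0, buttons))  # -> "Save", despite the offset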

    Gaze-based Interaction on Handheld Mobile Devices

    With the advancement of smartphone technology, it is now possible for smartphones to run eye tracking using the front-facing camera, enabling hands-free interaction and empowering mobile users with novel gaze-based input techniques. While several gaze-based interaction techniques have been proposed in the literature, they were deployed in settings different from everyday gaze interaction with mobile devices, which poses several unique challenges: the user's holding posture may hinder the camera's view of their face during the interaction, the front-facing camera may be obstructed by the user's clothing or hands, or tracking may be unstable because the user and the surrounding environment are in motion. This PhD research investigates the usability of state-of-the-art gaze-based input techniques in mobile settings, develops a novel concept of combining multiple gaze-based techniques, and addresses the challenges imposed by the unique aspects of these devices.

    GazeDrone: Mobile Eye-Based Interaction in Public Space Without Augmenting the User

    Gaze interaction holds a lot of promise for seamless human-computer interaction. At the same time, current wearable mobile eye trackers require user augmentation that negatively impacts natural user behavior, while remote trackers require users to position themselves within a confined tracking range. We present GazeDrone, the first system that combines a camera-equipped aerial drone with a computational method to detect sidelong glances for spontaneous (calibration-free) gaze-based interaction with surrounding pervasive systems (e.g., public displays). GazeDrone does not require augmenting each user with on-body sensors and allows interaction from arbitrary positions, even while moving. We demonstrate that drone-supported gaze interaction is feasible and accurate for certain movement types. It is well perceived by users, in particular while interacting from a fixed position as well as while moving orthogonally or diagonally to a display. We present design implications and discuss opportunities and challenges for drone-supported gaze interaction in public.
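
    As a hypothetical illustration of how sidelong glances can be detected without calibration, the sketch below classifies glance direction from the pupil centre's horizontal position between the eye corners; this is an assumed simplification, not the computational method used by GazeDrone.

        # Hypothetical sketch: calibration-free classification of a sidelong glance
        # from the pupil's horizontal position between the two eye corners.
        def classify_glance(pupil_x: float, inner_corner_x: float, outer_corner_x: float,
                            margin: float = 0.2) -> str:
            """Return 'left', 'right' or 'centre' for one eye, given x coordinates in
            image space; margin is the fraction of eye width treated as 'centre'."""
            left_x, right_x = sorted((inner_corner_x, outer_corner_x))
            width = max(right_x - left_x, 1e-6)
            t = (pupil_x - left_x) / width  # 0.0 at the left corner, 1.0 at the right
            if t < 0.5 - margin:
                return "left"
            if t > 0.5 + margin:
                return "right"
            return "centre"

        print(classify_glance(pupil_x=128.0, inner_corner_x=120.0, outer_corner_x=160.0))  # -> "left"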

    VRpursuits: Interaction in Virtual Reality Using Smooth Pursuit Eye Movements

    Gaze-based interaction using smooth pursuit eye movements (Pursuits) is attractive because it is intuitive and overcomes the Midas touch problem. At the same time, eye tracking is becoming increasingly popular for VR applications. While Pursuits was shown to be effective in several interaction contexts, it had not previously been explored in depth for VR. In a user study (N=26), we investigated how parameters specific to VR settings influence the performance of Pursuits. For example, we found that Pursuits is robust against different sizes of virtual 3D targets; however, performance improves when the trajectory size (e.g., radius) is larger, particularly if the user is walking while interacting. While walking, selecting moving targets via Pursuits is generally feasible, albeit less accurate than when stationary. Finally, we discuss the implications of these findings and the potential of smooth pursuits for interaction in VR by demonstrating two sample use cases: 1) gaze-based authentication in VR, and 2) a space meteor shooting game.
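
    Pursuits-style selection is commonly implemented by correlating the gaze trajectory with each moving target's trajectory over a sliding window and selecting the best match above a threshold; the sketch below shows this general mechanism with assumed window and threshold values, not the study's exact parameters.

        # Sketch of the general Pursuits mechanism: correlate gaze with each moving
        # target's trajectory and select the best-matching target above a threshold.
        import numpy as np

        def pursuit_correlation(gaze: np.ndarray, target: np.ndarray) -> float:
            """Mean Pearson correlation of the x and y components.
            gaze, target: arrays of shape (n_samples, 2) from one sliding window."""
            corr_x = np.corrcoef(gaze[:, 0], target[:, 0])[0, 1]
            corr_y = np.corrcoef(gaze[:, 1], target[:, 1])[0, 1]
            return float((corr_x + corr_y) / 2.0)

        def select_target(gaze: np.ndarray, targets: dict, threshold: float = 0.8):
            """Return the name of the best-correlating target, or None."""
            best_name, best_corr = None, threshold
            for name, trajectory in targets.items():
                corr = pursuit_correlation(gaze, trajectory)
                if corr >= best_corr:
                    best_name, best_corr = name, corr
            return best_name

        # Toy example: the gaze follows target "a" (a circular path) with a little noise.
        t = np.linspace(0, 2 * np.pi, 60)
        target_a = np.stack([np.cos(t), np.sin(t)], axis=1)
        target_b = np.stack([np.cos(t + np.pi), np.sin(t + np.pi)], axis=1)
        gaze = target_a + np.random.normal(0, 0.05, target_a.shape)
        print(select_target(gaze, {"a": target_a, "b": target_b}))  # -> "a"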

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze interaction kick-off time to 3.5 seconds -- a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
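
    A minimal control-loop sketch of the underlying idea, keeping a rail-mounted eye tracker aligned with the user's lateral position; the gain, speed limit, and function names are assumptions for illustration, not details of EyeScout's implementation.

        # Illustrative sketch: proportional control keeping a rail-mounted eye tracker
        # aligned with the user's lateral (x) position while they walk along the display.
        def carriage_step(carriage_x: float, user_x: float, dt: float,
                          gain: float = 2.0, max_speed: float = 0.5) -> float:
            """Return the carriage position after one control step.
            Positions in metres, dt in seconds, max_speed in metres per second."""
            error = user_x - carriage_x
            velocity = max(-max_speed, min(max_speed, gain * error))
            return carriage_x + velocity * dt

        # Simulate a user walking from x = 0.0 m to x = 1.5 m while the carriage follows.
        carriage, user = 0.0, 0.0
        for _ in range(100):
            user = min(1.5, user + 0.02)               # ~0.6 m/s at 30 Hz
            carriage = carriage_step(carriage, user, dt=1 / 30)
        print(round(carriage, 2))                      # ends up close to the user's position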