    Exploring Gaze for Assisting Freehand Selection-based Text Entry in AR

    With eye-tracking increasingly available in Augmented Reality, we explore how gaze can be used to assist freehand gestural text entry. Here the eyes are often coordinated with manual input across the spatial positions of the keys. Inspired by this, we investigate gaze-assisted selection-based text entry through the concept of spatial alignment of both modalities. Users can enter text by aligning both gaze and manual pointer at each key, as a novel alternative to existing dwell-time or explicit manual triggers. We present a text entry user study comparing two such alignment techniques to a gaze-only and a manual-only baseline. The results show that one alignment technique reduces physical finger movement by more than half compared to standard in-air finger typing, and is faster and exhibits less perceived eye fatigue than an eyes-only dwell-time technique. We discuss trade-offs between unimodal and multimodal text entry techniques, pointing to novel ways to integrate eye movements to facilitate virtual text entry.
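
    As a rough illustration of the alignment-trigger concept described above, a minimal Python sketch follows: a key is committed only when the gaze point and the manual pointer rest on the same key at the same time, in place of a dwell timer or an explicit trigger. The 2D keyboard model and all names are hypothetical, not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Key:
            char: str
            x: float      # key centre, in keyboard-plane coordinates
            y: float
            half: float   # half the key's width/height

            def contains(self, px: float, py: float) -> bool:
                return abs(px - self.x) <= self.half and abs(py - self.y) <= self.half

        def aligned_key(keys: list[Key], gaze: tuple[float, float],
                        finger: tuple[float, float]) -> Key | None:
            """Return the key that both modalities point at, or None."""
            gaze_key = next((k for k in keys if k.contains(*gaze)), None)
            finger_key = next((k for k in keys if k.contains(*finger)), None)
            # Spatial alignment of both inputs on one key is itself the trigger.
            if gaze_key is not None and gaze_key is finger_key:
                return gaze_key
            return None

    Because commitment requires agreement of both modalities, neither a stray glance nor an accidental finger pass alone enters a character, which is what lets the technique dispense with dwell time and explicit manual triggers.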

    Gaze-Hand Alignment: Combining Eye Gaze and Mid-Air Pointing for Interacting with Menus in Augmented Reality

    Gaze and freehand gestures suit Augmented Reality, as users can interact with objects at a distance without need for a separate input device. We propose Gaze-Hand Alignment as a novel multimodal selection principle, defined by the concurrent use of gaze and hand for pointing, with alignment of their input on an object as the selection trigger. Gaze naturally precedes manual action and is leveraged for pre-selection, and manual crossing of a pre-selected target completes the selection. We demonstrate the principle in two novel techniques: Gaze&Finger, for input by direct alignment of hand and finger raised into the line of sight, and Gaze&Hand, for input by indirect alignment of a cursor through relative hand movement. In a menu selection experiment, we evaluate the techniques in comparison with Gaze&Pinch and a hands-only baseline. The study showed the gaze-assisted techniques to outperform hands-only input, and gave insight into trade-offs in combining gaze with direct or indirect, and spatial or semantic, freehand gestures.
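
    The two-stage logic of the principle can be sketched as a small state machine, assuming per-frame gaze updates and hand-crossing events (a hypothetical illustration, not the authors' implementation):

        class GazeHandAlignment:
            """Gaze pre-selects; a hand crossing the same target selects."""

            def __init__(self):
                self.preselected = None  # target currently under gaze

            def on_gaze(self, target):
                # Gaze naturally leads the hand: the gazed target is
                # pre-selected (e.g. highlighted) but not yet triggered.
                self.preselected = target

            def on_hand_cross(self, target):
                # A finger or cursor crossing completes the selection only
                # when it aligns with the gaze pre-selection; crossings of
                # other targets are ignored.
                if target is not None and target is self.preselected:
                    return target
                return None

    Since the alignment itself acts as the trigger, no additional gesture such as a pinch is required to confirm the selection.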

    Partially Blended Realities: Aligning Dissimilar Spaces for Distributed Mixed Reality Meetings

    Mixed Reality allows for distributed meetings where people's local physical spaces are virtually aligned into blended interaction spaces. In many cases, people's physical rooms are dissimilar, making it challenging to design a coherent blended space. We introduce the concept of Partially Blended Realities (PBR): using Mixed Reality to support remote collaborators in partially aligning their physical spaces. As physical surfaces are central in collaborative work, PBR supports users in transitioning between different configurations of tables and whiteboard surfaces. In this paper, we 1) describe the design space of PBR, 2) present RealityBlender to explore interaction techniques for how users may configure and transition between blended spaces, and 3) provide insights from a study on how users experience transitions in a remote collaboration task. With this work, we demonstrate new potential for using partial solutions to tackle the alignment problem of dissimilar spaces in distributed Mixed Reality meetings.

    A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces

    Gaze-Hand Alignment has recently been proposed for multimodal selection in 3D. The technique takes advantage of gaze for target pre-selection, as it naturally precedes manual input. Selection is then completed when manual input aligns with gaze on the target, without need for an additional click method. In this work we evaluate two alignment techniques, Gaze&Finger and Gaze&Handray, combining gaze with image-plane pointing versus raycasting, in comparison with hands-only baselines and Gaze&Pinch as an established multimodal technique. We used a Fitts’ Law study design with targets presented at different depths in the visual scene, to assess the effect of parallax on performance. The alignment techniques outperformed their respective hands-only baselines. Gaze&Finger is efficient when targets are close to the image plane, but less performant with increasing target depth due to parallax.
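
    For reference, such study designs build on the Shannon formulation of Fitts’ Law, which models movement time MT as a linear function of the index of difficulty ID, given target distance D and target width W (a and b are empirically fitted constants); in LaTeX:

        MT = a + b \cdot \mathrm{ID}, \qquad \mathrm{ID} = \log_2\!\left(\frac{D}{W} + 1\right)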

    Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality

    Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet the design rationales of AR systems rarely articulate that such systems promote a certain socio-spatial configuration of their users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors, with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques that expand the notion of collaborative coupling styles, by deliberately designing either to align with physical reality or to go beyond it. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems.

    Mirrorverse: Live Tailoring of Video Conferencing Interfaces

    How can we let users adapt video-based meetings as easily as they rearrange furniture in a physical meeting room? We describe a design space for video conferencing systems that includes a five-step “ladder of tailorability,” from minor adjustments to live reprogramming of the interface. We then present Mirrorverse and show how it applies the principles of computational media to support live tailoring of video conferencing interfaces to accommodate highly diverse meeting situations. We present multiple use scenarios, including a virtual workshop, an online yoga class, and a stand-up team meeting, to evaluate the approach and demonstrate its potential for new, remote meetings with fluid transitions across activities.