4 research outputs found

    Combining Direct and Indirect Touch Input for Interactive Workspaces using Gaze Input

    No full text
    Interactive workspaces combine horizontal and vertical touch surfaces into a single digital workspace. An exploration of these systems showed that direct interaction on the vertical surface is cumbersome and less accurate than on the horizontal one. To overcome these problems, indirect touch systems turn the horizontal touch surface into an input device that allows manipulation of objects on the vertical display. If the horizontal touch surface also acts as a display, however, it becomes necessary to distinguish which screen is currently in use by providing a mode switch. We investigate the use of gaze tracking to perform these mode switches. In three user studies we compare absolute and relative gaze-augmented selection techniques with the traditional direct-touch approach. Our results show that our relative gaze-augmented selection technique outperforms the other techniques for simple tapping tasks alternating between horizontal and vertical surfaces, and for dragging on the vertical surface. However, when tasks involve dragging across surfaces, the findings are more complex. We provide a detailed description of the proposed interaction techniques, a statistical analysis of these techniques, and a discussion of how they can be applied to systems that combine multiple horizontal and vertical touch surfaces.
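
    The abstract describes routing touch input to either the horizontal or the vertical display based on gaze. The sketch below illustrates that idea only; the gaze API, pitch threshold, and coordinate remapping are assumptions for the example, not the authors' implementation.

```python
# Hypothetical sketch: use gaze pitch to decide which display a touch on the
# horizontal surface should manipulate (direct vs. indirect mode).

from dataclasses import dataclass
from enum import Enum


class TargetDisplay(Enum):
    HORIZONTAL = "horizontal"
    VERTICAL = "vertical"


@dataclass
class GazeSample:
    pitch_deg: float  # gaze elevation; negative values look down at the table


def resolve_target(gaze: GazeSample, pitch_threshold_deg: float = -20.0) -> TargetDisplay:
    """Looking down past the threshold routes touches to the horizontal surface;
    otherwise the horizontal surface acts as an indirect input device for the
    vertical display."""
    if gaze.pitch_deg < pitch_threshold_deg:
        return TargetDisplay.HORIZONTAL
    return TargetDisplay.VERTICAL


def route_touch(touch_xy, gaze: GazeSample):
    target = resolve_target(gaze)
    if target == TargetDisplay.VERTICAL:
        # Indirect mode: the horizontal surface behaves like a large touchpad,
        # so the touch point is remapped into the vertical display's coordinates.
        x, y = touch_xy
        return target, (x, 1.0 - y)  # illustrative remapping only
    return target, touch_xy
```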

    GazeForm: Dynamic Gaze-adaptive Touch Surface for Eyes-free Interaction in Airliner Cockpits

    Get PDF
    An increasing number of domains, including aeronautics, are adopting touchscreens. However, several drawbacks limit their operational use; in particular, eyes-free interaction is almost impossible, making it difficult to perform other tasks simultaneously. We introduce GazeForm, an adaptive touch interface with shape-changing capability that offers an interaction modality adapted to gaze direction. When the user's eyes are focused on the interaction, the surface is flat and the system acts as a touchscreen. When the eyes are directed towards another area, physical knobs emerge from the surface. Compared to a touch-only mode, experimental results showed that GazeForm generated a lower subjective mental workload and higher execution efficiency (20% faster). Furthermore, GazeForm required less visual attention, and participants were able to concentrate more on a secondary monitoring task. Complementary interviews with pilots led us to explore timings and levels of control for using gaze to adapt the modality.
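
    The switching behaviour described here (flat touchscreen while gaze is on the surface, knobs raised while gaze is elsewhere) can be summarised as a small controller. The region test, dwell-time hysteresis, and actuator callbacks below are illustrative assumptions, not the GazeForm implementation.

```python
# Illustrative sketch of a gaze-adaptive modality switch for a shape-changing
# touch surface, with a short dwell time to avoid flicker on brief glances.

import time


class GazeFormController:
    def __init__(self, surface_region, raise_knobs, flatten_surface, dwell_s: float = 0.5):
        self.surface_region = surface_region      # (x0, y0, x1, y1) in gaze coordinates
        self.raise_knobs = raise_knobs            # callback driving the shape-changing actuators
        self.flatten_surface = flatten_surface
        self.dwell_s = dwell_s                    # hysteresis before raising the knobs
        self._last_inside = time.monotonic()
        self._knobs_up = False

    def on_gaze(self, x: float, y: float) -> None:
        x0, y0, x1, y1 = self.surface_region
        inside = x0 <= x <= x1 and y0 <= y <= y1
        now = time.monotonic()
        if inside:
            self._last_inside = now
            if self._knobs_up:
                self.flatten_surface()   # gaze is back on the surface: act as a touchscreen
                self._knobs_up = False
        elif not self._knobs_up and now - self._last_inside > self.dwell_s:
            self.raise_knobs()           # gaze elsewhere long enough: provide physical knobs
            self._knobs_up = True
```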

    Expanding the bounds of seated virtual workspaces

    Get PDF
    Mixed Reality (MR), Augmented Reality (AR) and Virtual Reality (VR) headsets can improve upon existing physical multi-display environments by rendering large, ergonomic virtual display spaces whenever and wherever they are needed. However, given the physical and ergonomic limitations of neck movement, users may need assistance to view these display spaces comfortably. Through two studies, we developed new ways of minimising the physical effort and discomfort of viewing such display spaces. We first explored how the mapping between gaze angle and display position could be manipulated, helping users view wider display spaces than currently possible within an acceptable and comfortable range of neck movement. We then compared our implicit control of display position based on head orientation against explicit user control, finding significant benefits in terms of user preference, workload and comfort for implicit control. Our novel techniques create new opportunities for productive work by leveraging MR headsets to create interactive wide virtual workspaces with improved comfort and usability. These workspaces are flexible and can be used on-the-go, e.g., to improve remote working or make better use of commuter journeys.
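
    The core idea of manipulating the mapping between gaze/head angle and display position can be illustrated as an amplified rotation mapping. The gain and range values below are assumptions chosen for the example, not the parameters evaluated in the studies.

```python
# Rough sketch: amplify head yaw so a wide virtual workspace can be scanned
# within a comfortable range of neck movement.

def display_yaw_for_head_yaw(head_yaw_deg: float,
                             gain: float = 2.0,
                             comfortable_range_deg: float = 45.0,
                             workspace_range_deg: float = 90.0) -> float:
    """Map a head yaw within the comfortable range onto the full workspace.

    With gain > 1 the rendered workspace rotates faster than the head, so
    +/-45 degrees of neck movement can reach +/-90 degrees of display space.
    """
    clamped = max(-comfortable_range_deg, min(comfortable_range_deg, head_yaw_deg))
    amplified = clamped * gain
    return max(-workspace_range_deg, min(workspace_range_deg, amplified))


# Example: turning the head 30 degrees brings content at 60 degrees into view.
assert display_yaw_for_head_yaw(30.0) == 60.0
```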