    Propping Up Virtual Reality With Haptic Proxies

    Three-point interaction: combining bi-manual direct touch with gaze

    The benefits of two-point interaction for tasks that require users to simultaneously manipulate multiple entities or dimensions are widely known. Two-point interaction has become common, e.g., when zooming or pinching with two fingers on a smartphone. We propose a novel interaction technique that implements three-point interaction by augmenting two-finger direct touch with gaze as a third input channel. We evaluate two key characteristics of our technique in two multi-participant user studies. In the first, we use the technique for object selection. In the second, we evaluate it in a 3D matching task that requires simultaneous continuous input from the fingers and the eyes. Our results show that in both cases participants learned to interact with three input channels without cognitive or mental overload. Participants' performance tended towards fast selection times in the first study and exhibited parallel interaction in the second. These results are promising and show that there is scope for additional input channels beyond two-point interaction.
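
    To make the idea concrete, a minimal Python sketch of three-point input fusion is shown below. It assumes a hypothetical event model in which two touch points and one gaze point arrive together; the mapping of the fingers to scale and rotation and of gaze to the target position is an illustrative choice, not the implementation evaluated in the paper.

    import math
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

    def fuse_three_points(touch_a: Point, touch_b: Point, gaze: Point):
        """Derive three simultaneous control channels from two fingers plus gaze."""
        # The two fingers control scale (pinch distance) and rotation (inter-finger angle).
        dx, dy = touch_b.x - touch_a.x, touch_b.y - touch_a.y
        scale = math.hypot(dx, dy)
        rotation = math.atan2(dy, dx)
        # Gaze supplies the third, independent channel: where the manipulation applies.
        return scale, rotation, (gaze.x, gaze.y)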

    Feet movement in desktop 3D interaction

    In this paper we present exploratory work on the use of foot movements to support fundamental 3D interaction tasks. Depth cameras such as the Microsoft Kinect can now track users' motion unobtrusively, making it possible to draw on the spatial context of gestures and movements to control 3D UIs. Whereas multitouch and mid-air hand gestures have been explored extensively for this purpose, little work has looked at how the same can be accomplished with the feet. We describe the interaction space of foot movements in a seated position and propose applications for such techniques in three-dimensional navigation, selection, manipulation and system control tasks in a 3D modelling context. We explore these applications in a user study and discuss the advantages and disadvantages of this modality for 3D UIs.
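
    As an illustration, the following Python sketch maps seated foot offsets (as a depth camera such as the Kinect could report them) to navigation rates for a 3D viewport. The dead-zone threshold and the assignment of axes to travel and turning are assumptions made for this sketch, not the paper's design.

    from dataclasses import dataclass

    @dataclass
    class FootOffset:
        x: float  # lateral offset from the rest position, in metres
        z: float  # forward/backward offset from the rest position, in metres

    DEAD_ZONE = 0.05  # metres; ignore small involuntary shifts

    def feet_to_navigation(left: FootOffset, right: FootOffset):
        """Map seated foot offsets to (forward, turn) rates for a 3D viewport."""
        forward = right.z if abs(right.z) > DEAD_ZONE else 0.0  # one foot drives travel
        turn = left.x if abs(left.x) > DEAD_ZONE else 0.0       # the other foot steers
        return forward, turn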

    Assessing Social Text Placement in Mixed Reality TV

    TV experiences are often social, be it at-a-distance (through text) or in-person (through speech). Mixed Reality (MR) headsets offer new opportunities to enhance social communication during TV viewing by placing social artifacts (e.g., text) anywhere the viewer wishes, rather than being constrained to a smartphone or TV display. In this paper, we use VR as a test-bed to evaluate different text locations for MR TV specifically. We introduce the concepts of wall messages, below-screen messages, and egocentric messages, in addition to state-of-the-art on-screen messages (i.e., subtitles) and controller messages (i.e., text messages read on the mobile device), to convey messages to users during TV viewing experiences. Our results suggest that a) future MR systems that aim to improve viewers' experience need to consider integrating a communication channel that does not interfere with viewers' primary task, that is, watching TV, and b) independent of the location of text messages, users prefer to be in full control of them, especially when reading and responding to them. Our findings pave the way for further investigations into social at-a-distance communication in Mixed Reality.
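
    The placement concepts compare naturally as a small configuration space. The Python sketch below models them as an enumeration with one derived property; the grouping into world-anchored and viewer-anchored placements is our illustrative reading, not a taxonomy from the study.

    from enum import Enum, auto

    class MessagePlacement(Enum):
        ON_SCREEN = auto()      # subtitle-style, overlaid on the TV content
        BELOW_SCREEN = auto()   # anchored just under the virtual TV frame
        WALL = auto()           # world-fixed, on the wall beside the TV
        EGOCENTRIC = auto()     # head-fixed, follows the viewer
        CONTROLLER = auto()     # on a handheld device, read on demand

    def is_world_anchored(placement: MessagePlacement) -> bool:
        """World-anchored placements stay put when the viewer looks away."""
        return placement in (MessagePlacement.ON_SCREEN,
                             MessagePlacement.BELOW_SCREEN,
                             MessagePlacement.WALL)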

    HCI and the educational technology revolution #HCIEd2020: a workshop on video-making for teaching and learning human-computer interaction

    Universities worldwide have recently increased their adoption of video and educational technologies to continue their provision to students amidst the COVID-19 pandemic. However, interest in research on facilitating the adoption of these technologies long predates this challenge. Particularly in the context of teaching Human-Computer Interaction, the well-established HCI Educators series has studied the challenges in the teaching and learning of our discipline, and the workshop has become a safe haven for practitioners to discuss these challenges and opportunities, as its history attests. In this edition of the workshop we maintain our attention on the use of video and other emergent technologies being incorporated into HCI education, whilst discussing the pressing needs of its community.

    Select & Apply: understanding how users act upon objects across devices

    As our interactions increasingly cut across diverse devices, we often encounter situations where we find information on one device but wish to use it on another: for instance, a phone number spotted on a public display but wanted on a mobile phone. We conceptualise this problem as Select & Apply and contribute two user studies in which we presented participants with eight different scenarios involving different device combinations, applications and data types. In the first, we used a think-aloud methodology to gain insights into how users currently accomplish such tasks and how they would ideally like to accomplish them. In the second, we conducted a focus group study to investigate which factors influence their actions. Results indicate shortcomings in present support for Select & Apply and contribute a better understanding of which factors affect cross-device interaction.

    A Cross-Device Drag-and-Drop Technique

    Many interactions naturally extend across smartphones and devices with larger screens. Indeed, data might be received on the mobile but more conveniently processed with an application on a larger device, or vice versa. Such interactions require spontaneous data transfer from a source location on one screen to a target location on the other device. We introduce a cross-device Drag-and-Drop technique to facilitate these interactions involving multiple touchscreen devices, with minimal effort for the user. The technique is a two-handed gesture, where one hand is used to suitably align the mobile phone with the larger screen, while the other is used to select and drag an object between devices and choose which application should receive the data.
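
    A minimal Python sketch of the hand-off step is given below. It assumes the two devices are already paired and that crossing the aligned screen edge triggers the transfer; the socket transport, message format and application name are illustrative assumptions rather than the technique's actual implementation.

    import json
    import socket

    def send_dropped_object(payload: dict, target_app: str, host: str, port: int = 9000):
        """Send a dragged object to the paired device, addressed to a chosen application."""
        message = json.dumps({"app": target_app, "data": payload}).encode("utf-8")
        with socket.create_connection((host, port)) as conn:
            conn.sendall(message)

    # Hypothetical callback: fired when the drag crosses the phone edge that the
    # other hand has aligned with the larger screen.
    def on_drag_crossed_aligned_edge(obj: dict, paired_host: str) -> None:
        send_dropped_object(obj, target_app="image-viewer", host=paired_host)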