
    Exploring the use of hand-to-face input for interacting with head-worn displays

    We propose Hand-to-Face input, a method for interacting with head-worn displays (HWDs) through contact with the face. We explore Hand-to-Face interaction to find suitable techniques for common mobile tasks, evaluate this form of interaction with document navigation tasks, and examine its social acceptability. In a first study, users identify the cheek and forehead as the predominant areas for interaction and agree on gestures for tasks involving continuous input, such as document navigation. These results guide the design of several Hand-to-Face navigation techniques and reveal that gestures performed on the cheek are more efficient and less tiring than interactions directly on the HWD. Initial results on the social acceptability of Hand-to-Face input allow us to further refine our design choices and reveal unforeseen findings: some gestures are considered culturally inappropriate, and gender plays a role in the selection of specific Hand-to-Face interactions. From our overall results, we provide a set of guidelines for developing effective Hand-to-Face interaction techniques.
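
    As a hedged illustration of the continuous-input finding, the sketch below maps a cheek drag to document scrolling; the FaceTouchEvent type, region labels, and gain value are assumptions for illustration, not the paper's implementation:

```python
# Hypothetical sketch: mapping continuous cheek drags to document scrolling
# on a head-worn display. The event type, region labels, and gain are
# illustrative; the paper reports only that the cheek and forehead were the
# preferred regions for continuous input such as navigation.
from dataclasses import dataclass

@dataclass
class FaceTouchEvent:
    region: str   # e.g. "cheek" or "forehead", from a face-adjacent sensor
    dy: float     # vertical finger displacement since the last sample (mm)

SCROLL_GAIN = 12.0  # document pixels scrolled per millimetre of finger travel

def handle_face_touch(event: FaceTouchEvent, scroll_offset: float) -> float:
    """Advance the scroll offset for cheek drags; ignore other regions."""
    if event.region == "cheek":
        scroll_offset += SCROLL_GAIN * event.dy
    return scroll_offset
```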

    Tangible UI by object and material classification with radar

    Radar signals penetrate, scatter off, are absorbed by, and reflect from proximate objects; ground-penetrating and aerial radar systems are well established. We describe a highly accurate system that combines a monostatic radar (Google Soli) with supervised machine learning to support object- and material-classification-based UIs. Building on RadarCat techniques, we explore the development of tangible user interfaces without modifying the objects or requiring complex infrastructure. This affords new forms of interaction with digital devices, proximate objects, and micro-gestures.
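
    A minimal sketch of the supervised-learning stage, assuming X holds feature vectors already extracted from Soli radar frames and y the object/material labels; the random-forest model and train/test split are illustrative choices, not necessarily RadarCat's exact pipeline:

```python
# Minimal sketch: classify objects/materials from radar feature vectors.
# Feature extraction from raw Soli signals is assumed and not shown here.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_material_classifier(X, y):
    """Fit a classifier on radar feature vectors and report held-out accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
    return clf

# A tangible UI can then dispatch on the prediction, e.g. a "mug" label
# could surface a drink-logging shortcut on a nearby display.
```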

    Desktop-Gluey: Augmenting Desktop Environments with Wearable Devices

    Upcoming consumer-ready head-worn displays (HWDs) can play a central role in unifying the interaction experience in distributed display environments (DDEs). We recently implemented Gluey, a HWD system that 'glues' together the input mechanisms across a display ecosystem to facilitate content migration and seamless interaction across multiple, co-located devices. Gluey can minimize device-switching costs, opening new possibilities and scenarios for multi-device interaction. In this paper, we propose Desktop-Gluey, a system that augments situated desktop environments, allowing users to extend the physical displays in their environment, organize information in spatial layouts, and 'carry' desktop content with them. We extend this metaphor beyond the desktop to provide 'anywhere and anytime' support for mobile and collaborative interactions.
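
    One way to picture the 'glueing' metaphor is the sketch below, in which carried content travels with the HWD wearer across registered displays; all class and method names are invented for illustration and do not reflect Gluey's actual API:

```python
# Illustrative sketch of the 'glueing' metaphor: a registry of co-located
# displays plus a clipboard that travels with the HWD wearer, so content
# picked up on one device can be dropped on whichever display the user is
# currently looking at. All names are hypothetical.
class DisplayEcosystem:
    def __init__(self):
        self.displays = {}     # display id -> device handle with a show() method
        self.clipboard = None  # content currently 'carried' by the HWD wearer

    def register(self, display_id, device):
        self.displays[display_id] = device

    def pick_up(self, content):
        self.clipboard = content          # content now travels with the user

    def drop_on_gazed_display(self, gazed_display_id):
        """Migrate carried content to the display the HWD reports as gazed at."""
        if self.clipboard is not None and gazed_display_id in self.displays:
            self.displays[gazed_display_id].show(self.clipboard)
            self.clipboard = None
```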

    The Effects of Sharing Awareness Cues in Collaborative Mixed Reality

    Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray, against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray performing best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
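
    The head-gaze-ray cue implies a simple geometric test: does a collaborator's gaze ray pass near an object that should then be highlighted? A minimal ray-sphere sketch, assuming positions as 3D NumPy vectors and an illustrative radius threshold:

```python
# Minimal sketch of one awareness cue: test whether a collaborator's gaze
# ray points at an object, so it can be highlighted for the other user.
# Plain vector math; the 0.15 m radius is an illustrative threshold.
import numpy as np

def gaze_hits_object(origin, direction, obj_center, obj_radius=0.15):
    """Ray-sphere test: does the gaze ray pass within obj_radius of the object?"""
    d = direction / np.linalg.norm(direction)
    to_obj = obj_center - origin
    t = np.dot(to_obj, d)            # distance along the ray to the closest point
    if t < 0:
        return False                 # object is behind the gazing user
    closest = origin + t * d
    return np.linalg.norm(obj_center - closest) <= obj_radius
```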

    Multi-scale gestural interaction for augmented reality

    We present a multi-scale gestural interface for augmented reality applications. With virtual objects, gestural interactions such as pointing and grasping can be convenient and intuitive; however, they are imprecise, socially awkward, and susceptible to fatigue. Our prototype application uses multiple sensors to detect gestures from both arm and hand motions (macro-scale) and finger movements (micro-scale). Micro-gestures can provide precise input through a belt-worn sensor configuration, with the hand in a relaxed posture. We present an application that combines direct manipulation with microgestures for precise interaction, beyond the capabilities of direct manipulation alone.
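
    A minimal sketch of how macro- and micro-scale events might be routed to coarse and fine actions respectively; the event fields, gesture names, and target methods are assumptions, not the prototype's actual sensor API:

```python
# Hypothetical dispatch of multi-scale gestures: coarse arm/hand events
# drive direct manipulation, while finger-scale events from a belt-worn
# sensor drive precise adjustment. All names are illustrative.
def dispatch_gesture(event, target):
    if event["scale"] == "macro":            # arm/hand motion: grab and move
        if event["type"] == "grasp":
            target.attach_to_hand()
        elif event["type"] == "release":
            target.detach()
    elif event["scale"] == "micro":          # finger slide: fine-grained nudge
        if event["type"] == "thumb_slide":
            target.nudge(dx=event["delta"] * 0.001)  # millimetre-level precision
```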

    Counterpoint: exploring mixed-scale gesture interaction for AR applications

    This paper presents ongoing work on a design exploration for mixed-scale gestures, which interleave microgestures with larger gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a HoloLens Augmented Reality display, using different combinations of wearable sensors. We discuss future work on expanding the design space and exploration, along with plans for evaluating mixed-scale gesture designs.
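
    One plausible reading of interleaving is a mode switch between gesture scales, sketched below as a two-state controller; the states and trigger events are invented for illustration, not Counterpoint's actual design:

```python
# Hypothetical two-state controller that hands input focus between macro
# gestures (object placement) and microgestures (parameter tweaking).
class MixedScaleController:
    def __init__(self):
        self.mode = "macro"

    def on_event(self, event):
        if self.mode == "macro" and event == "pinch_hold":
            self.mode = "micro"      # hold a pinch to enter fine adjustment
        elif self.mode == "micro" and event == "pinch_release":
            self.mode = "macro"      # release returns to coarse manipulation
        return self.mode
```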

    Historical, cultural, and religious heritage in Tučepi: Fra Nediljko Ơabić, Tučepska sakralna baơtina, Tučepi, 2019, 171 pp.

    A review of the book Tučepska sakralna baơtina by Fra Nediljko Ơabić (Tučepi, 2019), on the historical, cultural, and religious heritage of Tučepi.

    Ex-Cit XR: Expert-elicitation and validation of Extended Reality visualisation and interaction techniques for disengaging and transitioning users from immersive virtual environments

    This research explores visualisation and interaction techniques for disengaging users from immersive virtual environments (IVEs) and transitioning them back to the Augmented Reality mode in the real world. To gain a better understanding and gather novel ideas, we invited eleven Extended Reality (XR) experts to participate in an elicitation study to design such disengagement techniques. From the elicitation study, we elicited a total of 132 techniques for four different IVE scenarios: Narrative-driven, Social-platform, Adventure Sandbox, and Fast-paced Battle experiences. Through extracted keywords and thematic analysis, we classified the elicited techniques into six categories: Activities, Breaks, Cues, Degradations, Notifications, and Virtual Agents. We share our analyses of users' intrinsic motivation to engage in different experiences, subjective ratings of four design attributes for the disengagement techniques, Positive and Negative Affect Schedule scores, and user preferences. In addition, we present the design patterns found and illustrate exemplary use cases of Ex-Cit XR. Finally, we conducted an online survey to preliminarily validate our design recommendations. We propose the SPINED behavioural manipulation spectrum for XR disengagement to guide how systems can strategically escalate their interventions to disengage users from an IVE.
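
    To make the escalation idea concrete, the sketch below steps through progressively stronger interventions drawn from the six elicited categories; the ordering and the user-object interface are assumptions, since the paper's SPINED spectrum defines its own escalation logic:

```python
# Hypothetical escalating disengagement. The ladder reuses the paper's six
# technique categories, but its ordering and the user interface methods
# (apply, has_exited_ive) are invented for illustration.
import time

ESCALATION_LADDER = ["notification", "cue", "activity",
                     "break", "degradation", "virtual_agent"]

def disengage(user, step_seconds=30):
    """Apply progressively stronger interventions until the user exits the IVE."""
    for technique in ESCALATION_LADDER:
        user.apply(technique)          # render this intervention inside the IVE
        time.sleep(step_seconds)       # give the user time to respond
        if user.has_exited_ive():
            return technique           # record which level was sufficient
    return None                        # user stayed engaged; escalate externally
```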

    On-road virtual reality autonomous vehicle (VRAV) simulator: An empirical study on user experience

    Autonomous-vehicle (AV) technologies are rapidly advancing, but a great deal remains to be learned about how AVs are perceived and interacted with on public roads. Research in this area usually relies on naturalistic driving trials, which are expensive and face various legal and ethical obstacles designed to keep the general public safe. The emerging concept of Wizard-of-Oz simulation is a promising solution to this problem, wherein the driver of a standard vehicle is hidden from the passenger by a physical partition, providing the illusion of riding in an AV. Furthermore, head-mounted display (HMD) virtual reality (VR) has been proposed as a means of providing a Wizard-of-Oz protocol for on-road simulations of AVs. Such systems have the potential to support a variety of study conditions at low cost, enabling simulation of a variety of vehicles, driving conditions, and circumstances. However, the feasibility of such systems has yet to be shown. This study uses a within-subjects factorial design to examine and evaluate a virtual reality autonomous vehicle (VRAV) system, with the aim of better understanding the differences between stationary and on-road simulations, both with and without HMD VR. More specifically, the study examines effects on user experience, including presence, arousal, simulator sickness, and task workload. Participants reported a realistic and immersive driving experience in their subjective evaluations of the VRAV system, indicating that the system is a promising tool for human-automation interaction research and future AV technology development.
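
    The within-subjects factorial design implies a 2x2 condition grid (stationary vs. on-road, with vs. without HMD VR) whose presentation order is typically counterbalanced; a minimal sketch using a rotated Latin square, with condition names paraphrased from the abstract:

```python
# Minimal sketch of counterbalancing the implied 2x2 within-subjects design.
# A simple cyclic rotation gives each condition each serial position once
# across every block of four participants (it does not balance carryover).
from itertools import product

CONDITIONS = [f"{m}/{d}" for m, d in product(["stationary", "on-road"],
                                             ["no-HMD", "HMD-VR"])]

def latin_square_order(participant_index, conditions=CONDITIONS):
    """Rotate the condition list so each condition leads equally often."""
    shift = participant_index % len(conditions)
    return conditions[shift:] + conditions[:shift]

for p in range(4):
    print(p, latin_square_order(p))
```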

    Grand Challenges in Immersive Analytics

    Immersive Analytics is a quickly evolving field that unites several areas, such as visualisation, immersive environments, and human-computer interaction, to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.