
    Multimodal fusion : gesture and speech input in augmented reality environment

    Augmented Reality (AR) combines the real world with the virtual world seamlessly, so it can support interaction with virtual and physical objects simultaneously. However, most AR interfaces apply conventional Virtual Reality (VR) interaction techniques without modification. In this paper we explore multimodal fusion for AR with speech and hand gesture input. Multimodal fusion enables users to interact with computers through several input modalities, such as speech, gesture, and eye gaze. As a first step towards proposing a multimodal interaction, the input modalities must be selected before they are integrated into an interface. The paper reviews related work to trace how multimodal approaches have recently become one of the research trends in AR, surveying existing multimodal work in both VR and AR. In AR, multimodal input is considered a solution for improving interaction between virtual and physical entities, and it is an ideal interaction technique for AR applications since AR supports real-time interaction in both the real and virtual worlds. The paper describes recent AR studies that employ gesture and speech inputs, examines multimodal fusion and its developments, and closes with a conclusion. It aims to give a guideline on multimodal fusion for integrating gesture and speech inputs in an AR environment.
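    To make the fusion step concrete, the following minimal Python sketch pairs timestamped speech and gesture events that fall within a short time window. The event format, the 1.5-second window, and the recognizers it presumes are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str     # "speech" or "gesture" (assumed event format)
    payload: str      # e.g. "move" (speech) or "point:cube" (gesture)
    timestamp: float  # seconds since session start

FUSION_WINDOW = 1.5   # assumed maximum gap between complementary inputs

def fuse(events):
    """Pair each speech command with the nearest gesture inside the window."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    commands = []
    for s in speech:
        near = [g for g in gestures
                if abs(g.timestamp - s.timestamp) <= FUSION_WINDOW]
        if near:
            g = min(near, key=lambda g: abs(g.timestamp - s.timestamp))
            commands.append((s.payload, g.payload))
    return commands

# e.g. fuse([InputEvent("speech", "move", 2.0),
#            InputEvent("gesture", "point:cube", 2.4)])
# -> [("move", "point:cube")]
```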

    An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

    This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and our multimodal fusion strategies, which are based on a combination of time-based and domain semantics. We then present the results of a user study comparing multimodal input with gesture input alone. The results show that combining speech and paddle gestures improves the efficiency of user interaction. Finally, we describe some design recommendations for developing other multimodal AR interfaces.
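    The combination of time-based and domain semantics mentioned above can be illustrated with a hedged sketch: after temporal pairing, a semantic check keeps only pairs in which the gesture supplies the kind of argument the spoken command expects. The command vocabulary and slot types below are assumptions for illustration, not the paper's grammar.

```python
# Assumed vocabulary: each spoken command expects one type of referent.
EXPECTED_SLOT = {
    "move": "object",
    "rotate": "object",
    "put": "location",
}

def semantically_valid(speech_cmd, gesture_referent_type):
    """Accept a temporally paired (speech, gesture) only if the gesture
    supplies the kind of argument the spoken command expects."""
    return EXPECTED_SLOT.get(speech_cmd) == gesture_referent_type

# A paddle held over an object satisfies "move"; empty table space does not.
assert semantically_valid("move", "object")
assert not semantically_valid("move", "location")
```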

    Cross-Dimensional Gestural Interaction Techniques for Hybrid Immersive Environments

    We present a set of interaction techniques for a hybrid user interface that integrates existing 2D and 3D visualization and interaction devices. Our approach is built around one- and two-handed gestures that support the seamless transition of data between co-located 2D and 3D contexts. Our testbed environment combines a 2D multi-user, multi-touch projection surface with 3D head-tracked, see-through, head-worn displays and 3D tracked gloves to form a multi-display augmented reality. We also address some of the ways in which we can interact with private data in a collaborative, heterogeneous workspace.
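    As a rough illustration of such cross-dimensional transitions, the sketch below models the workspace as two containers and moves items between them on "pull" and "push" gestures. The gesture recognition, tracking, and rendering the actual system relies on are omitted, and the item names are hypothetical.

```python
class HybridWorkspace:
    """Tracks which context, 2D surface or 3D space, each item lives in."""

    def __init__(self):
        self.surface_2d = {"map", "chart"}  # items on the multi-touch table
        self.space_3d = set()               # items in the head-worn 3D view

    def pull_to_3d(self, item):
        """A two-handed 'pull' gesture lifts an item off the 2D surface."""
        if item in self.surface_2d:
            self.surface_2d.discard(item)
            self.space_3d.add(item)

    def push_to_2d(self, item):
        """A 'push' gesture drops a 3D item back onto the shared surface."""
        if item in self.space_3d:
            self.space_3d.discard(item)
            self.surface_2d.add(item)
```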

    Typing the Future: Designing Multimodal AR Keyboards

    Recent demonstrations of AR showcase engaging spatial features while avoiding text input. However, this is not because text input is losing relevance, but because no satisfactory solution for text input in a comprehensive AR system is available yet. Any novel technological device requires rethinking the way we interact with it, including text input. With their variety of sensors, AR devices offer numerous possibilities for uni- and multimodal interaction. However, it is essential to evaluate the actual problem space before suggesting solutions. In our design science research project, we aim to create design knowledge about the learnability and performance of AR keyboards. Based on transfer-of-learning theory and HCI literature on virtual keyboards, we propose meta-requirements and initial design principles that serve as the basis for developing a multimodal AR keyboard prototype.

    Augmented reality environmental monitoring using wireless sensor networks

    Environmental monitoring brings many challenges to wireless sensor networks, including the need to collect and process large volumes of data before presenting the information to the user in an easy-to-understand format. This paper presents SensAR, a prototype augmented reality interface designed specifically for monitoring environmental information. Our prototype takes as input sound and temperature data from sensors located inside a networked environment. Participants can visualise 3D as well as textual representations of environmental information in real time using a lightweight handheld computer.
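    The pipeline the abstract implies, sensor readings rendered as textual AR overlays, might look like the following sketch; the reading fields and label format are assumptions for illustration rather than SensAR's actual data model.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    node_id: str          # networked sensor node (hypothetical naming)
    temperature_c: float
    sound_db: float

def to_ar_label(reading):
    """Render one node's data as the text overlaid at its AR anchor."""
    return (f"{reading.node_id}: "
            f"{reading.temperature_c:.1f} °C, {reading.sound_db:.0f} dB")

print(to_ar_label(SensorReading("node-3", 22.4, 41.0)))
# node-3: 22.4 °C, 41 dB
```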

    Integrating virtual reality and augmented reality in a collaborative user interface

    Applications that adopt a collaborative system allow multiple users to interact with one another in the same virtual space, whether in Virtual Reality (VR) or Augmented Reality (AR). This paper aims to integrate the VR and AR spaces in a collaborative user interface that enables users to cooperate with one another across different types of interfaces in a single shared space. Gesture interaction is proposed as the interaction technique in both virtual spaces, as it provides a more natural way of interacting with virtual objects. The integration of the VR and AR spaces provides cross-discipline shared data interchange through a network protocol based on a client-server architecture.
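    As a hedged illustration of that client-server interchange, the sketch below serializes a shared-object update that a server could broadcast to both VR and AR clients; the JSON schema and field names are assumptions, not the paper's actual protocol.

```python
import json

def make_update(object_id, position, source):
    """Serialize a state change so VR and AR clients stay in sync."""
    return json.dumps({
        "type": "object_update",
        "object_id": object_id,
        "position": list(position),  # shared world coordinates
        "source": source,            # e.g. "vr-client" or "ar-client"
    })

# The server would broadcast this message to every connected client,
# each of which applies it to its local copy of the shared scene.
msg = make_update("cube-1", (0.3, 1.2, -0.5), "ar-client")
```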