
    Understanding user interactions in stereoscopic head-mounted displays

    2022 Spring. Includes bibliographical references. Interacting in stereoscopic head-mounted displays can be difficult. There are not yet clear standards for how interactions in these environments should be performed. In virtual reality there are a number of well-designed interaction techniques; however, augmented reality interaction techniques still need to be improved before they can be easily used. This dissertation covers work done toward understanding how users navigate and interact with virtual environments displayed in stereoscopic head-mounted displays. With this understanding, existing techniques from virtual reality devices can be transferred to augmented reality where appropriate, and, where that is not the case, new interaction techniques can be developed. This work begins by observing how participants interact with virtual content using gesture alone, speech alone, and the combination of gesture+speech during a basic object-manipulation task in augmented reality. Later, a complex three-dimensional data-exploration environment is developed and refined. That environment can be used in both augmented reality (AR) and virtual reality (VR), either asynchronously or simultaneously. The process of iteratively designing that system and the design choices made during its implementation are documented for future researchers working on complex systems. This dissertation concludes with a comparison of user interactions and navigation in that complex environment when using either an augmented or virtual reality display. That comparison contributes new knowledge on how people perform object manipulations on the two devices. When viewing 3D visualizations, users need to feel able to navigate the environment. Without careful attention to proper interaction-technique design, people may struggle to use the developed system.
These struggles may range from a system that is uncomfortable and unfit for long-term use to new users being unable to interact in these environments at all. Getting the interactions right for AR and VR environments is a step toward facilitating their widespread acceptance. This dissertation provides the groundwork needed to start designing interaction techniques around how people use their personal space, virtual space, body, tools, and feedback systems.

    Real virtuality: emerging technology for virtually recreating reality


    Augmented robotics dialog system for enhancing human-robot interaction

    Augmented reality, augmented television, and second screen are cutting-edge technologies that provide end users with extra, enhanced information related to certain events in real time. This enriched information helps users better understand such events while providing a more satisfying experience. In the present paper, we apply this idea to human-robot interaction (HRI), that is, to how users and robots exchange information. The ultimate goal of this paper is to improve the quality of HRI by developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) grammar-free multimodal text input (spoken and/or written); and (ii) contextualization of the information conveyed in the interaction. This contextualization is achieved by information-enrichment techniques that link the information extracted from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (the terms information enrichment, semantic enhancement, and contextualized information are used interchangeably in the rest of this paper) offers many possibilities for HRI. For instance, it can enhance the robot's proactiveness during a human-robot dialog (the enriched information can be used to propose new topics during the dialog while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications. The authors gratefully acknowledge the funds provided by the Spanish MICINN (Ministry of Science and Innovation) through the project "Aplicaciones de los robots sociales", DPI2011-26980, from the Spanish Ministry of Economy and Competitiveness.
The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by the Structural Funds of the EU.
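    The information-enrichment step the abstract describes can be illustrated with a minimal sketch: entities extracted from an utterance are linked to related facts in a knowledge base, which the dialog manager could then use to propose coherent follow-up topics. The knowledge base, entity extractor, and all names below are invented stand-ins for illustration, not the actual ARDS components (the real system draws on semantic web knowledge bases rather than a hard-coded dictionary).

```python
# Toy knowledge base mapping entities to related facts/topics.
KNOWLEDGE_BASE = {
    "madrid": {"type": "city", "related": ["Spain", "Prado Museum"]},
    "robot": {"type": "machine", "related": ["sensors", "actuators"]},
}

def extract_entities(utterance: str) -> list[str]:
    """Naive entity extraction: keep words that appear in the knowledge base."""
    return [w.strip(".,?!").lower() for w in utterance.split()
            if w.strip(".,?!").lower() in KNOWLEDGE_BASE]

def enrich(utterance: str) -> dict[str, list[str]]:
    """Link each extracted entity to related topics the robot could propose."""
    return {e: KNOWLEDGE_BASE[e]["related"] for e in extract_entities(utterance)}

print(enrich("Tell me about Madrid"))  # {'madrid': ['Spain', 'Prado Museum']}
```

    A production system would replace the dictionary lookup with proper named-entity recognition and queries against an external knowledge base, but the pipeline shape (extract, link, propose) is the same.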

    Ambient Intelligence for Next-Generation AR

    Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social, and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors are frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experience are affected by properties of the real environment, motivates the use of ambient IoT devices, i.e., wireless sensors and actuators placed in the surrounding environment, for the measurement and optimization of environment properties. In this book chapter, we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions that must be addressed to realize the full potential of next-generation AR. Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.
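    One concrete instance of the pattern the abstract describes, sketched here purely for illustration (the sensor, thresholds, and mapping are invented, not taken from the chapter): an ambient IoT light sensor reports the room's illuminance, and the AR client adapts the brightness of virtual content so it remains legible under changing real-world lighting.

```python
def adapt_brightness(lux: float, min_b: float = 0.2, max_b: float = 1.0) -> float:
    """Map measured illuminance (lux) to a rendering brightness in [min_b, max_b].

    Brighter rooms need brighter virtual content to stay visible, so we use a
    simple linear mapping clamped over a typical indoor range (0-500 lux).
    """
    frac = max(0.0, min(lux / 500.0, 1.0))
    return min_b + frac * (max_b - min_b)

# Simulated readings from ambient sensors in two rooms.
readings = [("office", 320.0), ("dim hallway", 40.0)]
for room, lux in readings:
    print(f"{room}: virtual-content brightness {adapt_brightness(lux):.2f}")
```

    The point of the sketch is the division of labor: the environment property is measured by a fixed ambient sensor with an unobstructed, persistent view of the room, rather than estimated from the AR headset's own limited and noisy onboard sensing.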

    AR-based Technoself Enhanced Learning Approach to Improving Student Engagement

    Emerging technologies have expanded a new dimension of self, the 'technoself', driven by socio-technical innovations, and have taken an important step forward in pervasive learning. Technology Enhanced Learning (TEL) research has increasingly focused on emergent technologies such as Augmented Reality (AR) for augmented learning, mobile learning, and game-based learning, in order to improve learners' self-motivation and self-engagement in enriched multimodal learning environments. This research takes advantage of technological innovations in hardware and software across different platforms and devices, including tablets, phablets, and even game consoles, and their increasing popularity for pervasive learning, together with the significant development of personalization processes that place the student at the center of the learning process. In particular, AR research has matured to a level that facilitates augmented learning, defined as an on-demand learning technique in which the learning environment adapts to the needs and inputs of learners. In this paper, we first study the role of the Technology Acceptance Model (TAM), one of the most influential theories applied in TEL, in how learners come to accept and use a new technology. Then we present the design methodology of the technoself approach for pervasive learning and introduce technoself enhanced learning as a novel pedagogical model to improve student engagement by shaping personal learning focus and settings. Furthermore, we describe the design and development of an AR-based interactive digital interpretation system for augmented learning and discuss its key features. By incorporating mobiles, game simulation, voice recognition, and multimodal interaction through AR, the learning content can be geared toward learners' needs, and learners can be stimulated to discover and gain greater understanding.
The system demonstrates that AR can provide a rich contextual learning environment and content tailored to individuals. Augmented learning via AR can bridge the gap between theoretical and practical learning, focusing on how the real and the virtual can be combined to fulfill different learning objectives, requirements, and even environments. Finally, we validate and evaluate the AR-based technoself enhanced learning approach to enhancing student motivation and engagement in the learning process through experimental learning practices. The results show that AR is well aligned with constructive learning strategies, as learners can control their own learning and manipulate objects that are not real in the augmented environment to derive and acquire understanding and knowledge across a broad diversity of learning practices, including constructive and analytical activities.