
    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation in cognitive neuroscience and psychology, which is concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical activity recordings.
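    The abstract above describes wiring real-time motion capture and physiological recording into a single VR loop. As a minimal sketch of that idea (not code from the paper; names such as read_mocap_frame and read_gsr_sample are hypothetical stand-ins), a per-frame loop in Python might retarget tracked joints onto the virtual body and timestamp a physiological sample alongside the pose so the two streams can be aligned later:

        # Illustrative sketch only: per-frame retargeting plus physiological logging.
        import time
        from dataclasses import dataclass

        @dataclass
        class TrackedJoint:
            name: str
            position: tuple  # (x, y, z) in metres, tracker coordinate frame

        def read_mocap_frame():
            """Stand-in for a motion-capture SDK call; returns a list of joints."""
            return [TrackedJoint("head", (0.0, 1.7, 0.0)),
                    TrackedJoint("right_hand", (0.3, 1.2, 0.2))]

        def read_gsr_sample():
            """Stand-in for a physiological amplifier read (dummy value)."""
            return 4.2  # e.g. skin conductance in microsiemens

        def retarget(joints, avatar_pose):
            """Copy tracked joint positions onto the avatar's matching bones."""
            for joint in joints:
                avatar_pose[joint.name] = joint.position
            return avatar_pose

        avatar_pose, log = {}, []
        for _ in range(3):  # three frames of a render loop
            t = time.time()
            avatar_pose = retarget(read_mocap_frame(), avatar_pose)
            log.append((t, dict(avatar_pose), read_gsr_sample()))
        print(log[-1])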

    Multimodal teaching, learning and training in virtual reality: a review and case study

    It is becoming increasingly prevalent in digital learning research to encompass an array of different meanings, spaces, processes, and teaching strategies for discerning a global perspective on constructing the student learning experience. Multimodality is an emergent phenomenon that may influence how digital learning is designed, especially when employed in highly interactive and immersive learning environments such as Virtual Reality (VR). VR environments may aid students' efforts to be active learners by consciously attending to, and reflecting on, critique, leveraging reflexivity and the novel meaning-making most likely to lead to conceptual change. This paper employs eleven industrial case studies to highlight the application of multimodal VR-based teaching and training as a pedagogically rich strategy that may be designed, mapped and visualized through distinct VR-design elements and features. The outcomes of the use cases contribute to discerning in-VR multimodal teaching as an emerging discourse that couples system design-based paradigms with embodied, situated and reflective praxis in spatial, emotional and temporal VR learning environments.

    A multilevel model for movement rehabilitation in Traumatic Brain Injury (TBI) using virtual environments

    This paper presents a conceptual model for movement rehabilitation of traumatic brain injury (TBI) using virtual environments. This hybrid model integrates principles from ecological systems theory with recent advances in cognitive neuroscience, and supports a multilevel approach to both assessment and treatment. Performance outcomes at any stage of recovery are determined by the interplay of task, individual, and environmental/contextual factors. We argue that any system of rehabilitation should provide enough flexibility for task and context factors to be varied systematically, based on the current neuromotor and biomechanical capabilities of the performer or patient. Thus, in order to understand how treatment modalities are to be designed and implemented, there is a need to understand the function of brain systems that support learning at a given stage of recovery, and the inherent plasticity of the system. We know that virtual reality (VR) systems allow training environments to be presented in a highly automated, reliable, and scalable way. Presentation of these virtual environments (VEs) should permit movement analysis at three fundamental levels of behaviour: (i) neurocognitive bases of performance (we focus in particular on the development and use of internal models for action which support adaptive, on-line control); (ii) movement forms and patterns that describe the patients' movement signature at a given stage of recovery (i.e., kinetic and kinematic markers of movement proficiency); and (iii) functional outcomes of the movement. Each level of analysis can also map quite seamlessly to different modes of treatment. At the neurocognitive level, for example, semi-immersive VEs can help retrain internal modeling processes by reinforcing the patients' sense of multimodal space (via augmented feedback), their position within it, and the ability to predict and control actions flexibly (via movement simulation and imagery training). More specifically, we derive four key therapeutic environment concepts (or Elements) presented using VR technologies: Embodiment (simulation and imagery), Spatial Sense (augmenting position sense), Procedural (automaticity and dual-task control), and Participatory (self-initiated action). The use of tangible media/objects, force transduction, and vision-based tracking systems for the augmentation of gestures and physical presence will be discussed in this context.
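    As an illustration of level (ii), the kinetic and kinematic markers mentioned above, one commonly used marker of movement proficiency is smoothness estimated from jerk. The following Python sketch (our illustration, not the paper's method, using dummy trajectory data) computes mean squared jerk from a sampled hand path:

        # Illustrative sketch: smoothness of a reach as mean squared jerk.
        import numpy as np

        def mean_squared_jerk(positions, dt):
            """positions: (N, 3) array of hand positions sampled every dt seconds."""
            velocity = np.diff(positions, axis=0) / dt
            acceleration = np.diff(velocity, axis=0) / dt
            jerk = np.diff(acceleration, axis=0) / dt
            return float(np.mean(np.sum(jerk ** 2, axis=1)))

        t = np.linspace(0.0, 1.0, 100)
        reach = np.stack([t, np.sin(np.pi * t), np.zeros_like(t)], axis=1)  # smooth reach
        print(mean_squared_jerk(reach, dt=t[1] - t[0]))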

    Virtual Community Heritage: An Immersive Approach to Community Heritage

    Our relationship with cultural heritage has been transformed by digital technologies. Opportunities have emerged to preserve and access cultural heritage material while engaging audiences at both regional and global levels. The accessibility of technology has enabled audiences to participate in the digital heritage curation process. Participatory practices and co-production methodologies have created new relationships between museums and communities, as communities become active participants in the co-design and co-creation of heritage material. Audiences nowadays are more interested in experiences than in services, and museums and heritage organisations have the potential to entertain while providing engaging experiences beyond their physical walls. Mixed reality is an emerging method of engagement that has allowed enhanced interaction beyond traditional 3D visualisation models into fully immersive worlds. There is potential to transport audiences to past worlds that enhance their experience and understanding of cultural heritage.

    A comparison of immersive realities and interaction methods: cultural learning in virtual heritage

    In recent years, Augmented Reality (AR), Virtual Reality (VR), Augmented Virtuality (AV), and Mixed Reality (MxR) have become popular immersive reality technologies for cultural knowledge dissemination in Virtual Heritage (VH). These technologies have been utilized to enrich museums with a personalized visiting experience and digital content tailored to the historical and cultural context of the museums and heritage sites. Various interaction methods, such as sensor-based, device-based, tangible, collaborative, multimodal, and hybrid interaction methods, have also been employed by these immersive reality technologies to enable interaction with the virtual environments. However, the utilization of these technologies and interaction methods is not often supported by a guideline that can assist Cultural Heritage Professionals (CHPs) in predetermining their relevance for attaining the intended objectives of VH applications. In this regard, our paper attempts to compare the existing immersive reality technologies and interaction methods against their potential to enhance cultural learning in VH applications. To objectify the comparison, three factors have been borrowed from existing scholarly arguments in the Cultural Heritage (CH) domain. These factors are the technology's or the interaction method's potential and/or demonstrated capability to: (1) establish a contextual relationship between users, virtual content, and cultural context, (2) allow collaboration between users, and (3) enable engagement with the cultural context in the virtual environments and with the virtual environment itself. Following the comparison, we have also proposed a specific integration of collaborative and multimodal interaction methods into a Mixed Reality (MxR) scenario that can be applied to VH applications that aim at enhancing cultural learning in situ.
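    One way to make the three comparison factors operational is to record a judgement per technology or interaction method in a small table. The Python sketch below is illustrative only: the factor names follow the abstract, while the table values are deliberately left unfilled rather than guessed from the study.

        # Illustrative structure for the three-factor comparison; values not the paper's results.
        FACTORS = (
            "contextual_relationship",  # (1) users, virtual content, cultural context
            "collaboration",            # (2) collaboration between users
            "engagement",               # (3) engagement with cultural context and environment
        )

        comparison = {
            tech: {factor: None for factor in FACTORS}  # None = to be filled from the study
            for tech in ("AR", "VR", "AV", "MxR")
        }

        comparison["MxR"]["collaboration"] = True  # example of recording one judgement
        print(comparison["MxR"])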

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent being able to present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet its upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing mixed reality experiences. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. By amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while being immersed in the virtual world. Our prototype tracked the user's hands and keyboard to enable generic text input. Our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing mixed reality experiences. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcased its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.
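    The taxonomy is described as being based on the human sensory system and capabilities of articulation; a minimal sketch of how an experience could be encoded along those two axes follows (the concrete labels are placeholders for illustration, not the thesis's actual categories):

        # Illustrative encoding of experiences by addressed senses and articulation channels.
        from dataclasses import dataclass, field

        @dataclass
        class MixedRealityExperience:
            name: str
            senses: set = field(default_factory=set)        # e.g. {"vision", "touch"}
            articulation: set = field(default_factory=set)  # e.g. {"hands", "speech"}

        probes = [
            MixedRealityExperience("levitating projector guidance", {"vision"}, {"locomotion"}),
            MixedRealityExperience("quadcopter haptics", {"vision", "touch"}, {"hands"}),
        ]
        for probe in probes:
            print(probe.name, sorted(probe.senses), sorted(probe.articulation))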

    The virtual playground: an educational virtual reality environment for evaluating interactivity and conceptual learning

    The research presented in this paper aims at investigating user interaction in immersive virtual learning environments (VLEs), focusing on the role and the effect of interactivity on conceptual learning. The goal has been to examine whether the learning of young users improves through interacting in (i.e. exploring, reacting to, and acting upon) an immersive virtual environment (VE) compared to non-interactive or non-immersive environments. Empirical work was carried out with more than 55 primary school students between the ages of 8 and 12, in different between-group experiments: an exploratory study, a pilot study, and a large-scale experiment. The latter was conducted in a virtual environment designed to simulate a playground. In this 'Virtual Playground', each participant was asked to complete a set of tasks designed to address arithmetical 'fractions' problems. Three different conditions, two experimental virtual reality (VR) conditions and a non-VR condition, that varied the levels of activity and interactivity, were designed to evaluate how children accomplish the various tasks. Pre-tests, post-tests, interviews, video, audio, and log files were collected for each participant, and analyzed both quantitatively and qualitatively. This paper presents a selection of case studies extracted from the qualitative analysis, which illustrate the variety of approaches taken by children in the VEs in response to visual cues and system feedback. Results suggest that the fully interactive VE aided children in problem solving but did not provide as strong evidence of conceptual change as expected; rather, it was the passive VR environment, where activity was guided by a virtual robot, that seemed to support student reflection and recall, leading to indications of conceptual change.
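    The study compares learning across three conditions using pre-tests and post-tests; a simple way to summarise such data is the mean pre-to-post gain per condition, sketched below in Python (the scores are placeholders, not data from the experiment):

        # Illustrative sketch: mean test-score gain per condition, with dummy scores.
        def mean_gain(pre_scores, post_scores):
            gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
            return sum(gains) / len(gains)

        conditions = {
            "interactive VR": ([3, 4, 2], [5, 6, 4]),  # placeholder (pre, post) scores
            "passive VR":     ([3, 3, 4], [6, 5, 6]),
            "non-VR":         ([4, 3, 3], [5, 4, 4]),
        }
        for name, (pre, post) in conditions.items():
            print(f"{name}: mean gain {mean_gain(pre, post):.2f}")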

    Interactions in Virtual Worlds: Proceedings Twente Workshop on Language Technology 15

