2,290 research outputs found
Acting rehearsal in collaborative multimodal mixed reality environments
This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.
A comparison of immersive realities and interaction methods: cultural learning in virtual heritage
In recent years, Augmented Reality (AR), Virtual Reality (VR), Augmented Virtuality
(AV), and Mixed Reality (MxR) have become popular immersive reality technologies
for cultural knowledge dissemination in Virtual Heritage (VH). These technologies have
been utilized for enriching museums with a personalized visiting experience and digital
content tailored to the historical and cultural context of the museums and heritage
sites. Various interaction methods, such as sensor-based, device-based, tangible,
collaborative, multimodal, and hybrid interaction methods, have also been employed by
these immersive reality technologies to enable interaction with the virtual environments.
However, the utilization of these technologies and interaction methods is not often
supported by a guideline that can assist Cultural Heritage Professionals (CHPs) in
predetermining their relevance to attaining the intended objectives of the VH applications.
In this regard, our paper attempts to compare the existing immersive reality technologies
and interaction methods against their potential to enhance cultural learning in VH
applications. To objectify the comparison, three factors have been borrowed from
existing scholarly arguments in the Cultural Heritage (CH) domain. These factors are the
technology’s or the interaction method’s potential and/or demonstrated capability to: (1)
establish a contextual relationship between users, virtual content, and cultural context, (2)
allow collaboration between users, and (3) enable engagement with both the cultural context
presented in the virtual environments and the virtual environments themselves. Following the comparison,
we have also proposed a specific integration of collaborative and multimodal interaction
methods into a Mixed Reality (MxR) scenario that can be applied to VH applications that
aim at enhancing cultural learning in situ.
Multimodal teaching, learning and training in virtual reality: a review and case study
It is becoming increasingly prevalent in digital learning research to encompass an array of different meanings, spaces, processes, and teaching strategies for discerning a global perspective on constructing the student learning experience. Multimodality is an emergent phenomenon that may influence how digital learning is designed, especially when employed in highly interactive and immersive learning environments such as Virtual Reality (VR). VR environments may aid students' efforts to be active learners by consciously attending to, and reflecting on, critique, leveraging reflexivity and novel meaning-making that are most likely to lead to conceptual change. This paper employs eleven industrial case studies to highlight the application of multimodal VR-based teaching and training as a pedagogically rich strategy that may be designed, mapped, and visualized through distinct VR-design elements and features. The outcomes of the use cases contribute to discerning in-VR multimodal teaching as an emerging discourse that couples system design-based paradigms with embodied, situated, and reflective praxis in spatial, emotional, and temporal VR learning environments.
Human-centric quality management of immersive multimedia applications
Augmented Reality (AR) and Virtual Reality (VR) multimodal systems are the latest trend within the field of multimedia. As they emulate the senses by means of omni-directional visuals, 360-degree sound, motion tracking, and touch simulation, they are able to create a strong feeling of presence and interaction with the virtual environment. These experiences can be applied for virtual training (Industry 4.0), tele-surgery (healthcare), or remote learning (education). However, given the strong time- and task-sensitivity of these applications, it is of great importance to sustain the end-user quality, i.e. the Quality-of-Experience (QoE), at all times. Lack of synchronization and quality degradation need to be reduced to a minimum to avoid feelings of cybersickness or loss of immersiveness and concentration. This means that there is a need to shift quality management from system-centered performance metrics towards a more human, QoE-centered approach. However, this requires novel techniques in the three areas of the QoE-management loop (monitoring, modelling, and control). This position paper identifies open areas of research to fully enable human-centric management of immersive multimedia. To this extent, four main dimensions are put forward: (1) task- and well-being-driven subjective assessment; (2) real-time QoE modelling; (3) accurate viewport prediction; (4) Machine Learning (ML)-based quality optimization and content recreation. This paper discusses the state of the art and provides possible solutions to tackle the open challenges.
PhysioVR: a novel mobile virtual reality framework for physiological computing
Virtual Reality (VR) is morphing into a ubiquitous
technology by leveraging smartphones and screenless cases in
order to provide highly immersive experiences at a low price
point. The result of this shift in paradigm is now known as mobile
VR (mVR). Although mVR offers numerous advantages over
conventional immersive VR methods, one of the biggest
limitations is related to the interaction pathways available for
the mVR experiences. Using physiological computing principles,
we created the PhysioVR framework, an open-source software
tool developed to facilitate the integration of physiological signals
measured through wearable devices in mVR applications.
PhysioVR includes heart rate (HR) signals from Android
wearables, electroencephalography (EEG) signals from a low-cost brain-computer interface, and electromyography (EMG)
signals from a wireless armband. The physiological sensors are
connected to a smartphone via Bluetooth, and PhysioVR
facilitates streaming of the data over the UDP communication
protocol, thus allowing multicast transmission to third-party
applications such as the Unity3D game engine. Furthermore, the
framework provides bidirectional communication with the VR
content, allowing external event triggering via real-time
control as well as data recording options. We developed a demo
game project called EmoCat Rescue, which encourages players to
modulate HR levels in order to successfully complete the in-game
mission. EmoCat Rescue is included in the PhysioVR project
which can be freely downloaded. This framework simplifies the
acquisition, streaming and recording of multiple physiological
signals and parameters from wearable consumer devices,
providing a single and efficient interface for creating novel
physiologically-responsive mVR applications.
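The streaming pattern the abstract describes (physiological samples relayed from a smartphone to listeners such as a Unity3D game over UDP) can be sketched as below. This is a minimal illustration only: the message format, field names, and port are assumptions for the sketch, not the actual PhysioVR protocol.

```python
import json
import socket

# Assumed port for the sketch; PhysioVR's real port is not stated in the abstract.
PORT = 5005

def send_sample(sock, addr, signal, value):
    """Serialize one physiological sample (e.g. an HR reading) and send it
    as a single UDP datagram. The JSON schema here is hypothetical."""
    payload = json.dumps({"signal": signal, "value": value}).encode("utf-8")
    sock.sendto(payload, addr)

def receive_sample(sock):
    """Block until one datagram arrives, then decode it back into a dict.
    A game-engine client would poll this each frame instead of blocking."""
    data, _ = sock.recvfrom(1024)
    return json.loads(data.decode("utf-8"))

if __name__ == "__main__":
    # Receiver stands in for a third-party application such as Unity3D.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", PORT))

    # Sender stands in for the smartphone relaying a heart-rate sample.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sample(sender, ("127.0.0.1", PORT), "HR", 72)

    print(receive_sample(receiver))  # {'signal': 'HR', 'value': 72}
```

UDP fits this use because occasional lost samples matter less than low latency for physiologically-responsive feedback; a subscriber simply reads the next datagram.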
Collaborative and Multi-Modal Mixed Reality for Enhancing Cultural Learning in Virtual Heritage
No abstract
XR, music and neurodiversity: design and application of new mixed reality technologies that facilitate musical intervention for children with autism spectrum conditions
This thesis, accompanied by the practice outputs, investigates sensory integration, social interaction, and creativity through a newly developed VR-musical interface designed exclusively for children with a high-functioning autism spectrum condition (ASC). The results aim to contribute to the limited body of literature and research surrounding Virtual Reality (VR) musical interventions and Immersive Virtual Environments (IVEs) designed to support individuals with neurodevelopmental conditions.
The author has developed bespoke hardware, software, and a new methodology to conduct field investigations. These outputs include a Virtual Immersive Musical Reality Intervention (ViMRI) protocol, a Supplemental Personalised immersive Musical Experience (SPiME) programme, the 'Assisted Real-time Three-dimensional Immersive Musical Intervention System' (ARTIMIS), and a bespoke (and fully configurable) 'Creative immersive interactive Musical Software' application (CiiMS).
The outputs are each implemented within a series of institutional investigations of 18 autistic child participants. Four groups are evaluated using newly developed virtual assessment and scoring mechanisms devised exclusively from long-established rating scales. Key quantitative indicators from the datasets demonstrate consistent findings and significant improvements for individual preferences (likes), fear reduction efficacy, and social interaction.
Six individual case studies present positive qualitative results demonstrating improved decision-making and sensorimotor processing. The preliminary research trials further indicate that using this virtual-reality music technology system and newly developed protocols produces notable improvements for participants with an ASC. More significantly, there is evidence that the supplemental technology facilitates a reduction in psychological anxiety and improvements in dexterity. The virtual music composition and improvisation system presented here requires further extensive testing in different spheres for proof of concept.