
    The Effects of Instructor-Avatar Immediacy in Second Life, an Immersive and Interactive 3D Virtual Environment

    Growing interest among educational institutions in desktop 3D graphical virtual environments for hybrid and distance education prompts questions about the efficacy of such tools. Virtual worlds such as Second Life® enable computer-mediated immersion and interaction across multimodal communication channels, including audio, video, and text, enriched by avatar-mediated body language and physical manipulation of the environment. In this para-physical world, instructors and students alike employ avatars to establish their social presence in a wide variety of curricular and extracurricular contexts. As a proxy for the human body in synthetic 3D environments, an avatar represents a 'real' human computer user and incorporates default behavior patterns (e.g., autonomous gestures such as changes in body orientation or movement of the hands) as well as expressive movements directly controlled by the user through keyboard 'shortcuts.' Headset microphones and various stereophonic effects allow users to project their speech from the apparent location of their avatar. In addition, personalized information displays allow users to share graphical information, including text messages and hypertext links. These 'channels' of information constitute an integrated and dynamic framework for projecting avatar 'immediacy' behaviors (including gestures, intonation, and patterns of interaction with students) that may positively or negatively affect the degree to which other observers of the virtual world perceive the user represented by the avatar as 'socially present' in the virtual world. This study contributes to the nascent research on educational implementations of Second Life in higher education.
Although education researchers have investigated the impact of instructor immediacy behaviors on student perception of instructor social presence, student satisfaction, motivation, and learning, few have examined the effects of immediacy behaviors in a 3D virtual environment or the effects of immediacy behaviors manifested by avatars representing instructors. The study employed a two-factor experimental design to investigate the relationship between instructor avatars' immediacy behaviors (high vs. low) and students' perception of instructor immediacy, instructor social presence, student avatars' co-presence, and learning outcomes in Second Life. The study replicates and extends aspects of an earlier study conducted by Maria Schutt, Brock S. Allen, and Mark Laumakis, including components of the experimental treatments that manipulated the frequency of various types of immediacy behaviors identified by other researchers as potentially related to perception of social presence in face-to-face and mediated instruction. Participants were 281 students enrolled in an introductory psychology course at San Diego State University who were randomly assigned to one of four groups. Each group viewed a different version of a 28-minute teaching session in Second Life on current perspectives in psychology. Data were gathered from student survey responses and tests on the lesson content. Analysis of variance revealed significant differences between the treatment groups (F(3, 113) = 6.5, p < .001). Students who viewed the high-immediacy machinimas (Group 1 HiHi and Group 2 HiLo) rated the immediacy behaviors of the instructor-avatar more highly than those who viewed the low-immediacy machinimas (Group 3 LoHi and Group 4 LoLo). Findings also demonstrate a strong correlation between students' perception of instructor-avatar immediacy and instructor social presence (r = .769).
These outcomes in the context of a 3D virtual world are consistent with findings in the instructor immediacy and social presence literature on traditional and online classes. Results relative to learning showed that all groups tested higher after viewing the treatment, with no significant differences between groups. Recommendations for current and future practice of using instructor-avatars include paralanguage behaviors such as voice quality, emotion, and prosodic features, and nonverbal behaviors such as proxemics, gestures, facial expression, lip synchronization, and eye contact.
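The group comparison and correlation reported above can be illustrated with a short sketch. The data below are synthetic placeholder ratings (the study's data are not reproduced here), and the group sizes are made up; the sketch only shows the shape of the analysis, a one-way ANOVA across four treatment groups followed by a Pearson correlation between two rating scales.

```python
# Illustrative sketch with synthetic data, not the study's dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-5 immediacy ratings; high-immediacy groups rate higher.
hi_hi = rng.normal(4.2, 0.6, 30)
hi_lo = rng.normal(4.0, 0.6, 30)
lo_hi = rng.normal(3.4, 0.6, 30)
lo_lo = rng.normal(3.2, 0.6, 27)

# One-way ANOVA across the four groups (cf. the reported F(3, 113) = 6.5).
f_stat, p_value = stats.f_oneway(hi_hi, hi_lo, lo_hi, lo_lo)
print(f"F(3, {30 + 30 + 30 + 27 - 4}) = {f_stat:.2f}, p = {p_value:.4f}")

# Pearson correlation between immediacy and social-presence ratings
# (cf. the reported r = .769); presence is simulated from immediacy plus noise.
immediacy = np.concatenate([hi_hi, hi_lo, lo_hi, lo_lo])
presence = 0.8 * immediacy + rng.normal(0, 0.4, immediacy.size)
r, _ = stats.pearsonr(immediacy, presence)
```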

    Binaural Spatialization for 3D immersive audio communication in a virtual world

    Realistic 3D audio can greatly enhance the sense of presence in a virtual environment. We introduce a framework for capturing, transmitting, and rendering 3D audio in the presence of other bandwidth-intensive streams in a 3D tele-immersion-based virtual environment. This framework presents an efficient implementation of 3D binaural spatialization based on the positions of the current objects in the scene, including animated avatars and on-the-fly reconstructed humans. We present a general overview of the framework, how audio is integrated into the system, and how it can exploit object positions and room geometry to render realistic reverberations using head-related transfer functions. The network streaming modules used to achieve lip synchronization, high-quality audio frame reception, and accurate localization for binaural rendering are also presented. We highlight how large computational and networking challenges can be addressed efficiently. This represents a first step toward adequate networking support for binaural 3D audio, useful for telepresence. The subsystem is successfully integrated with a larger 3D immersive system with state-of-the-art capturing and rendering modules for visual data.
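Position-based binaural spatialization of the kind described above can be sketched with a crude interaural time/level difference model. This is not the paper's implementation: a production system would convolve the signal with measured head-related transfer functions, and the constants below (head radius, level-difference range) are illustrative assumptions.

```python
# Minimal binaural panning sketch (ITD + ILD), assuming a spherical head.
# Real HRTF rendering replaces the gain/delay pair with measured filters.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, nominal average head radius (assumption)
SAMPLE_RATE = 48000      # Hz

def binauralize(mono, azimuth_rad):
    """Return (left, right) channels for a source at the given azimuth
    (0 = straight ahead, +pi/2 = listener's right)."""
    # Interaural time difference, Woodworth's spherical-head approximation.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(azimuth_rad) + np.sin(abs(azimuth_rad)))
    delay = int(round(itd * SAMPLE_RATE))
    # Interaural level difference: attenuate the far ear by up to ~6 dB.
    far_gain = 10 ** (-6.0 * abs(np.sin(azimuth_rad)) / 20)
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: mono.size] * far_gain
    # Source on the right -> right ear is the near ear, and vice versa.
    return (far, near) if azimuth_rad >= 0 else (near, far)

# A 440 Hz tone placed 45 degrees to the listener's right.
tone = np.sin(2 * np.pi * 440 * np.arange(4800) / SAMPLE_RATE)
left, right = binauralize(tone, np.pi / 4)
```

In a scene graph like the one described, `azimuth_rad` would be recomputed each frame from the listener's head pose and the emitting avatar's position.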

    Attention and Social Cognition in Virtual Reality: The effect of engagement mode and character eye-gaze

    Technical developments in virtual humans are manifest in modern character design. Eye-gaze, specifically, is a significant aspect of such design, and there is a need to consider the contribution of participant control of engagement. In the current study, we manipulated participants' engagement with an interactive virtual reality narrative called Coffee without Words. Participants sat over coffee opposite a character in a virtual café, where they waited for their bus to be repaired. We manipulated character eye-contact with the participant. For half the participants in each condition, the character made no eye-contact for the duration of the story. For the other half, the character responded to participant eye-gaze by making and holding eye contact in return. To explore how participant engagement interacted with this manipulation, half the participants in each condition were instructed to appraise their experience as an artefact (i.e., drawing attention to technical features), while the other half were introduced to the fictional character, the narrative, and the setting as though they were real. This study allowed us to explore the contributions of character features (interactivity through eye-gaze) and cognition (attention/engagement) to participants' perception of realism, feelings of presence, perceived time duration, and the extent to which they engaged with the character and represented its mental states (Theory of Mind). Importantly, it does so using a highly controlled yet ecologically valid virtual experience.

    Examining the role of smart TVs and VR HMDs in synchronous at-a-distance media consumption

    This article examines synchronous at-a-distance media consumption from two perspectives: how it can be facilitated using existing consumer displays (TVs combined with smartphones) and imminently available consumer displays (virtual reality (VR) HMDs combined with RGBD sensing). First, we discuss results from an initial evaluation of a synchronous shared at-a-distance smart TV system, CastAway. Through week-long in-home deployments with five couples, we gain formative insights into the adoption and usage of at-a-distance media consumption and how couples communicated during said consumption. We then examine how the imminent availability and potential adoption of consumer VR HMDs could affect preferences toward how synchronous at-a-distance media consumption is conducted, in a laboratory study of 12 pairs, by enhancing media immersion and supporting embodied telepresence for communication. Finally, we discuss the implications these studies have for the near future of consumer synchronous at-a-distance media consumption. Taken together, these studies begin to explore a design space covering the varying ways in which at-a-distance media consumption can be supported and experienced (through music, TV content, augmenting existing TV content for immersion, and immersive VR content), the factors that might influence usage and adoption, and the implications for supporting communication and telepresence during media consumption.

    Multi-party holomeetings: toward a new era of low-cost volumetric holographic meetings in virtual reality

    © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Fueled by advances in multi-party communications, increasingly mature immersive technologies, and the COVID-19 pandemic, a new wave of social virtual reality (VR) platforms has emerged to support socialization, interaction, and collaboration among multiple remote users who are integrated into shared virtual environments. Social VR aims to increase levels of (co-)presence and interaction quality by overcoming the limitations of the 2D windowed representations in traditional multi-party video conferencing tools, although most existing solutions rely on 3D avatars to represent users. This article presents a social VR platform that supports real-time volumetric holographic representations of users based on point clouds captured by off-the-shelf RGB-D sensors, and it analyzes the platform's potential for conducting interactive holomeetings (i.e., holoconferencing scenarios). This work evaluates the platform's performance and readiness for conducting meetings with up to four users, and it provides insights into aspects of the user experience when using single-camera and low-cost capture systems in scenarios with both frontal and side viewpoints.
Overall, the obtained results confirm the platform's maturity and the potential of holographic communications for conducting interactive multi-party meetings, even when using low-cost, single-camera capture systems in scenarios where users are sitting or have limited translational movement along the X, Y, and Z axes within the 3D virtual environment (commonly known as 3 Degrees of Freedom plus, 3DoF+).
    The authors would like to thank the members of the EU H2020 VR-Together consortium for their valuable contributions, especially Marc Martos and Mohamad Hjeij for their support in developing and evaluating tasks. This work has been partially funded by: the EU's Horizon 2020 program, under agreement nº 762111 (VR-Together project); by ACCIÓ (Generalitat de Catalunya), under agreement COMRDI18-1-0008 (ViVIM project); and by Cisco Research and the Silicon Valley Community Foundation, under the grant Extended Reality Multipoint Control Unit (ID: 1779376). The work by Mario Montagud has been additionally funded by Spain's Agencia Estatal de Investigación under grant RYC2020-030679-I (AEI / 10.13039/501100011033) and by Fondo Social Europeo. The work of David Rincón was supported by Spain's Agencia Estatal de Investigación within the Ministerio de Ciencia e Innovación under Project PID2019-108713RB-C51 MCIN/AEI/10.13039/501100011033.
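The volumetric user representations described above start from a basic operation: back-projecting an RGB-D depth frame into a 3D point cloud using pinhole-camera intrinsics. The sketch below illustrates that step only; the intrinsic values are made-up placeholders rather than any specific sensor's calibration, and the paper's pipeline additionally handles color, streaming, and rendering.

```python
# Minimal depth-to-point-cloud sketch with assumed pinhole intrinsics.
import numpy as np

FX, FY = 525.0, 525.0   # focal lengths in pixels (placeholder values)
CX, CY = 319.5, 239.5   # principal point (placeholder values)

def depth_to_point_cloud(depth_m):
    """Convert an HxW depth image (meters) into an Nx3 array of 3D points,
    dropping pixels with no depth reading (zeros)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    # Standard pinhole back-projection: pixel (u, v) at depth z -> (x, y, z).
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# A synthetic frame: an 80x40-pixel flat patch 1.5 m from the camera.
depth = np.zeros((480, 640))
depth[200:280, 300:340] = 1.5
cloud = depth_to_point_cloud(depth)
```

In a capture pipeline like the one evaluated, each such cloud would then be colored from the RGB image, compressed, and streamed to the other meeting participants.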

    Collaborative learning in multi-user virtual environments

    Multi-user virtual environments (MUVEs) have captured the attention and interest of educators as remote collaborative learning environments due to their immersion, interaction, and communication capabilities. However, productive learning interactions cannot be taken as a given: careful design of learning activities and organizational support must be provided to foster collaboration. In this paper, a model to support collaborative learning in MUVEs is presented. This model enables the scaffolding of learning workflows and organizes collaborative learning activities by regulating interactions. A software architecture is developed to support the model and to deploy and enact collaborative learning modules. A user-centered design process was followed to identify successful strategies for modeling collaborative learning activities in a case study. The results show how interactions with elements of 3D virtual worlds can enforce collaboration in MUVEs.

    3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities

    Moving from a set of independent virtual worlds to an integrated network of 3D virtual worlds, or Metaverse, rests on progress in four areas: immersive realism, ubiquity of access and identity, interoperability, and scalability. For each area, the current status and the developments needed to achieve a functional Metaverse are described. Factors that support the formation of a viable Metaverse, such as institutional and popular interest and ongoing improvements in hardware performance, and factors that constrain the achievement of this goal, including limits in computational methods and unrealized collaboration among virtual world stakeholders and developers, are also considered.