912 research outputs found

    Collaboration in 3D virtual worlds: designing a protocol for case study research

    Three-dimensional virtual worlds (3DVW) have been growing fast in number of users and are used for the most diverse purposes. In collaboration, 3DVW yield good results thanks to features such as immersion, interaction capabilities, avatar embodiment, and physical space. Avatar embodiment and physical space in particular support nonverbal communication, but their impact on collaboration is not well understood. In this work we present the initial steps in creating a protocol for case study research, intended as a tool for collecting data on how nonverbal communication influences collaboration in 3DVW. We define the propositions and units of analysis, and a pilot case to validate them.

    Social Interactions in Immersive Virtual Environments: People, Agents, and Avatars

    Immersive virtual environments (IVEs) have grown in popularity, with applications in many fields. IVEs aim to approximate real environments and to make users react similarly to how they would in everyday life. An important use case is interaction between users and virtual characters (VCs). We interact with other people every day, so we expect others to act and behave appropriately, verbally and non-verbally (e.g., pitch, proximity, gaze, turn-taking). These expectations also apply to interactions with VCs in IVEs, and this thesis tackles some of these aspects. We present three projects that inform the area of social interactions with VCs in IVEs, focusing on non-verbal behaviours. In our first study, on interactions between people, we collaborated with the Social Neuroscience group at the Institute of Cognitive Neuroscience at UCL on a dyadic multi-modal interaction study, aiming to understand conversation dynamics with a focus on gaze and turn-taking. The results show that people change gaze (from averted to direct and vice versa) more frequently when they are being looked at than when they are not. When not being looked at, they also direct their gaze towards their partners more than when they are being looked at. Another contribution of this work is an automated method for annotating speech and gaze data. Next, we consider agents' higher-level non-verbal behaviours, covering social attitudes. We present a pipeline to collect data and train a machine learning (ML) model that detects social attitudes in a user-VC interaction, in collaboration with two game studios: Dream Reality Interaction and Maze Theory. We present a case study for the ML pipeline on social engagement recognition for the Peaky Blinders narrative VR game from Maze Theory. We use a reinforcement learning algorithm with imitation learning rewards and a temporal memory element. The results show that a model trained on raw data does not generalise and performs worse (60% accuracy) than one trained on socially meaningful data (83% accuracy). In IVEs, people embody avatars, and avatar appearance can affect social interactions. In collaboration with Microsoft Research, we report a longitudinal mixed-reality study of avatar appearance in real-world meetings between co-workers, comparing personalised full-body realistic and cartoon avatars. The results imply that when participants use realistic avatars first, they may have higher expectations and perceive their colleagues' emotional states less accurately. Participants may also become more accustomed to cartoon avatars over time, and the overall use of avatars may lead to less accurate perception of negative emotions. The work presented here contributes to the detection and generation of nonverbal cues for VCs in IVEs, important building blocks for creating autonomous agents. It also contributes to the games and workplace domains through an immersive ML pipeline for detecting social attitudes and through insights into using different avatar styles over time in real-world meetings.
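    The dyad-study result above (gaze changes conditioned on whether the partner is looking) reduces to a simple computation over aligned annotation timelines. Below is a minimal Python sketch assuming a hypothetical frame-level boolean encoding of the annotated gaze data; the thesis's actual annotation format and tooling are not specified here.

```python
# Minimal sketch (not the authors' code): gaze-change frequency conditioned
# on whether the partner is looking, from two aligned boolean timelines
# sampled at a fixed frame rate. The data format is an assumption.

def gaze_change_rates(subject_direct, partner_looking, dt=1 / 30):
    """subject_direct[i]: True if the subject's gaze is direct at frame i.
    partner_looking[i]: True if the partner looks at the subject at frame i.
    Returns gaze changes per second under each partner-gaze condition."""
    changes = {True: 0, False: 0}
    seconds = {True: 0.0, False: 0.0}
    for i in range(1, len(subject_direct)):
        cond = partner_looking[i]
        seconds[cond] += dt
        if subject_direct[i] != subject_direct[i - 1]:  # averted <-> direct
            changes[cond] += 1
    return {
        "looked_at": changes[True] / seconds[True] if seconds[True] else 0.0,
        "not_looked_at": changes[False] / seconds[False] if seconds[False] else 0.0,
    }

# Example with synthetic 30 Hz frames:
subject = [False, True, True, False, True, False, True, True]
partner = [True, True, True, True, False, False, False, False]
print(gaze_change_rates(subject, partner))
```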

    Life-Sized Audiovisual Spatial Social Scenes with Multiple Characters: MARC & SMART-I²

    With the increasing use of virtual characters in virtual and mixed reality settings, the coordination of realism in audiovisual rendering and expressive virtual characters becomes a key issue. In this paper we introduce a new system that combines two existing systems to tackle the issue of realism and high quality in audiovisual rendering and life-sized expressive characters. The goal of the resulting SMART-MARC platform is to investigate the impact of realism on multiple levels: spatial audiovisual rendering of a scene, and the appearance and expressive behaviors of virtual characters. Potential interactive applications include mediated communication in virtual worlds, therapy, games, the arts, and e-learning. Future experimental studies will focus on 3D audio/visual coherence, social perception, and ecologically valid interaction scenes.

    Virtual Reality for Teacher Training: An Experiential Approach to Classroom Conflict Management

    This chapter discusses the use of virtual reality (VR) in the training of preservice secondary education teachers in Spain as an integral part of their learning process. The authors propose some premises from which to design a training program to improve preservice teachers' communicative competence and their ability to manage conflict impacting the classroom climate. First, the chapter explains the experiential and experimental potential of a virtual learning environment (VLE): its ability to create personalized virtual worlds and to generate insightful instant feedback and feedforward. An example of a prototype scenario designed on this conceptual basis is then provided. Finally, the chapter presents an overview of an educational proposal to implement this experiential immersive opportunity for preservice teachers to interact with and manage disruptive situations in a safe and reliable environment conducive to the development of key communicative competences and strategies to turn conflict into a learning opportunity.

    Revisiting Milgram’s cyranoid method: experimenting with hybrid human agents

    In two studies based on Stanley Milgram's original pilots, we present the first systematic examination of cyranoids as social psychological research tools. A cyranoid is created by joining, in real time, the body of one person with speech generated by another via covert speech shadowing. The resulting hybrid persona can then interact with third parties face-to-face. We show that naïve interlocutors perceive a cyranoid to be a unified, autonomously communicating person, evidence for a phenomenon Milgram termed the “cyranic illusion.” We also show that creating cyranoids composed of contrasting identities (a child speaking adult-generated words and vice versa) can be used to study how stereotyping and person perception are mediated by inner (dispositional) vs. outer (physical) identity. Our results establish the cyranoid method as a unique means of obtaining experimental control over inner and outer identities within social interactions rich in mundane realism.

    Interactive Virtual Training: Implementation for Early Career Teachers to Practice Classroom Behavior Management

    Teachers who are equipped with the skills to manage and prevent disruptive behaviors increase the potential for their students to achieve academically and socially. Student success increases when prevention strategies and effective classroom behavior management (CBM) are implemented in the classroom. However, early career teachers (ECTs), those with less than five years of experience, are often ill-equipped to handle disruptive students. ECTs describe disruptive behaviors as a major source of stress given their limited training in CBM, and report them as one of the main reasons for leaving the field. Virtual training environments (VTEs) combined with advances in virtual social agents can support the training of CBM. Although VTEs for teachers already exist, requirements to guide future research and development of similar training systems have not been defined. We propose a set of six requirements for VTEs for teachers, established from a survey of the literature and from iterative lifecycle activities while building our own VTE for teachers. We present several evaluations of our VTE using methodologies and metrics we developed to assess whether all requirements were met. Our VTE simulates interactions with virtual animated students based on real classroom situations to help ECTs practice their CBM. We enhanced our classroom simulator to further explore two aspects of our requirements: interaction devices and emotional virtual agents. Interaction devices were explored by comparing the effect of immersive technologies on users' experience (UX), such as presence, co-presence, engagement, and believability. We adapted our VTE, originally built for desktop computers, to be compatible with two immersive VR platforms. Results show that our VTE generates high levels of UX across all VR platforms. Furthermore, we enhanced our virtual students to display emotions using facial expressions, as current studies do not address whether emotional virtual agents provide the same level of UX across different VR platforms. We assessed the effects of VR platforms and display of emotions on UX. Our analysis shows that facial expressions have a greater impact when using a desktop computer. We propose future work on immersive VTEs using emotional virtual agents.
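    As an illustration of the "emotional virtual agents" aspect, one common way to make an animated student display emotions is to map emotion intensities to facial blendshape weights. The following is a minimal Python sketch under that assumption; the emotion and blendshape names are hypothetical and not taken from the authors' system.

```python
# Illustrative sketch (not the authors' simulator): combining a virtual
# student's weighted emotional states into facial blendshape weights.
EMOTION_BLENDSHAPES = {
    "bored":      {"eyelid_droop": 0.7, "mouth_corner_down": 0.3},
    "frustrated": {"brow_lower": 0.8, "mouth_press": 0.6},
    "engaged":    {"brow_raise": 0.4, "smile": 0.5},
}

def blend(emotions):
    """Combine weighted emotions into one set of blendshape weights in [0, 1]."""
    out = {}
    for emotion, intensity in emotions.items():
        for shape, weight in EMOTION_BLENDSHAPES[emotion].items():
            out[shape] = min(1.0, out.get(shape, 0.0) + intensity * weight)
    return out

# A student who is mostly bored and slightly frustrated:
print(blend({"bored": 0.8, "frustrated": 0.3}))
```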

    Three Essays on Behavioral Adaptability in the Leadership Context


    Creating Patient Context: Empathy and Attitudes Toward Diabetes Following Virtual Immersion.

    BACKGROUND: Pandemic circumstances created challenges for doctor of physical therapy (DPT) students to understand social determinants of health (SDH) in clinical rotations. Instead of canceling clinical rotations, a virtual reality cinema (cine-VR) education series was implemented. The purpose of this project is to describe the effect of this simulated immersion on student empathy and attitudes toward diabetes. METHOD: The DPT students (n = 59) participated in 12 cine-VR education modules, completing surveys at three time points as part of coursework. The students completed baseline measures of the Diabetes Attitude Scale-Version 3 (DAS-3) and Jefferson Empathy Scale (JES) and were then immersed in the 12 cine-VR modules. One week after module completion, students participated in a class discussion about the modules. The students repeated the JES and DAS-3 scales postclass and six weeks later. Three subscales from the Presence Questionnaire (PQ) were used to measure the virtual experience. RESULTS: Student scores on three DAS-3 subscales significantly improved at posttest, including attitude toward patient autonomy (mean: 0.75, SD: 0.45). DISCUSSION: These modules can allow for a shared student experience that improves diabetes attitudes, increases empathy, and fosters meaningful classroom discussion. The cine-VR experience is flexible, and modules allow students to engage in aspects of a patient's life that were not available otherwise.
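    For readers unfamiliar with the repeated-measures design described above, a paired baseline-versus-posttest comparison on one subscale looks like the sketch below. The scores are hypothetical, not study data, and the choice of a paired t-test is an assumption rather than the authors' reported analysis.

```python
# Illustrative sketch only: a paired comparison of baseline vs. post-module
# scores on one DAS-3 subscale, mirroring a pre/post design. Numbers are
# hypothetical. Requires the third-party package scipy.
from scipy import stats

baseline = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0, 3.5, 3.3]  # hypothetical scores
posttest = [3.8, 3.9, 3.5, 4.1, 3.7, 3.6, 4.0, 3.9]

t, p = stats.ttest_rel(posttest, baseline)  # paired t-test
mean_change = sum(b - a for a, b in zip(baseline, posttest)) / len(baseline)
print(f"mean change = {mean_change:.2f}, t = {t:.2f}, p = {p:.4f}")
```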

    Development and evaluation of an interactive virtual audience for a public speaking training application

    Introduction: Fear of public speaking is the most common social fear. Virtual reality (VR) training applications are a promising tool for reducing speaking anxiety and improving public speaking skills. To be successful, such applications should feature high scenario fidelity, and one way to achieve this is to implement realistic speaker-audience interactive behavior. Objective: The study aimed to develop and evaluate a realistic and interactive audience for a VR public speaking training application. First, an observation study on real speaker-audience interactive behavior patterns was conducted. Second, the identified patterns were implemented in the VR application. Finally, an evaluation study assessed users' perceptions of the training application. Observation Study (1): Because of the lack of data on real speaker-audience interactive behavior, the first research question was "What speaker-audience interaction patterns can be identified in real life?". A structured, non-participant, overt observation study was conducted: a real audience was video recorded and the content analyzed. The sample yielded N = 6,484 observed interaction patterns. It was found that speakers initiate dialogues more often than audience members do, and how audience members react to speakers' facial expressions and gestures. Implementation Study (2): To find efficient ways of implementing the results of the observation study in the training application, the second research question was formulated as "How can speaker-audience interaction patterns be implemented into the virtual public speaking application?". The hardware setup comprised a CAVE, Infitec glasses, and ART head tracking; the software was realized with 3D-Excite RTT DeltaGen 12.2. Several possible technical solutions were explored systematically until efficient ones were found. As a result, self-created audio recognition, Kinect motion recognition, Affectiva facial recognition, and manually generated questions were implemented to provide interactive audience behavior in the public speaking training application. Evaluation Study (3): To find out whether implementing interactive behavior patterns met users' expectations, the third research question was formulated as "How does the interactivity of a virtual public speaking application affect user experience?". An experimental, cross-sectional user study was conducted with N = 57 participants (65% men, 35% women; mean age = 25.98, SD = 4.68) who used either an interactive or a non-interactive VR application condition. Results revealed a significant difference in users' perception of the two conditions.
General Conclusions: Speaker-audience interaction patterns observed in real life were incorporated into a VR application that helps people overcome the fear of public speaking and train their public speaking skills. The findings showed a high relevance of interactivity for VR public speaking applications. Although questions from the audience were still regulated manually by an operator, the newly designed audience could interact with the speakers; the presented VR application is therefore of potential value in helping people train their public speaking skills. As limitations, the questions were operator-controlled and the study included only participants not suffering from high degrees of public speaking fear. Future work may use more advanced technology, such as speech recognition, 3D recordings, or live 3D streams of an actual person, and include participants with high degrees of public speaking fear.
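    As a rough illustration of the implementation study's design, detected speaker cues (from audio, motion, and facial-expression recognizers) can drive audience reactions through a simple cue-to-behavior mapping. The following is a minimal Python sketch; the cue and reaction names are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch, not the thesis implementation: dispatching detected
# speaker cues to reactions for each virtual audience member.
import random

REACTIONS = {
    "speech_pause": ["nod", "shift_posture"],
    "gesture": ["look_at_speaker", "nod"],
    "smile": ["smile_back", "look_at_speaker"],
    "question_triggered": ["raise_hand"],  # questions were operator-controlled
}

def audience_react(cue, n_members=5):
    """Pick a reaction for each audience member given a detected speaker cue."""
    options = REACTIONS.get(cue, ["idle"])
    return [random.choice(options) for _ in range(n_members)]

# Example: the audio recognizer reports a pause in the speaker's speech.
print(audience_react("speech_pause"))
```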