
    A method for autonomously positioning avatars in a group

    In this paper, we describe a method for positioning a group of avatars in a virtual environment. The method aims at a grouping that appears natural for people attending a guided tour and was developed in particular to assist participants by autonomously positioning their avatars at each stop of a virtual tour. The geometry of the virtual environment is the key input, but the engagement of participants and possible social networks are also taken into account. Consequently, the method may serve to position avatars in similar types of situations.
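The abstract does not give the positioning algorithm itself, but the idea of a natural grouping at a tour stop can be illustrated with a minimal sketch: avatars are placed on an arc at a fixed distance from the point of interest, facing it. The function name and parameters below are illustrative assumptions, not the paper's method.

```python
import math

def arc_positions(n, center, radius, facing_angle, spread=math.pi / 2):
    """Place n avatars on an arc of angular width `spread`, centred on
    facing_angle, all at distance `radius` from the point of interest
    at `center` (a naive stand-in for a tour-stop grouping)."""
    if n == 1:
        angles = [facing_angle]
    else:
        start = facing_angle - spread / 2
        step = spread / (n - 1)
        angles = [start + i * step for i in range(n)]
    cx, cy = center
    return [(cx + radius * math.cos(a), cy + radius * math.sin(a))
            for a in angles]
```

A fuller version would weight positions by participant engagement and social ties, as the abstract suggests, and check the environment geometry for obstacles.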

    Navigation in REVERIE's Virtual Environments

    This work presents a novel navigation system for social collaborative virtual environments populated with multiple characters. The navigation system ensures collision-free movement of avatars and agents. It supports direct user manipulation, automated path planning, positioning to get seated, and follow-me behaviour for groups. In follow-me mode, the socially aware system manages the mise en place of individuals within a group. A use case centred on an educational virtual trip to the European Parliament, created for the REVERIE FP7 project, also serves as an example to bring forward aspects of such navigational requirements.
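The follow-me behaviour described above can be sketched as a simple per-frame steering update: each follower moves toward the leader but holds position once within a comfort distance, so group members neither lag behind nor pile onto the leader. This is an illustrative toy, not REVERIE's actual controller; the function name and default distances are assumptions.

```python
def follow_step(leader_pos, follower_pos, min_dist=1.5, speed=0.2):
    """One update step of a naive follow-me behaviour: move the follower
    toward the leader, stopping once within min_dist to avoid overlap."""
    dx = leader_pos[0] - follower_pos[0]
    dy = leader_pos[1] - follower_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= min_dist:
        return follower_pos  # close enough; hold position
    step = min(speed, dist - min_dist)  # never overshoot the comfort radius
    return (follower_pos[0] + dx / dist * step,
            follower_pos[1] + dy / dist * step)
```

A real system would combine this steering with the path planner and collision avoidance mentioned in the abstract, and with social rules for where each group member should stand.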

    User experience evaluation of human representation in collaborative virtual environments

    Human embodiment/representation in virtual environments (VEs), like the human body in real life, is endowed with multimodal input/output capabilities that convey multiform messages, enabling communication, interaction and collaboration in VEs. This paper assesses how effectively different types of virtual human (VH) artefacts enable smooth communication and interaction in VEs. With special focus on the REal and Virtual Engagement In Realistic Immersive Environments (REVERIE) multi-modal immersive system prototype, a research project funded by the European Commission Seventh Framework Programme (FP7/2007-2013), the paper evaluates the effectiveness of REVERIE VH representation on the foregoing issues, based on two specifically designed use cases and through the lens of a set of design guidelines generated by previous extensive empirical user-centred research. The impact of REVERIE VH representations on the quality of user experience (UX) is evaluated through field trials. The output of the current study proposes directions for improving human representation in collaborative virtual environments (CVEs) as an extrapolation of lessons learned from the evaluation of REVERIE VH representation.

    Autonomous agents and avatars in REVERIE’s virtual environment

    In this paper, we describe the enactment of autonomous agents and avatars in the web-based social collaborative virtual environment of REVERIE, which supports natural, human-like behavior, physical interaction and engagement. Represented by avatars, users feel immersed in this virtual world, in which they can meet and share experiences as in real life. Like the avatars, autonomous agents that may act in this world are capable of demonstrating human-like non-verbal behavior and facilitate social interaction. We describe how reasoning components of the REVERIE system connect and cooperatively control autonomous agents and avatars representing a user.

    A study of verbal and nonverbal communication in Second Life - the ARCHI21 experience

    To appear in 2013. This is not the final version.
    Three-dimensional synthetic worlds introduce possibilities for nonverbal communication in computer-mediated language learning. This paper presents an original methodological framework for the study of multimodal communication in such worlds. It offers a classification of verbal and nonverbal communication acts in the synthetic world Second Life and outlines relationships between the different types of acts that are built into the environment. The paper highlights some of the differences between the synthetic world's communication modes and those of face-to-face communication, and exemplifies the interest of these for communication within a pedagogical context. We report on the application of the methodological framework to a course in Second Life which formed part of the European project ARCHI21. This course, for Architecture students, adopted a Content and Language Integrated Learning (CLIL) approach. The languages studied were French and English. A collaborative building activity in the students' L2 is considered, using a method designed to organise the data collected in screen recordings and to code and transcribe the multimodal acts. We explore whether nonverbal communication acts are autonomous in Second Life or whether interaction between synchronous verbal and nonverbal communication exists. Our study describes how the distribution of the verbal and nonverbal modes varied depending on the pre-defined role the student undertook during the activity. We also describe the use of nonverbal communication to overcome verbal miscommunication where direction and orientation were concerned. In addition, we illustrate how nonverbal acts were used to secure the context for deictic references to objects made in the verbal mode.
    Finally, we discuss the importance of nonverbal and verbal communication modes in the proxemic organisation of students, and the impact of proxemic organisation on the quantity of students' verbal production and the topics discussed in this mode. This paper seeks to contribute to some of the methodological reflections needed to better understand the affordances of synthetic worlds, including the verbal and nonverbal communication opportunities Second Life offers, how students use these, and their impact on the interaction concerning the task given to students.

    Synthesizing mood-affected signed messages: Modifications to the parametric synthesis

    This is the author’s version of a work that was accepted for publication in International Journal of Human-Computer Studies. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in International Journal of Human-Computer Studies, 70, 4 (2012). DOI: 10.1016/j.ijhcs.2011.11.003
    This paper describes the first approach to synthesizing mood-affected signed content. The research focuses on the modifications applied to a parametric sign language synthesizer (based on phonetic descriptions of the signs). We propose modifications that allow for the synthesis of different perceived frames of mind within synthetic signed messages. Three of these proposals focus on modifications to three of the signs' phonological parameters (the hand shape, the movement and the non-hand parameter). The other two proposals focus on the temporal aspect of the synthesis (sign speed and transition duration) and the representation of muscular tension through inverse kinematics procedures. The resulting variations have been evaluated by Spanish deaf signers, who concluded that our system can generate the same signed message with three different frames of mind, which are correctly identified by Spanish Sign Language signers.
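The temporal modifications mentioned in the abstract (sign speed and transition duration) can be pictured as per-mood scaling of the synthesizer's timing parameters. The mapping below is a minimal sketch with invented factor values; the mood names, dictionary, and function are illustrative assumptions, not the paper's actual parameterization.

```python
# Illustrative mood-to-timing mapping (values invented for the sketch):
# a tense mood speeds signing up and sharpens transitions, a subdued
# mood slows signing down and lengthens transitions.
MOOD_TIMING = {
    "neutral": {"speed_factor": 1.0, "transition_factor": 1.0},
    "angry":   {"speed_factor": 1.4, "transition_factor": 0.6},
    "sad":     {"speed_factor": 0.7, "transition_factor": 1.5},
}

def adjust_timing(sign_durations, transition_durations, mood):
    """Scale per-sign durations and inter-sign transition durations
    by the factors associated with the given mood."""
    t = MOOD_TIMING[mood]
    signs = [d / t["speed_factor"] for d in sign_durations]
    transitions = [d * t["transition_factor"] for d in transition_durations]
    return signs, transitions
```

The paper's other proposals (hand shape, movement, non-hand parameter, and muscular tension via inverse kinematics) operate on the spatial side of the synthesis and are not captured by this timing-only sketch.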