36,144 research outputs found

    User experience evaluation of human representation in collaborative virtual environments

    Human representation in virtual environments (VEs), much like the human body in real life, is endowed with multimodal input/output capabilities that convey multiform messages, enabling communication, interaction and collaboration in VEs. This paper assesses how effectively different types of virtual human (VH) artefacts enable smooth communication and interaction in VEs. With special focus on the REal and Virtual Engagement In Realistic Immersive Environments (REVERIE) multimodal immersive system prototype, a research project funded by the European Commission Seventh Framework Programme (FP7/2007-2013), the paper evaluates the effectiveness of REVERIE VH representation on these issues, based on two specifically designed use cases and through the lens of a set of design guidelines generated by previous extensive empirical user-centred research. The impact of REVERIE VH representations on the quality of user experience (UX) is evaluated through field trials. The study concludes with directions for improving human representation in collaborative virtual environments (CVEs), extrapolated from the lessons learned in evaluating REVERIE VH representation.

    Evaluating the impact of multimodal Collaborative Virtual Environments on user’s spatial knowledge and experience of gamified educational tasks

    Several research projects in spatial cognition have suggested Virtual Environments (VEs) as an effective way of facilitating mental map development of a physical space. In the study reported in this paper, we evaluated the effectiveness of multimodal real-time interaction in developing an understanding of the VE after completing gamified educational tasks. We also measured the impact of these design elements on the user’s experience of the educational tasks. The VE used resembles an art gallery and was built using REVERIE (Real and Virtual Engagement In Realistic Immersive Environment), a framework designed to enable multimodal communication on the Web. We compared the impact of REVERIE VG with an educational platform called Edu-Simulation for the same gamified educational tasks. We found that the multimodal VE had no impact on the ability of students to retain a mental model of the virtual space. However, we also found that students thought it was easier to build a mental map of the virtual space in REVERIE VG. This means that using a multimodal CVE in a gamified educational experience does not benefit spatial performance, but neither does it cause distraction. The paper ends with conclusions, future work, and suggestions for improving mental map construction and user experience in multimodal CVEs.

    Performance Evaluation of Distributed Computing Environments with Hadoop and Spark Frameworks

    Recently, due to the rapid development of information and communication technologies, data are created and consumed in an avalanche-like way. Distributed computing creates the preconditions for analyzing and processing such Big Data by distributing the computations among a number of compute nodes. In this work, the performance of distributed computing environments based on the Hadoop and Spark frameworks is estimated for real and virtual versions of clusters. As a test task, we chose the classic use case of counting words in texts of various sizes. It was found that the running times grow very quickly with dataset size, even faster than a power function. For the real and virtual cluster implementations, this tendency is similar for both the Hadoop and Spark frameworks. Moreover, speedup values decrease significantly with the growth of dataset size, especially for the virtual version of the cluster configuration. The problem of growing data generated by IoT and multimodal (visual, sound, tactile, neuro and brain-computing, muscle and eye tracking, etc.) interaction channels is presented. In the context of this problem, the current observations as to the running times and speedup on the Hadoop and Spark frameworks in real and virtual cluster configurations can be very useful for proper scaling-up and efficient job management, especially for machine learning and Deep Learning applications, where Big Data are widely present.
    Comment: 5 pages, 1 table, 2017 IEEE International Young Scientists Forum on Applied Physics and Engineering (YSF-2017), Lviv, Ukraine.
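    For readers unfamiliar with the benchmark, word counting is the canonical map/reduce example. A minimal PySpark sketch of such a job follows; the application name and HDFS paths are illustrative placeholders, not taken from the paper.

        # Minimal word-count benchmark sketch in PySpark.
        # The app name and HDFS paths below are hypothetical placeholders.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("WordCountBenchmark").getOrCreate()
        sc = spark.sparkContext

        lines = sc.textFile("hdfs:///data/corpus.txt")    # hypothetical input
        counts = (
            lines.flatMap(lambda line: line.split())      # split lines into words
                 .map(lambda word: (word, 1))             # emit (word, 1) pairs
                 .reduceByKey(lambda a, b: a + b)         # sum the counts per word
        )
        counts.saveAsTextFile("hdfs:///out/word_counts")  # hypothetical output
        spark.stop()

    Timing such a job over progressively larger inputs, on both a physical and a virtualised cluster, is essentially the experiment the abstract describes.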

    Semantic Virtual Environments with Adaptive Multimodal Interfaces

    We present a system for real-time configuration of multimodal interfaces to Virtual Environments (VEs). The flexibility of our tool is supported by a semantics-based representation of VEs. Semantic descriptors are used to define interaction devices and the virtual entities under control. We use portable (XML) descriptors to define the I/O channels of a variety of interaction devices. Semantic description of virtual objects turns them into reactive entities with which the user can communicate in multiple ways. This article gives details on the semantics-based representation and presents some examples of multimodal interfaces created with our system, including gesture-based and PDA-based interfaces, amongst others.
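    The article's actual descriptor schema is not reproduced in this abstract, so the following is a purely hypothetical sketch of what a portable XML descriptor for a device's I/O channels might look like, parsed with Python's standard library; all element and attribute names are invented for illustration.

        # Hypothetical sketch: parsing an XML descriptor that declares the
        # I/O channels of an interaction device. The schema is invented here;
        # the paper's real descriptor format may differ.
        import xml.etree.ElementTree as ET

        DESCRIPTOR = """
        <device name="data-glove">
          <input channel="finger-flexion" type="continuous" dims="5"/>
          <input channel="gesture" type="discrete"/>
          <output channel="vibration" type="continuous" dims="1"/>
        </device>
        """

        root = ET.fromstring(DESCRIPTOR)
        print(f"Device: {root.get('name')}")
        for io in root:                         # iterate over channel elements
            print(f"  {io.tag}: {io.get('channel')} ({io.get('type')})")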

    A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments

    We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The specific corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies with teacher guidance in an online virtual ballet studio. As the data corpus is focused on this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. In the corpus, each of several dancers performs a number of fixed choreographies, each of which is graded according to a number of specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus also includes distinctive events for data stream synchronisation. Although the data corpus is tailored specifically to an online dance class application scenario, the data is free to download and use for any research and development purposes.
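    To illustrate how such synchronisation events might be used (a hedged sketch, not the corpus's actual tooling): if each unsynchronised stream is a list of (timestamp, sample) pairs and the timestamp of a shared distinctive event, say a clap, is known per stream, alignment reduces to subtracting the per-stream event time.

        # Hedged sketch of event-based stream alignment. Stream layout,
        # timestamps and the clap event are invented for illustration.
        def align_stream(stream, event_time):
            """Shift timestamps so the shared event occurs at t = 0."""
            return [(t - event_time, sample) for t, sample in stream]

        # Two unsynchronised streams that both observed the same clap.
        audio = [(10.00, "a0"), (10.50, "a1"), (11.00, "a2")]  # clap at 10.50
        depth = [(3.20, "d0"), (3.70, "d1"), (4.20, "d2")]     # clap at 3.70

        aligned_audio = align_stream(audio, event_time=10.50)
        aligned_depth = align_stream(depth, event_time=3.70)
        # Both streams now share a common clock: the clap sits at t = 0.0.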

    The impact of multimodal collaborative virtual environments on learning: A gamified online debate

    Online learning platforms are integrated systems designed to provide students and teachers with information, tools and resources to facilitate and enhance the delivery and management of learning. In recent years platform designers have introduced gamification and multimodal interaction as ways to make online courses more engaging and immersive. Current Web-based platforms provide a limited degree of immersion in learning experiences, thereby diminishing their potential learning impact. To improve immersion, it is necessary to stimulate some or all of the human senses by engaging users in an environment that perceptually surrounds them and allows intuitive and rich interaction with other users and with its content. Learning in these collaborative virtual environments (CVEs) can be aided by increasing motivation and engagement through the gamification of the educational task. This rich interaction, combining multimodal stimulation with gamification of the learning experience, has the potential to draw students into the learning experience and improve learning outcomes. This paper presents the results of an experimental study designed to evaluate the impact of multimodal real-time interaction on user experience and learning of gamified educational tasks completed in a CVE. Secondary school teachers and students participated in the study. The multimodal CVE is an accurate reconstruction of the European Parliament in Brussels, developed using the REVERIE (Real and Virtual Engagement In Realistic Immersive Environment) framework. In the study, we compared the impact of the VR parliament to a non-multimodal control (an educational platform called Edu-Simulation) for the same educational tasks. Our results show that the multimodal CVE improves student learning performance and aspects of subjective experience when compared to the non-multimodal control. More specifically, it had a more positive effect on the students’ ability to generate ideas, and it facilitated a sense of presence in the VE (strong emotional presence and a degree of spatial presence). The paper concludes with a discussion of future work that focusses on combining the best features of both systems into a hybrid system to increase educational impact, and on evaluating the prototype in real-world educational scenarios.

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and the virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those not able to be physically present during a meeting to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.

    Towards Simulating Humans in Augmented Multi-party Interaction

    Human-computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. In its multimodal interaction with a smart environment, however, the user displays characteristics showing how he or she, not necessarily consciously, provides the smart environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and the human participants in it. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in the European AMI research project.
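    As a purely illustrative sketch of the profile contents the abstract enumerates (field and class names are invented, not taken from the AMI project), such a profile could be modelled as a simple data structure:

        # Illustrative sketch of a user profile combining the elements the
        # abstract lists: preferences, interests, characteristics, interaction
        # behavior, and a captured physical representation. Names are invented.
        from dataclasses import dataclass, field

        @dataclass
        class PhysicalRepresentation:
            """Pose data obtained via multimodal capturing techniques."""
            skeleton_joints: dict = field(default_factory=dict)  # joint -> (x, y, z)
            gaze_direction: tuple = (0.0, 0.0, 1.0)

        @dataclass
        class UserProfile:
            preferences: dict = field(default_factory=dict)
            interests: list = field(default_factory=list)
            characteristics: dict = field(default_factory=dict)
            interaction_behavior: list = field(default_factory=list)  # observed events
            physical: PhysicalRepresentation = field(default_factory=PhysicalRepresentation)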