
    Visualising mixed reality simulation for multiple users

    Cowling, MA (ORCiD: 0000-0003-1444-1563). Blended reality seeks to encourage co-presence in the classroom, blending the student experience across virtual and physical worlds. Similarly, Mixed Reality, a continuum between virtual and real environments, now allows learners to work in both the physical and the digital world simultaneously, especially when combined with an immersive headset experience. This provides innovative new experiences for learning, but faces the challenge that most of these experiences are single-user, leaving others outside the new environment. The question therefore becomes: how can a mixed reality simulation be experienced by multiple users, and how can we present that simulation effectively to users to create a true blended reality environment? This paper proposes a study that uses existing screen production research into the user and spectator to produce a mixed reality simulation suitable for multiple users. A research method using Design-Based Research is also presented to assess the usability of the approach.

    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use it in an application or research project. What is then needed is an easy-to-use and flexible, but also validated, mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of VC distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and confusion details of the basic emotions are compatible with findings in psychology.
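    A Facial Action Coding mechanism like the one described maps emotions to weighted Action Unit (AU) activations that can be scaled by intensity and blended. The following minimal sketch illustrates the idea; the AU mappings and the names `EMOTION_AUS` and `expression` are illustrative assumptions, not the report's validated tables or API.

    ```python
    # Illustrative FACS-style expression generator. The AU-to-emotion
    # mappings below are simplified examples, not validated values.
    EMOTION_AUS = {
        "joy":      {"AU6": 1.0, "AU12": 1.0},             # cheek raiser, lip corner puller
        "anger":    {"AU4": 1.0, "AU5": 0.7, "AU23": 0.8}, # brow lowerer, lid raiser, lip tightener
        "surprise": {"AU1": 1.0, "AU2": 1.0, "AU26": 0.9}, # brow raisers, jaw drop
    }

    def expression(emotion, intensity=1.0, blend_with=None, weight=0.5):
        """Return AU activations for an emotion, scaled by intensity and
        optionally blended with a second emotion at the given weight."""
        aus = {au: v * intensity for au, v in EMOTION_AUS[emotion].items()}
        if blend_with:
            for au, v in EMOTION_AUS[blend_with].items():
                # Keep the stronger activation when both emotions drive an AU.
                aus[au] = max(aus.get(au, 0.0), v * intensity * weight)
        return aus

    # A half-intensity joy expression:
    print(expression("joy", intensity=0.5))  # {'AU6': 0.5, 'AU12': 0.5}
    ```

    Scaling each AU rather than switching between discrete poses is what makes the expressions dynamic: an animation system can interpolate the activation values over time.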

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Understanding the Cognitive Impact of Emerging Web Technologies: A Research Focus Area for Embodied, Extended and Distributed Approaches to Cognition

    Alongside existing research into the social, political and economic impacts of the Web, there is also a need to explore the effects of the Web on our cognitive profile. This is particularly so as the range of interactive opportunities we have with the Web expands under the influence of a range of emerging technologies. Embodied, extended and distributed approaches to cognition are relevant to understanding the potential cognitive impact of these new technologies because of the emphasis they place on extra-neural and extra-corporeal factors in the shaping of our cognitive capabilities at both an individual and collective level. The current paper outlines a number of areas where embodied, extended and distributed approaches to cognition are useful in understanding the impact of emerging Web technologies on future forms of both human and machine intelligence.

    Proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET 2013)

    This book contains the proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET 2013), which was held on 16-17 September 2013 in Paphos (Cyprus) in conjunction with the EC-TEL conference. The workshop, and hence the proceedings, is divided into two parts: on Day 1 the EuroPLOT project and its results are introduced, with papers about the specific case studies and their evaluation. On Day 2, peer-reviewed papers are presented which address specific topics and issues going beyond the EuroPLOT scope. This workshop is one of the deliverables (D 2.6) of the EuroPLOT project, which was funded from November 2010 to October 2013 by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission through the Lifelong Learning Programme (LLL) by grant #511633. The purpose of this project was to develop and evaluate Persuasive Learning Objects and Technologies (PLOTS), based on the ideas of BJ Fogg. The purpose of this workshop is to summarize the findings obtained during the project and disseminate them to an interested audience. Furthermore, it shall foster discussion about the future of persuasive technology and design in the context of learning, education and teaching. The international community working in this area of research is relatively small. Nevertheless, we received a number of high-quality submissions which went through a peer-review process before being selected for presentation and publication. We hope that the information found in this book is useful to the reader and that more interest in this novel approach of persuasive design for teaching, education and learning is stimulated. We are very grateful to the organisers of EC-TEL 2013 for allowing us to host IWEPLET 2013 within their organisational facilities, which helped us a lot in preparing this event.
I am also very grateful to everyone in the EuroPLOT team for collaborating so effectively over these three years towards creating excellent outputs, and for being such a nice group with a very positive spirit beyond work as well. Finally, I would like to thank the EACEA for providing the financial resources for the EuroPLOT project and for being very helpful when needed. This funding made it possible to organise the IWEPLET workshop without charging participants a fee.

    ‘It’s Almost Like Talking to a Person’: Student Disclosure to Pedagogical Agents in Sensitive Settings.

    This paper presents the findings of a pilot study which used pedagogical agents to examine disclosure in educational settings. The study used responsive evaluation to explore how the use of pedagogical agents might affect students’ truthfulness and disclosure by asking them to respond to a lifestyle-choices survey delivered by a web-based pedagogical agent. Findings indicate that emotional connection with pedagogical agents was intrinsic to the user’s sense of trust and therefore likely to affect levels of truthfulness and engagement. The implications of this study are that truthfulness, personalisation and emotional engagement are all vital components in using pedagogical agents to enhance online learning.

    Remote Mixed Reality Collaborative Laboratory Activities: Learning Activities within the InterReality Portal

    Technology is changing the way we experience education, from one-dimensional (physical) to multi-dimensional (physical and virtual) education using a diversity of resources such as web-based platforms (eLearning), videoconferences, eBooks and innovative technologies (e.g. mixed reality, virtual worlds, immersive technology, etc.). This represents greater opportunities for universities and educational institutions to collaborate with partners from around the world and to be part of today's knowledge economy. It also enables greater opportunities to experience distance learning, modifying our experience of both space and time: specific spatial locations become ubiquitous, and time becomes asynchronous or synchronous according to our needs. The use of virtual and remote laboratory activities is an example of the application of some of these concepts. In this work-in-progress paper we propose a different approach to the integration of the physical and virtual worlds by creating remote mixed reality collaborative laboratory activities within an InterReality Portal learning environment, thereby extending our previous progress towards these goals. The learning goal of our mixed reality lab activity is to produce Internet-of-Things-based computer projects using combinations of Cross-Reality (xReality) and virtual objects, based on co-creative and collaborative interaction between geographically dispersed students. © 2012 IEEE

    Using mixed-reality to develop smart environments

    Smart homes, smart cars and smart classrooms are now a reality as the world becomes increasingly interconnected by ubiquitous computing technology. The next step is to interconnect such environments; however, there are a number of significant barriers to advancing research in this area, most notably the lack of available environments, standards and tools. A possible solution is the use of simulated spaces; nevertheless, as realistic as we strive to make them, they are at best only approximations of the real spaces, with important differences such as utilising idealised rather than noisy sensor data. In this respect, an improvement on simulation is emulation, which uses specially adapted physical components to imitate real systems and environments. In this paper we present our work in progress towards the creation of a development tool for intelligent environments based on the interconnection of simulated, emulated and real intelligent spaces using a distributed model of mixed reality. To do so, we propose the use of physical/virtual components (xReality objects) able to be combined through a 3D graphical user interface, sharing real-time information. We present three scenarios of interconnected real and emulated spaces, used for education, achieving integration between real and virtual worlds.
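    The core idea of xReality objects, sharing real-time state between a physical component and its simulated or emulated counterparts, can be sketched as a small publish/subscribe mirror. The class name `XRealityObject` and the callback scheme below are assumptions for illustration, not the paper's actual architecture.

    ```python
    # Illustrative sketch: an xReality-style object propagates state changes
    # from a real component to every subscribed virtual/emulated counterpart,
    # keeping interconnected spaces in sync.
    class XRealityObject:
        def __init__(self, name):
            self.name = name
            self.state = {}
            self._subscribers = []  # counterparts in other spaces

        def subscribe(self, callback):
            """Register a counterpart to be notified of state changes."""
            self._subscribers.append(callback)

        def update(self, **changes):
            """Apply a state change and push it to all counterparts."""
            self.state.update(changes)
            for notify in self._subscribers:
                notify(self.name, dict(self.state))

    # A virtual lamp mirrors its physical twin:
    mirror = {}
    lamp = XRealityObject("desk_lamp")
    lamp.subscribe(lambda name, state: mirror.update({name: state}))
    lamp.update(on=True, brightness=0.8)  # e.g. a real sensor event
    print(mirror["desk_lamp"])  # {'on': True, 'brightness': 0.8}
    ```

    In a distributed deployment the callbacks would be network messages rather than local function calls, which is where noisy, delayed sensor data (as opposed to idealised simulation data) becomes relevant.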