
    No Grice: Computers that Lie, Deceive and Conceal

    In the future, our daily-life interactions with other people, with computers, robots, and smart environments will be recorded and interpreted by computers or by intelligence embedded in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusing and reasoning about such information makes it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, tactile or smell sensors, et cetera. Sensors can be embedded in an environment, but they can also move around, for example when they are part of a mobile social robot, part of devices we carry around, or embedded in our clothes or body.

    Our daily-life behavior and interactions are thus recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments, and do these environments always want to cooperate with us? In this paper we argue that there are many reasons why users, or rather the human partners of these environments, may want to keep information about their intentions and emotions hidden from these smart environments. Conversely, their artificial interaction partners may have similar reasons not to give away all the information they have, or to treat their human partner as an opponent rather than as someone to be supported by smart technology.

    We elaborate on this in this paper, surveying examples of human-computer interaction where being explicit about intentions and feelings is not necessarily a goal. In subsequent sections we look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or trainer acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious-game or educational environments), and (5) games and entertainment applications

    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers were carefully reviewed and selected from the submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications

    Machinima And Video-based Soft Skills Training

    Multimedia training methods have traditionally relied heavily on video-based technologies, and significant research has shown these to be very effective training tools. However, producing video is time- and resource-intensive. Machinima (pronounced 'muh-sheen-eh-mah') technologies are based on video gaming technology: video game engines are manipulated into unique scenarios for entertainment or for training and practice applications, and those scenarios are converted into video vignettes that tell a story. These vignettes can be interconnected with branching points in much the same way that educational videos are interconnected as vignettes between decision points. This study addressed the effectiveness of machinima-based soft-skills education using avatar actors versus traditional video teaching using human actors. It also investigated the difference in presence reactions between video vignettes produced with avatar actors and those produced with human actors. Results indicated that the difference in training and/or practice effectiveness is statistically insignificant for presence, interactivity, quality, and the skill of assertiveness. The skill of active listening presented a mixed result, indicating the need for careful attention to detail in situations where body language and facial expressions are critical to communication. This study demonstrates that a significant opportunity exists for the exploitation of avatar actors in video-based instruction

    Training Effects of Adaptive Emotive Responses From Animated Agents in Simulated Environments

    Humans are distinct from machines in their capacity to emote, stimulate, and express emotions. Because emotions play such an important role in human interactions, human-like agents used in pedagogical roles for simulation-based training should properly reflect emotions. Current research on the development of this type of agent focuses on basic agent interface characteristics as well as character-building qualities. However, human-like agents should provide emotion-like qualities that are clearly expressed, properly synchronized, and that simulate complex, real-time interactions through adaptive emotion systems. The research conducted for this dissertation was a quantitative investigation using a 3 (within) x 2 (between) x 3 (within) factorial design. A total of 56 paid participants consented to complete the study. Independent variables were emotion intensity (low, moderate, and high), level of expertise (novice versus experienced participant), and number of trials. Dependent measures were visual attention, emotional response towards the animated agents, simulation performance score, and learners' perception of the pedagogical agent persona while participants interacted with a pain assessment and management simulation. While no relationships were found between the level of emotion intensity portrayed by the animated agents and the participants' visual attention, emotional response towards the animated agent, or simulation performance score, there were significant relationships between the participants' level of expertise and their visual attention, emotional responses, and performance outcomes. The results indicated that nursing students had higher visual attention during their interaction with the animated agents. Additionally, nursing students expressed more neutral facial expressions, whereas experienced nurses expressed more emotional facial expressions towards the animated agents.
    The simulation performance scores indicated that nursing students obtained higher scores in the pain assessment and management task than experienced nurses. Both groups of participants had a positive perception of the animated agents' persona

    Can You Hear What I See? Nonverbal Communication and the Changing Face of TML

    Business training and education are changing. Organizations have experienced dramatic changes in their structure, their competitive environment, and the demographics and demands of their employees. As a result, organizations are seeking new and innovative ways to train employees. At the same time, the evolution of technology-mediated learning (TML) tools has resulted in flexible, interactive, engaging learning technologies that promote experiential learning, analytical thinking, and problem solving. Simulation-based technology-mediated learning (SimTML) tools are gaining popularity in practice. SimTML provides lifelike environments built around animated pedagogical agents (APAs) that employ nonverbal communication in their interaction with the user, creating a lifelike, face-to-face interaction between the user and the APA. This paper explores current SimTML technology and how we interact with learning technology, and provides selection and evaluation principles for organizations to use when evaluating SimTML tools for their own training programs

    The Design And Evaluation Of A Video Game To Help Train Perspective-taking And Empathy In Children With Autism Spectrum Disorder

    This paper discusses the design, implementation, and evaluation of a serious game intended to reinforce applied behavior analysis (ABA) techniques used with children with autism spectrum disorder (ASD) by providing a low-cost and easily accessible supplement to traditional methods. Past and recent research strongly supports the use of computer-assisted instruction in the education of individuals with ASD (Moore & Calvert, 2000; Noor, Shahbodin, & Pee, 2012). Computer games have been shown to boost confidence and provide calming mechanisms (Griffiths, 2003) while offering a safe environment for social exploration and learning (Moore, Cheng, McGrath, & Powell, 2005). Games increase children's motivation and thus the rate of learning in computer-mediated environments (Moore & Calvert, 2000). Furthermore, children with ASD are able to understand basic emotions and facial expressions in avatars more easily than in real-world interactions (Moore, Cheng, McGrath, & Powell, 2005). Perspective-taking (also known as role-taking) has been shown to be a crucial component of and antecedent to empathy (Gomez-Becerra, Martin, Chavez-Brown, & Greer, 2007; Peng, Lee, & Heeter, 2010). Though symptoms vary across children with ASD, perspective-taking and empathy have been shown to be limited across a wide spectrum of individuals with ASD and Asperger's disorder (Gomez-Becerra, Martin, Chavez-Brown, & Greer, 2007). A game called WUBeeS was developed to aid young children with ASD in perspective-taking and empathy by placing the player in the role of caregiver to a virtual avatar. It is hypothesized that, by playing this game over a series of trials, children with ASD will show an increase in the ability to discriminate emotions, provide appropriate responses to basic needs (e.g., feeding the avatar when it is hungry), and communicate more clearly about emotions

    Human-centred design methods : developing scenarios for robot assisted play informed by user panels and field trials

    Original article can be found at: http://www.sciencedirect.com/ Copyright Elsevier. Peer reviewed.
    This article describes the user-centred development of play scenarios for robot-assisted play, as part of the multidisciplinary IROMEC project, which develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children's development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human–robot interactions

    Preface
