
    Improving of Robotic Virtual Agent's errors that are accepted by reaction and human's preference

    One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. In this study, we focused on a task between an agent and a human in which the agent makes a mistake. To investigate the factors that matter when designing a robotic agent that can promote human empathy, we experimentally examined the hypothesis that the agent's reaction and the human's preference affect human empathy and acceptance of the agent's mistakes. The experiment used a four-condition, three-factor mixed design, with agent reaction, the agent's body color (selected according to the human's preference), and pre- vs. post-task as factors. The results showed that agent reaction and human preference did not affect empathy toward the agent, but did increase acceptance of the agent's mistakes. We also found that empathy for the agent decreased when the agent made a mistake on the task. The results of this study suggest a way to control impressions of a robotic virtual agent's behaviors, which are increasingly used in society.

    Comment: 13 pages, 4 figures, 5 tables, submitted to ICSR 2023. arXiv admin note: text overlap with arXiv:2206.0612
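
    The "four-condition, three-factor mixed design" above combines between-subjects factors (agent reaction, body color chosen by preference or assigned) with a within-subjects factor (pre- vs. post-task empathy measurement). As a minimal sketch of how data from such a design are commonly laid out for analysis, assuming hypothetical column and level names that are not taken from the paper:

        # Hypothetical long-format layout for a three-factor mixed design:
        # between-subjects factors stay constant per participant, while the
        # within-subjects factor (measurement phase) repeats for each one.
        import random

        import pandas as pd

        random.seed(0)
        rows = []
        for subject in range(1, 5):  # toy sample; the study used more participants
            reaction = "reacts" if subject % 2 else "no_reaction"  # between-subjects
            color = "preferred" if subject <= 2 else "assigned"    # between-subjects
            for phase in ("pre_task", "post_task"):                # within-subjects
                rows.append({
                    "subject": subject,
                    "reaction": reaction,
                    "color": color,
                    "phase": phase,
                    "empathy": round(random.uniform(1, 7), 1),  # placeholder rating
                })

        df = pd.DataFrame(rows)
        print(df)

    Keeping one row per participant-phase pair is what lets a repeated-measures analysis separate within-subject change from between-group differences.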

    Designing the future of education: From tutor robots to intelligent playthings

    Robots exhibiting social behaviors have shown promising effects on children’s education. Like many analogue and digital educational devices before it, robotic technology brings concerns along with opportunities for innovation. Tutor robots in the classroom are not meant to replace teachers, but to complement existing curricula with personalized learning experiences and one-on-one tutoring. The educational paradigm of tutor robots has so far been limited to replicating models from formal education, and many technical, ethical, and design challenges remain before this paradigm can move forward. Moreover, the tutor-robot paradigm de facto perpetuates the exclusion of playful learning by doing with peers and objects, which is arguably the most important aspect of children’s upbringing and yet the most overlooked in formal education. Increasingly, robotics applications for children’s education are shifting from a tutor-like paradigm to an intelligent-playthings paradigm: promoting active, open-ended, and independent learning through play with peers. This article is an invitation to reflect on the role that robotic technology, especially tutor robots and intelligent playthings, could play in children’s learning and development. The complexity of designing for children’s learning highlights the need to start a trans-disciplinary discussion to shape the future of education and foster a positive societal impact of robots on children’s learning.

    Expression of Grounded Affect: How Much Emotion Can Arousal Convey?

    © 2020 Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of Hickton L., Lewis M., Cañamero L. (2020) Expression of Grounded Affect: How Much Emotion Can Arousal Convey? In: Mohammad A., Dong X., Russo M. (eds) Towards Autonomous Robotic Systems. TAROS 2020. Lecture Notes in Computer Science, vol 12228. Springer, Cham. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-63486-5_26.

    In this paper we consider how non-humanoid robots can communicate their affective state via bodily forms of communication (kinesics), and the extent to which this influences how humans respond to them. We propose a simple model of grounded affect and kinesic expression before presenting the qualitative findings of an exploratory study (N=9), during which participants were interviewed after watching expressive and non-expressive hexapod robots perform different ‘scenes’. A summary of these interviews is presented and a number of emerging themes are identified and discussed. Whilst our findings suggest that the expressive robot did not evoke significantly greater empathy or altruistic intent in humans than the control robot, the expressive robot stimulated greater desire for interaction and was also more likely to be attributed with emotion.

    Robots can defuse high-intensity conflict situations


    Facilitation of human empathy through self-disclosure of anthropomorphic agents

    As AI technologies progress, social acceptance of AI agents, including intelligent virtual agents and robots, is becoming ever more important for broader applications of AI in human society. One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. By empathizing, humans take positive and kind actions toward agents, and empathizing makes it easier for humans to accept agents. In this study, we focused on self-disclosure from agents to humans in order to realize anthropomorphic agents that elicit empathy from humans. We then experimentally investigated whether an agent's self-disclosure facilitates human empathy. We formulated hypotheses and experimentally analyzed and discussed the conditions under which humans have more empathy for agents. The experiment used a three-factor mixed design, with the factors being the agent's appearance (human, robot), self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and empathy before and after a video stimulus. An analysis of variance was performed using data from 576 participants. We found that the appearance factor had no main effect, and that self-disclosure highly relevant to the scenario used facilitated more human empathy, with a statistically significant difference. We also found that no self-disclosure suppressed empathy. These results support our hypotheses.

    Comment: 20 pages, 8 figures, 2 tables, submitted to PLOS ONE Journal
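
    For the analysis-of-variance step, the sketch below shows how a mixed ANOVA of this kind is often run in Python with the pingouin library, simplified to one between-subjects factor (self-disclosure) and one within-subjects factor (pre/post measurement); the data and names are synthetic assumptions, not the study's materials:

        # Simplified mixed-ANOVA sketch on synthetic data (one between-subjects
        # and one within-subjects factor); not the authors' analysis code.
        import numpy as np
        import pandas as pd
        import pingouin as pg

        rng = np.random.default_rng(0)
        levels = ["high_relevance", "low_relevance", "none"]
        rows = []
        for subject in range(120):            # toy sample
            disclosure = levels[subject % 3]  # between-subjects factor
            base = rng.normal(4.0, 1.0)       # per-participant baseline empathy
            for time in ("pre", "post"):      # within-subjects factor
                # Build in a small post-stimulus boost for the high-relevance
                # group so the interaction term has something to detect.
                boost = 0.5 if (time == "post" and disclosure == "high_relevance") else 0.0
                rows.append({
                    "subject": subject,
                    "disclosure": disclosure,
                    "time": time,
                    "empathy": base + boost + rng.normal(0.0, 0.5),
                })

        df = pd.DataFrame(rows)
        aov = pg.mixed_anova(data=df, dv="empathy", within="time",
                             subject="subject", between="disclosure")
        print(aov.round(3))

    The resulting table reports F statistics and p-values for the between-subjects effect, the within-subjects effect, and their interaction, which is the shape of the results the abstract summarizes.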

    The case of classroom robots: teachers’ deliberations on the ethical tensions

    Get PDF
    Robots are increasingly being studied for use in education. It is expected that robots will have the potential to facilitate children’s learning and function autonomously within real classrooms in the near future. Previous research has raised the importance of designing acceptable robots for different practices. In parallel, scholars have raised ethical concerns surrounding children interacting with robots. Drawing on a Responsible Research and Innovation perspective, our goal is to move away from research concerned with designing features that will render robots more socially acceptable by end users toward a reflective dialogue whose goal is to consider the key ethical issues and long-term consequences of implementing classroom robots for teachers and children in primary education. This paper presents the results from several focus groups conducted with teachers in three European countries. Through a thematic analysis, we provide a theoretical account of teachers’ perspectives on classroom robots pertaining to privacy, robot role, effects on children and responsibility. Implications for the field of educational robotics are discussed.