
    Facilitation of human empathy through self-disclosure of anthropomorphic agents

    As AI technologies progress, the social acceptance of AI agents, including intelligent virtual agents and robots, is becoming ever more important for broader applications of AI in human society. One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. When humans empathize with agents, they take positive and kind actions toward them, and empathizing makes it easier for humans to accept agents. In this study, we focused on self-disclosure from agents to humans in order to realize anthropomorphic agents that elicit empathy from humans. We then experimentally investigated the possibility that an agent's self-disclosure facilitates human empathy. We formulated hypotheses and experimentally analyzed and discussed the conditions under which humans feel more empathy toward agents. The experiment used a three-way mixed design, with the factors being the agents' appearance (human, robot), self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and empathy measured before and after a video stimulus. An analysis of variance was performed on data from 576 participants. As a result, we found that the appearance factor had no main effect, and that self-disclosure highly relevant to the scenario used facilitated significantly more human empathy. We also found that the absence of self-disclosure suppressed empathy. These results support our hypotheses.

    Comment: 20 pages, 8 figures, 2 tables, submitted to PLOS ONE Journal
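    The abstract reports an analysis of variance over a mixed design. As a minimal illustration of the core computation, the sketch below runs a one-way ANOVA over the three self-disclosure conditions on synthetic, illustrative empathy-change scores (the group names, means, and sample sizes are assumptions for the example, not the study's data; the actual analysis was a three-way mixed ANOVA, which this simplified sketch does not reproduce).

    ```python
    import random

    random.seed(0)

    # Hypothetical synthetic empathy-change scores for the three
    # self-disclosure conditions (illustrative values, not study data).
    groups = {
        "high_relevance": [random.gauss(1.0, 1.0) for _ in range(30)],
        "low_relevance":  [random.gauss(0.4, 1.0) for _ in range(30)],
        "none":           [random.gauss(-0.3, 1.0) for _ in range(30)],
    }

    def one_way_anova_F(groups):
        """Return the F statistic for a one-way ANOVA across groups."""
        all_vals = [x for g in groups.values() for x in g]
        grand_mean = sum(all_vals) / len(all_vals)
        # Between-group sum of squares: group sizes times squared
        # deviations of group means from the grand mean.
        ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                         for g in groups.values())
        # Within-group sum of squares: squared deviations of each
        # observation from its own group mean.
        ss_within = sum((x - sum(g) / len(g)) ** 2
                        for g in groups.values() for x in g)
        df_between = len(groups) - 1
        df_within = len(all_vals) - len(groups)
        return (ss_between / df_between) / (ss_within / df_within)

    F = one_way_anova_F(groups)
    n_total = sum(len(g) for g in groups.values())
    print(f"F({len(groups) - 1}, {n_total - len(groups)}) = {F:.2f}")
    ```

    A large F here would indicate that mean empathy change differs across the self-disclosure conditions, which is the kind of main effect the abstract reports for the self-disclosure factor.
    
    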