3 research outputs found

    The Benefits of Robot Deception in Search and Rescue: Computational Approach for Deceptive Action Selection via Case-Based Reasoning

    DOI: 10.1109/SSRR.2015.7443002
    As the use of autonomous rescue robots in search and rescue (SAR) grows, so does the chance of interaction between rescue robots and human victims. When autonomous rescue robots are deployed in SAR, it is important that they handle human victims' emotions sensitively. Deception, as used by human rescuers, could potentially be used by robots to manage victims' fear and shock. In this paper, we introduce robotic deception in SAR contexts and present a novel computational approach for an autonomous rescue robot's deceptive action selection mechanism.
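    The abstract names case-based reasoning (CBR) as the selection mechanism but does not describe the implementation. As a rough illustration only, and not the paper's actual method, the Python sketch below shows the retrieve-and-reuse steps of a generic CBR cycle applied to action selection; every name and feature in it (Case, victim_fear, injury_severity, the similarity function) is an invented assumption.

    ```python
    from dataclasses import dataclass


    @dataclass
    class Case:
        """A stored SAR situation and the action chosen in it.
        All fields are illustrative assumptions, not taken from the paper."""
        victim_fear: float      # estimated fear level, 0..1
        injury_severity: float  # estimated injury severity, 0..1
        action: str             # action taken in that situation


    def similarity(case: Case, fear: float, severity: float) -> float:
        """Inverse-distance similarity over the two situation features."""
        return 1.0 / (1.0 + abs(case.victim_fear - fear)
                      + abs(case.injury_severity - severity))


    def select_action(case_base: list[Case], fear: float, severity: float) -> str:
        """Retrieve the most similar past case and reuse its action:
        the 'retrieve' and 'reuse' steps of the classic CBR cycle."""
        best = max(case_base, key=lambda c: similarity(c, fear, severity))
        return best.action


    # Example: a tiny case base and a new situation.
    case_base = [
        Case(victim_fear=0.9, injury_severity=0.8,
             action="understate injury severity"),
        Case(victim_fear=0.2, injury_severity=0.3,
             action="report status truthfully"),
    ]
    print(select_action(case_base, fear=0.85, severity=0.7))
    # -> "understate injury severity"
    ```

    A full CBR system would also include the revise and retain steps (adapting the retrieved action and storing the outcome as a new case), which are omitted here for brevity.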

    A deceptive robot referee in a multiplayer gaming environment

    No full text

    The persuasiveness of humanlike computer interfaces varies more through narrative characterization than through the uncanny valley

    Indiana University-Purdue University Indianapolis (IUPUI)
    Just as physical appearance affects persuasion and compliance in human communication, it may also bias the processing of information from avatars, computer-animated characters, and other computer interfaces with faces. Although the most persuasive of these interfaces are often the most humanlike, they incur the greatest risk of falling into the uncanny valley, the loss of empathy associated with eerily human characters. The uncanny valley could delay the acceptance of humanlike interfaces in everyday roles. To determine the extent to which the uncanny valley affects persuasion, two experiments were conducted online with undergraduates from Indiana University.

    The first experiment (N = 426) presented an ethical dilemma followed by the advice of an authority figure. The authority was manipulated in three ways: depiction (recorded or animated), motion quality (smooth or jerky), and recommendation (disclose or refrain from disclosing sensitive information). Of these, only the recommendation changed opinion about the dilemma, even though the animated depiction was eerier than the human depiction. These results indicate that compliance with an authority persists even when a realistic computer-animated double is used.

    The second experiment (N = 311) assigned one of two dilemmas in professional ethics involving the fate of a humanlike character. In addition to the dilemma, there were three manipulations of the character's human realism: depiction (animated human or humanoid robot), voice (recorded or synthesized), and motion quality (smooth or jerky). In one dilemma, decreasing depiction realism or increasing voice realism increased eeriness. In the other, increasing depiction realism decreased perceived competence. In both dilemmas, however, realism had no significant effect on whether to punish the character; instead, the willingness to punish was predicted by narratively characterized trustworthiness.

    Together, the experiments demonstrate both direct and indirect effects of narratives on responses to humanlike interfaces. The effects of human realism are inconsistent across different interactions, and the effects of the uncanny valley may be suppressed through narrative characterization.