2 research outputs found

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become part of our daily lives. It is therefore increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature across fields ranging from information systems to social neuroscience. I critically analyze both empirical studies that directly measure anthropomorphism and those that refer to it without a corresponding measurement. Through a grounded-theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions vary with individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, whether through non-verbal behavior such as peer-to-peer mirroring or through verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users’ self-perceptions, their perceptions of the technology, how they interact with the technology, and their performance. Examples include changes in a user’s trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users’ perceived agency and in their self- and social identity, much as in interactions between humans. I then critically examine current theories of anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism proposing that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared-agency effects or changing the user’s social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.

    The persuasiveness of humanlike computer interfaces varies more through narrative characterization than through the uncanny valley

    Just as physical appearance affects persuasion and compliance in human communication, it may also bias the processing of information from avatars, computer-animated characters, and other computer interfaces with faces. Although the most persuasive of these interfaces are often the most humanlike, they also incur the greatest risk of falling into the uncanny valley, the loss of empathy associated with eerily human characters. The uncanny valley could delay the acceptance of humanlike interfaces in everyday roles. To determine the extent to which the uncanny valley affects persuasion, two experiments were conducted online with undergraduates from Indiana University. The first experiment (N = 426) presented an ethical dilemma followed by the advice of an authority figure. The authority was manipulated in three ways: depiction (recorded or animated), motion quality (smooth or jerky), and recommendation (disclose or refrain from disclosing sensitive information). Of these, only the recommendation changed opinions about the dilemma, even though the animated depiction was rated eerier than the human depiction. These results indicate that compliance with an authority persists even when the authority is replaced by a realistic computer-animated double. The second experiment (N = 311) assigned one of two dilemmas in professional ethics involving the fate of a humanlike character. In addition to the dilemma, there were three manipulations of the character’s human realism: depiction (animated human or humanoid robot), voice (recorded or synthesized), and motion quality (smooth or jerky). In one dilemma, decreasing depiction realism or increasing voice realism increased eeriness; in the other, increasing depiction realism decreased perceived competence. In both dilemmas, however, realism had no significant effect on the decision to punish the character. Instead, willingness to punish was predicted in both dilemmas by trustworthiness established through narrative characterization. Together, the experiments demonstrate both direct and indirect effects of narrative on responses to humanlike interfaces: the effects of human realism are inconsistent across different interactions, and the effects of the uncanny valley may be suppressed through narrative characterization.