    I Call Alexa to the Stand : The Privacy Implications of Anthropomorphizing Virtual Assistants Accompanying Smart-Home Technology

    This Note offers a solution to the unique privacy issues posed by the increasingly humanlike interactions users have with virtual assistants, such as Amazon's Alexa, which accompany smart-home technology. These interactions almost certainly result in the users engaging in the cognitive phenomenon of anthropomorphism, more specifically an assignment of agency. This is a phenomenon that has heretofore been ignored in the legal context, but both the rapidity of technological advancement and the inadequacy of current applicable legal doctrine necessitate its consideration now. Since users view these anthropomorphized virtual assistants as persons rather than machines, the law should treat them as such. To accommodate this reality, either the courts or Congress should grant them legal personhood. This can be accomplished through the application of an objective test that is satisfied by the establishment of social and moral connections with these virtual assistants. Further, due to the paramount privacy concerns resulting from this technology's use within the home, courts should establish a new privilege that protects the communications between users and their virtual assistants.

    Anthropomorphism of Intelligent Personal Assistants (IPAs): Antecedents and Consequences

    Based on the distinctively anthropomorphic features of intelligent personal assistants (IPAs), this paper proposes a theoretical model to investigate the antecedents and consequences of IPA anthropomorphism based on three-factor theory. Specifically, it is hypothesized that the anthropomorphic features of IPAs, namely synthesized speech quality, autonomy, sociability, and personality, positively affect IPA anthropomorphism. Meanwhile, IPA anthropomorphism positively influences IPA self-efficacy and social connection. IPA self-efficacy and social connection, in turn, are positively related to intention to explore IPAs. Scales will be developed and data will be collected through an online survey. A structural equation model (SEM) will then be applied to validate the model.

    Negative Consequences of Anthropomorphized Technology: A Bias-Threat-Illusion Model

    Attributing human-like traits to information technology (IT), leading to what is called anthropomorphized technology (AT), is increasingly common among users of IT. Previous IS research has offered varying perspectives on AT, although it primarily focuses on the positive consequences. This paper aims to clarify the construct of AT and proposes a "bias-threat-illusion" model to classify the negative consequences of AT. Drawing on the "three-factor theory of anthropomorphism" from social psychology and integrating self-regulation theory, we propose that failing to regulate the use of elicited agent knowledge and to control the intensified psychological needs (i.e., sociality and effectance) when interacting with AT leads to negative consequences: "transferring human bias," "inducing threat to human agency," and "creating illusionary relationships." Based on this bias-threat-illusion model, we propose theory-driven remedies to attenuate the negative consequences. We conclude with implications for IS theory and practice.

    Thinking Technology as Human: Affordances, Technology Features, and Egocentric Biases in Technology Anthropomorphism

    Advanced information technologies (ITs) are increasingly assuming tasks that have previously required human capabilities, such as learning and judgment. What drives this technology anthropomorphism (TA), or the attribution of humanlike characteristics to IT? What is it about users, IT, and their interactions that influences the extent to which people think of technology as humanlike? While TA can have positive effects, such as increasing user trust in technology, what are its negative consequences? To provide a framework for addressing these questions, we advance a theory of TA that integrates the general three-factor anthropomorphism theory in social and cognitive psychology with the needs-affordances-features perspective from the information systems (IS) literature. The theory we construct helps to explain and predict which technological features and affordances are likely: (1) to satisfy users' psychological needs, and (2) to lead to TA. More importantly, we problematize some negative consequences of TA. Technology features and affordances contributing to TA can intensify users' anchoring on their elicited agent knowledge and psychological needs and can also weaken the adjustment process in TA under cognitive load. The intensified anchoring and weakened adjustment processes increase egocentric biases that lead to negative consequences. Finally, we propose a research agenda for TA and egocentric biases.

    Naturalizing Anthropomorphism: Behavioral Prompts to Our Humanizing of Animals

    Anthropomorphism is the use of human characteristics to describe or explain nonhuman animals. In the present paper, we propose a model for a unified study of such anthropomorphizing. We bring together previously disparate accounts of why and how we anthropomorphize and suggest a means to analyze anthropomorphizing behavior itself. We introduce an analysis of bouts of dyadic play between humans and a heavily anthropomorphized animal, the domestic dog. Four distinct patterns of social interaction recur in successful dog–human play: directed responses by one player to the other, indications of intent, mutual behaviors, and contingent activity. These findings serve as a preliminary answer to the question, "What behaviors prompt anthropomorphisms?" An analysis of anthropomorphizing is potentially useful in establishing a scientific basis for this behavior, in explaining its endurance, in the design of "lifelike" robots, and in the analysis of human interaction. Finally, the relevance of this developing scientific area to contemporary debates about anthropomorphizing behavior is discussed.

    On the Matter of Robot Minds

    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call "nomological behaviorism." The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed, it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally investigate whether the folk think that certain (hypothetical) robots made of silicon and steel would have the same conscious states as certain familiar biological beings with the same patterns of dispositions to peripheral behavior as the robots. Our findings provide evidence that the folk largely reject the view that silicon-based robots would have the sensations that they, the folk, attribute to the biological beings in question.

    What’s In A Name?: Preschoolers Treat A Bug As A Moral Agent When It Has A Proper Name

    Children encounter anthropomorphized objects daily: in advertisements, media, and books. Past research suggests that features like eyes, or displaying intentional, goal-directed behaviors, increase how humanly non-human agents are perceived. When adults and children anthropomorphize, they become more socially connected and empathetic towards those entities. In advertising, this anthropomorphic effect is used to get people to connect with the product. This thesis explores what effect anthropomorphizing might have on preschoolers' moral reasoning about those entities, and suggests that it increases the likelihood that children will explain non-human agents' harmful actions in a moral sense. Specifically, the present study examines the anthropomorphic effect of a proper name on moral reasoning in preschoolers. Four- and 5-year-olds who heard a story about a caterpillar named "Pete" who was killing plants in their garden were more likely than children who heard about a "caterpillar" to think it was appropriate to squish it. We argue that because children believed Pete could experience the world (e.g., emotions) and had agency (e.g., intentional action) more so than an unnamed caterpillar, Pete could also be held morally accountable for its harmful actions. A proper name has an interesting effect on preschoolers' moral reasoning about non-human agents.