1,598 research outputs found

    What Does it Take to be a Social Agent?

    The aim of this paper is to present a philosophically inspired list of minimal requirements for social agency that may serve as a guideline for social robotics. Such a list does not aim at detailing the cognitive processes behind sociality but at providing an implementation-free characterization of the capacities and skills associated with sociality. We employ the notion of the intentional stance as a methodological ground for studying intentional agency and extend it into a social stance that takes into account social features of behavior. We discuss the basic requirements of sociality and different ways to understand them, and suggest some potential benefits of understanding them in an instrumentalist way in the context of social robotics. Peer reviewed.

    The Contribution of Society to the Construction of Individual Intelligence

    It is argued that society is a crucial factor in the construction of individual intelligence; in other words, it is important that intelligence be socially situated in a way analogous to the physical situatedness of robots. Evidence that this may be the case is drawn from developmental linguistics, the social intelligence hypothesis, the complexity of society, the need for self-reflection, and autism. The consequences for the development of artificial social agents are briefly considered. Finally, some challenges for research into socially situated intelligence are highlighted.

    Negative Consequences of Anthropomorphized Technology: A Bias-Threat-Illusion Model

    Attributing human-like traits to information technology (IT), which leads to what is called anthropomorphized technology (AT), is increasingly common among users of IT. Previous IS research has offered varying perspectives on AT, although it primarily focuses on the positive consequences. This paper aims to clarify the construct of AT and proposes a “bias–threat–illusion” model to classify the negative consequences of AT. Drawing on the “three-factor theory of anthropomorphism” from social psychology and integrating self-regulation theory, we propose that failing to regulate the use of elicited agent knowledge and to control the intensified psychological needs (i.e., sociality and effectance) when interacting with AT leads to negative consequences: “transferring human bias,” “inducing threat to human agency,” and “creating illusionary relationship.” Based on this bias–threat–illusion model, we propose theory-driven remedies to attenuate these negative consequences. We conclude with implications for IS theories and practice.

    Thinking Technology as Human: Affordances, Technology Features, and Egocentric Biases in Technology Anthropomorphism

    Advanced information technologies (ITs) are increasingly assuming tasks that have previously required human capabilities, such as learning and judgment. What drives this technology anthropomorphism (TA), or the attribution of humanlike characteristics to IT? What is it about users, IT, and their interactions that influences the extent to which people think of technology as humanlike? While TA can have positive effects, such as increasing user trust in technology, what are its negative consequences? To provide a framework for addressing these questions, we advance a theory of TA that integrates the general three-factor anthropomorphism theory in social and cognitive psychology with the needs-affordances-features perspective from the information systems (IS) literature. The theory we construct helps to explain and predict which technological features and affordances are likely (1) to satisfy users’ psychological needs and (2) to lead to TA. More importantly, we problematize some negative consequences of TA. Technology features and affordances contributing to TA can intensify users’ anchoring on their elicited agent knowledge and psychological needs and can also weaken the adjustment process in TA under cognitive load. The intensified anchoring and weakened adjustment processes increase egocentric biases that lead to negative consequences. Finally, we propose a research agenda for TA and egocentric biases.

    Triggering social interactions: chimpanzees respond to imitation by a humanoid robot and request responses from it

    Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested ‘social’ responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, like in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.

    Chapter 13 Haptic Creatures

    Since the mid-1990s, collaborations between entertainment industries and artificial intelligence researchers in Japan have produced a growing interest in modeling affect and emotion for use in mass-produced social robots. Robot producers and marketers reason that such robot companions can provide comfort, healing (iyashi), and intimacy in light of the attenuating social bonds and increased socioeconomic stress characteristic of Japanese society since the collapse of the country’s bubble economy in the early 1990s. While many of these robots with so-called “artificial emotional intelligence” are equipped with rudimentary capacities to “read” predefined human emotions through mechanisms such as facial expression recognition, a new category of companion robots is more experimental. These robots do not interpret human emotion through affect-sensing software but rather invite human-robot interaction through affectively pleasing forms of haptic feedback. These new robots are called haptic creatures: robot companions designed to deliver a sense of comforting presence through a combination of animated movements and healing touch. Integrating historical analysis with ethnographic interviews with new users of these robots, and focusing in particular on the cat-like cushion robot Qoobo, this chapter argues that while companion robots are designed in part to understand specific human emotions, haptic creatures are created as experimental devices that can generate new and unexpected pleasures of affective care unique to human-robot relationships. It suggests that this distinction is critical for understanding and evaluating how corporations seek to use human-robot affect as a means to deliver care to consumers while also researching and building new markets for profit maximization.
