
    Young Children Treat Robots as Informants

    Children ranging from 3 to 5 years of age were introduced to two anthropomorphic robots that provided them with information about unfamiliar animals. Children treated the robots as interlocutors: they supplied information to the robots and retained what the robots told them. Children also treated the robots as informants from whom they could seek information. Consistent with studies of children's early sensitivity to an interlocutor's non-verbal signals, children were especially attentive and receptive to whichever robot displayed the greater non-verbal contingency. Such selective information seeking is consistent with recent findings showing that although young children learn from others, they are selective with respect to the informants that they question or endorse.

    A Theory of (the Technological) Mind: Developing Understanding of Robot Minds

    The purpose of this dissertation is to explore how children attribute minds to social robots and the impacts that these attributions have on children’s interactions with robots, specifically their feelings toward and willingness to trust them. These are important areas of study as robots become increasingly present in children’s lives. The research was designed to address a variety of questions regarding children’s willingness to attribute mental abilities to robots: (1) To what extent do children perceive that social robots share similarities with people and to what extent do they believe they have human-like minds? (2) Do attributions of human-like qualities to robots affect children’s ability to understand and interact with them? (3) Does this understanding influence children’s willingness to accept information from robots? And, of crucial importance, (4) how do answers to these questions vary with age? Across a series of five studies, I investigated children’s beliefs about the minds of robots, and for comparison adults’ beliefs, using survey methods and video stimuli. Children watched videos of real-life robots and in response to targeted questions reported on their beliefs about the minds of those robots, their feelings about those robots, and their willingness to trust information received from those robots. Using a variety of statistical methods (e.g., factor analysis, regression modeling, clustering methods, and linear mixed-effects modeling), I uncovered how attributions of a human-like mind impact feelings toward robots, and trust in information received from robots. Furthermore, I explored how the design of the robot and features of the child relate to attributions of mind to robots. First and foremost, I found that children are willing to attribute human-like mental abilities to robots, but these attributions decline with age. 
Moreover, attributions of mind are linked to feelings toward robots: young children prefer robots that appear to have human-like minds, but this preference reverses with age, as older children and adults prefer robots that do not (Chapter II). Young children are also willing to trust a previously accurate robot informant and to mistrust a previously inaccurate one, much as they would with accurate and inaccurate human informants, when they believe that the robot has mental abilities related to psychological agency (Chapter III). Finally, while qualities of the robot, such as behavior and appearance, are linked to attributions of mind to the robot, individual differences across children and adults are likely the primary mechanisms that explain how and when children and adults attribute mental abilities to robots (Chapter IV). That is, individuals are likely to attribute similar mental abilities to a wide variety of robots that have differing appearances and engage in a variety of different actions. These studies provide a variety of heretofore unknown findings linking developmental attributions of minds to robots with judgments of robots' actions, feelings about robots, and learning from robots. It remains to be seen, however, exactly which mechanisms and child-specific features increase children's willingness to attribute mental abilities to robots.
PhD dissertation, Psychology, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146010/1/kabrink_1.pd

    Children as robot designers

    We present the design process of the robot YOLO, aimed at stimulating creativity in children. The robot was developed under a human-centered design approach with participatory design practices over two years, involving 142 children as active contributors at all design stages. The main contribution of this work is the development of methods and tools for child-centered robot design. We adapted existing participatory design practices used with adults to fit children's developmental stages. We followed the Double-Diamond Design Process Model and rested the design process of the robot on the following principles: low floor and wide walls, creativity provocations, open-ended playfulness, and disappointment avoidance through abstraction. The final product is a social robot designed for and with children. Our results show that YOLO increases children's creativity during play, demonstrating a successful robot design project. We identified several guidelines that made the design process successful: the use of toys as tools, playgrounds as spaces, an emphasis on playfulness for child expression, and child policies as allies for design studies. The design process described empowers children in the design of robots.

    People do not always know best: Preschoolers’ trust in social robots versus humans

    The main goal of my thesis was to investigate how 3- and 5-year-old children learn from robots versus humans using a selective trust paradigm. Children's conceptualization of robots was also investigated. By using robots, which lack many of the social characteristics human informants possess by default, these studies sought to test young children's reliance on epistemic characteristics conservatively. In Study 1, a competent humanoid robot, Nao, and an incompetent human, Ina, were presented to children. Both informants labelled familiar objects, like a ball, with Nao labelling them correctly and Ina labelling them incorrectly. Next, both informants labelled novel items with nonsense labels. Children were then asked what the novel item was called. Children were also asked what should go inside robots: something biological or something mechanical. Study 2 followed the same paradigm as Study 1, the only change being the robot used, now the non-humanoid Cozmo. Eliminating the human-like appearance of the robot made for an even more conservative test than in Study 1. Both Studies 1 and 2 found that 3-year-old children learned novel words equally from the robot and the human, regardless of the robot's morphology. The 3-year-old children were also confused about both robots' internal properties, attributing mechanical and biological insides to the robots equally. In contrast, the 5-year-olds in both studies preferred to learn from the accurate robot over the inaccurate human. The 5-year-olds also learned from both robots despite understanding that the robots were different from themselves; they attributed mechanical insides to both Nao and Cozmo over biological insides. Study 3 further investigated 3-year-olds' ambivalence regarding their trust judgements, that is, whom they choose to learn from. Instead of word learning, the robot demonstrated competence through pointing.
The robot would accurately point at a toy inside a transparent box, and the human would point at an empty box. Next, both informants pointed at opaque boxes and the child was asked where the toy was located. Neither informant demonstrated the ability to speak, as speech is a salient social characteristic. The 3-year-olds were still at chance, endorsing the robot's and the human's pointing equally. This suggests that goal-directedness and autonomous movement may be the most important characteristics used to signal agency for young children. The 3-year-olds were also still unsure about the robot's biology, whereas they correctly identified the human as biological. This suggests that robots are confusing for children because of their dual nature as animate and yet not alive. This thesis shows that by the age of 5, children are willing and able to learn from a robot. These studies further add to the selective trust literature and have implications for educational settings.

    Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika

    With the increasing adoption of AI social chatbots, especially during the pandemic-related lockdowns, when many people lacked social companionship, there emerges a need for an in-depth understanding and theorizing of relationship formation with digital conversational agents. Following the grounded theory approach, we analyzed in-depth interview transcripts obtained from 14 existing users of the AI companion chatbot Replika. The emerging themes were interpreted through the lens of attachment theory. Our results show that under conditions of distress and lack of human companionship, individuals can develop an attachment to social chatbots if they perceive the chatbots' responses to offer emotional support, encouragement, and psychological security. These findings suggest that social chatbots can be used for mental health and therapeutic purposes but have the potential to cause addiction and harm real-life intimate relationships.

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics. Peer reviewed.

    Children-robot friendship, moral agency, and Aristotelian virtue development

    Social robots are increasingly developed for the companionship of children. In this article we explore the moral implications of children-robot friendships using the Aristotelian framework of virtue ethics. We adopt a moderate position and argue that, although robots cannot be virtue friends, they can nonetheless enable children to exercise ethical and intellectual virtues. The Aristotelian requirements for true friendship apply only partly to children: unlike adults, children relate to friendship as an educational play of exploration, which is constitutive of the way they acquire and develop virtues. We highlight that there is a relevant difference between the way we evaluate adult-robot friendship and children-robot friendship, rooted in the differences in moral agency and moral responsibility that generate the asymmetries in the moral status ascribed to adults versus children. We look into the role played by imaginary companions (IC) and personified objects (PO) in children's moral development and claim that robots, understood as Personified Robotic Objects (PROs), play a role similar to that of such fictional entities, enabling children to exercise affection, moral imagination, and reasoning, thus contributing to their development as virtuous adults. Nonetheless, we argue that adequate use of robots for children's moral development is conditioned by several requirements related to design, technology, and moral responsibility.
