
    A Theory of (the Technological) Mind: Developing Understanding of Robot Minds

    The purpose of this dissertation is to explore how children attribute minds to social robots and the impacts that these attributions have on children’s interactions with robots, specifically their feelings toward and willingness to trust them. These are important areas of study as robots become increasingly present in children’s lives. The research was designed to address a variety of questions regarding children’s willingness to attribute mental abilities to robots: (1) To what extent do children perceive that social robots share similarities with people, and to what extent do they believe they have human-like minds? (2) Do attributions of human-like qualities to robots affect children’s ability to understand and interact with them? (3) Does this understanding influence children’s willingness to accept information from robots? And, of crucial importance, (4) how do answers to these questions vary with age? Across a series of five studies, I investigated children’s beliefs about the minds of robots, and for comparison adults’ beliefs, using survey methods and video stimuli. Children watched videos of real-life robots and, in response to targeted questions, reported on their beliefs about the minds of those robots, their feelings about those robots, and their willingness to trust information received from those robots. Using a variety of statistical methods (e.g., factor analysis, regression modeling, clustering methods, and linear mixed-effects modeling), I uncovered how attributions of a human-like mind impact feelings toward robots and trust in information received from robots. Furthermore, I explored how the design of the robot and features of the child relate to attributions of mind to robots. First and foremost, I found that children are willing to attribute human-like mental abilities to robots, but these attributions decline with age. Moreover, attributions of mind are linked to feelings toward robots: young children prefer robots that appear to have human-like minds, but this preference reverses with age; older children and adults do not share it (Chapter II). Young children are also willing to trust a previously accurate robot informant and mistrust a previously inaccurate one, much as they would with accurate and inaccurate human informants, when they believe that the robot has mental abilities related to psychological agency (Chapter III). Finally, while qualities of the robot, like behavior and appearance, are linked to attributions of mind to the robot, individual differences across children and adults are likely the primary mechanisms that explain how and when children and adults attribute mental abilities to robots (Chapter IV). That is, individuals are likely to attribute similar mental abilities to a wide variety of robots that differ in appearance and engage in a variety of different actions. These studies provide a range of novel findings linking the developmental attributions of minds to robots with judgments of robots’ actions, feelings about robots, and learning from robots. The exact nature of these mechanisms, and of the child-specific features that increase children’s willingness to attribute mental abilities to robots, remains to be seen.
    PhD dissertation, Psychology, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146010/1/kabrink_1.pd
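
    The dissertation names linear mixed-effects modeling among its statistical methods. A minimal, purely illustrative sketch of how such a model could be fit to this kind of repeated-measures data follows; the dataset, column names (rating, age, child_id), and model specification are hypothetical, not taken from the dissertation.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format data: one row per child x robot video,
        # with a mind-attribution rating, the child's age, and a child ID.
        df = pd.read_csv("attributions.csv")

        # Mind-attribution rating predicted by age, with a random intercept
        # per child to account for repeated measures across robot videos.
        model = smf.mixedlm("rating ~ age", df, groups=df["child_id"])
        result = model.fit()
        print(result.summary())  # a negative age coefficient would match the
                                 # reported decline of mind attribution with age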

    The distracted robot: what happens when artificial agents behave like us

    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed to promote interaction and engagement, ranging from its “communicative” abilities to the movements it produces. Whether an artificial agent that can behave like a human boosts the spontaneity and naturalness of interaction, however, is still an open question. Even during interaction with conspecifics, humans rely partially on motion cues when they need to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research has emerged in the context of investigating individuals’ reactions towards robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists, and philosophers, as well as roboticists and engineers. However, HRI research has often been based on explicit measures (e.g., self-report questionnaires, a posteriori interviews), while the more implicit social-cognitive processes elicited during interaction with artificial agents have taken second place behind more qualitative and anecdotal results. The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social cognition processes evoked by artificial agents. Thus, this thesis explored human sensitivity to anthropomorphic characteristics of a humanoid robot's (the iCub robot's) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.

    Uncanny valley effect: A qualitative synthesis of empirical research to assess the suitability of using virtual faces in psychological research

    Virtual faces are increasingly used as stimuli to replace traditional photographs in human face perception studies. However, despite being increasingly human-like and realistic, they still present flaws in their appearance that might elicit eerie feelings in observers, known as the Uncanny Valley (UV) effect. The current systematic review offers a qualitative synthesis of empirical studies investigating observers' subjective experience with virtual compared to real faces, to discuss the challenges that the UV effect poses when virtual faces are used as stimuli to study face perception. Results revealed that virtual faces are judged eerier than real faces. Perception of uncanniness represents a challenge in face perception research, as it has been associated with negative emotions and avoidance behaviors that might influence observers' responses to these stimuli. Observers also perceive virtual faces as deviating more from familiar patterns than real faces do. Lower perceptual familiarity might have several implications for face perception research, as virtual faces might be treated as a category of stimuli distinct from real faces and therefore be processed less efficiently. In conclusion, our findings suggest that researchers should be cautious in using these stimuli to study face perception.

    Is it the real deal? Perception of virtual characters versus humans: an affective cognitive neuroscience perspective

    Recent developments in neuroimaging research support the increased use of naturalistic stimulus material such as film, animations, or androids. These stimuli allow for a better understanding of how the brain processes information in complex situations while maintaining experimental control. While avatars and androids are well suited to study human cognition, they should not be equated with human stimuli. For example, the Uncanny Valley hypothesis theorizes that artificial agents with high human-likeness may evoke feelings of eeriness in the human observer. Here we review whether, when, and how the perception of human-like avatars and androids differs from the perception of humans, and consider how this influences their utilization as stimulus material in social and affective neuroimaging studies. First, we discuss how the appearance of virtual characters affects perception. When stimuli are morphed across categories from non-human to human, the most ambiguous stimuli, rather than the most human-like stimuli, show prolonged classification times and increased eeriness. Human-like to human stimuli, in contrast, show a positive linear relationship with familiarity. Second, we show that expressions of emotions in human-like avatars can be perceived similarly to human emotions, with corresponding behavioral, physiological, and neuronal activations, with the exception of physical dissimilarities. Subsequently, we consider if and when one perceives differences in action representation by artificial agents versus humans. Motor resonance and predictive coding models may account for empirical findings, such as an interference effect on action for observed human-like, naturally moving characters. However, the expansion of these models to explain more complex behavior, such as empathy, still needs to be investigated in more detail. Finally, we broaden our outlook to social interaction, where virtual reality stimuli can be utilized to imitate complex social situations.

    Between Anthropomorphism, Trust, and the Uncanny Valley: a Dual-Processing Perspective on Perceived Trustworthiness and Its Mediating Effects on Use Intentions of Social Robots

    Designing social robots with the aim of increasing their acceptance is crucial for the success of their implementation. However, even though increasing anthropomorphism is often seen as a promising way to achieve this goal, the uncanny valley effect proposes that anthropomorphism can be detrimental to acceptance unless robots are almost indistinguishable from humans. Against this background, we use a dual-processing theory approach to investigate whether an uncanny valley of perceived trustworthiness (PT) can be observed for social robots and how this effect differs between the intuitive and deliberate reasoning systems. The results of an experiment with four conditions and 227 participants provide support for the uncanny valley effect. Furthermore, mediation analyses suggested that use intention decreases through both reduced intuitive and reduced deliberate PT for medium levels of anthropomorphism. However, for high levels of anthropomorphism (indistinguishable from a real human), only intuitive PT determined use intention. Consequently, our results indicate both advantages and pitfalls of anthropomorphic design.
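
    The mediation claim above (perceived trustworthiness mediating the effect of anthropomorphism on use intention) can be illustrated with a simple regression-based bootstrap mediation sketch. This is not the paper's actual procedure, and the variable names anthro, trust_pt, and use_intention are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("robot_trust.csv")  # hypothetical columns: anthro,
                                             # trust_pt, use_intention

        def indirect_effect(data):
            # a-path: anthropomorphism -> perceived trustworthiness
            a = smf.ols("trust_pt ~ anthro", data).fit().params["anthro"]
            # b-path: trustworthiness -> use intention, controlling for anthro
            b = smf.ols("use_intention ~ trust_pt + anthro",
                        data).fit().params["trust_pt"]
            return a * b

        # Percentile bootstrap confidence interval for the indirect effect
        boot = [indirect_effect(df.sample(len(df), replace=True))
                for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"indirect effect: {indirect_effect(df):.3f}, "
              f"95% CI [{lo:.3f}, {hi:.3f}]")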

    ‘Give me a hug’: the effects of touch and autonomy on people's responses to embodied social agents

    Embodied social agents are programmed to display human-like social behaviour to make interacting with these agents more intuitive. It is not yet clear to what extent people respond to agents' social behaviours. One example is touch: despite robots' embodiment and increasing autonomy, the effect of communicative touch has been a mostly overlooked aspect of human-robot interaction. This video-based, 2×2 between-subjects survey experiment (N=119) found that the combination of touch and proactivity influenced whether people saw the robot as machine-like and dependable. Participants' attitude towards robots in general also influenced perceived closeness between humans and robots. Results show that communicative touch is considered a more appropriate behaviour for proactive agents than for reactive agents. Also, people who are generally more positive towards robots find robots that interact by touch less machine-like. These effects illustrate that careful consideration is necessary when incorporating social behaviours into agents' physical interaction design.
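
    In a 2×2 between-subjects design like this one, the joint effect of touch and proactivity would typically be tested with a two-way ANOVA. A minimal sketch under that assumption; the column names touch, proactivity, and machinelike are invented for illustration and the study's actual analysis may differ.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("touch_study.csv")  # hypothetical columns: touch
                                             # ("touch"/"no_touch"), proactivity
                                             # ("proactive"/"reactive"),
                                             # machinelike (rating)

        # Two-way between-subjects ANOVA; the C(touch):C(proactivity) row
        # of the table tests the touch x proactivity interaction.
        model = smf.ols("machinelike ~ C(touch) * C(proactivity)", df).fit()
        print(sm.stats.anova_lm(model, typ=2))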

    Trusting Humans and Avatars: Behavioral and Neural Evidence

    Over the past decade, information technology has dramatically changed the context in which economic transactions take place. Increasingly, transactions are computer-mediated, so that, relative to human-human interactions, human-computer interactions are gaining in relevance. Computer-mediated transactions, in particular those related to the Internet, increase perceptions of uncertainty, so trust becomes a crucial factor in reducing these perceptions. To investigate this important construct, we studied individual trust behavior and the underlying brain mechanisms through a multi-round trust game. Participants acted in the role of an investor, playing against both humans and avatars. The behavioral results show that participants trusted avatars to a similar degree as they trusted humans. Participants also showed similarity in learning an interaction partner's trustworthiness, independent of whether the partner was human or avatar. However, the neuroimaging findings revealed differential responses within the brain network associated with theory of mind (mentalizing), depending on the interaction partner. Based on these results, the major conclusion of our study is that, when a computer has human-like characteristics (an avatar), trust behavior in human-computer interaction resembles that of human-human interaction. On a deeper neurobiological level, our study reveals that thinking about an interaction partner's trustworthiness activates the mentalizing network more strongly if the trustee is a human rather than an avatar. We discuss implications of these findings for future research.
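
    The trust game mentioned above follows a standard exchange protocol: the investor sends some share of an endowment, the amount is multiplied on the way to the trustee, and the trustee decides how much to return. A minimal sketch of one round, using common textbook parameters (endowment 10, multiplier 3) rather than the study's actual ones.

        def trust_game_round(investment: float, return_fraction: float,
                             endowment: float = 10.0,
                             multiplier: float = 3.0) -> tuple[float, float]:
            """One investor/trustee exchange; returns both players' payoffs."""
            assert 0.0 <= investment <= endowment
            assert 0.0 <= return_fraction <= 1.0
            transferred = investment * multiplier     # investment is tripled
            returned = transferred * return_fraction  # trustee sends some back
            investor_payoff = endowment - investment + returned
            trustee_payoff = transferred - returned
            return investor_payoff, trustee_payoff

        # Example: investing 5 of 10 against a trustee who returns half
        print(trust_game_round(investment=5.0, return_fraction=0.5))  # (12.5, 7.5)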

    Towards human-compatible autonomous car: A study of non-verbal Turing test in automated driving with affective transition modelling

    Autonomous cars are indispensable as humans go further down the hands-free route. Although the existing literature highlights that acceptance of autonomous cars will increase if they drive in a human-like manner, little research has examined the human likeness of current autonomous cars from the naturalistic experience of a passenger's seat. The present study tested whether an AI driver could create a human-like ride experience for passengers, based on 69 participants' feedback in a real-road scenario. We designed a ride-experience-based version of the non-verbal Turing test for automated driving. Participants rode in autonomous cars (driven by either human or AI drivers) as passengers and judged whether the driver was human or AI. The AI driver failed to pass our test because passengers detected the AI driver above chance. In contrast, when the human driver drove the car, the passengers' judgement was around chance. We further investigated how human passengers ascribe humanness in our test. Based on Lewin's field theory, we advanced a computational model combining signal detection theory with pre-trained language models to predict passengers' humanness-rating behaviour. We employed the affective transition between pre-study baseline emotions and corresponding post-stage emotions as the signal strength of our model. Results showed that passengers' ascription of humanness increased with greater affective transition. Our study suggests an important role for affective transition in passengers' ascription of humanness, which might become a future direction for autonomous driving.
    Comment: 16 pages, 9 figures, 3 tables
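
    Detection "above chance" versus "around chance" in this kind of Turing-style judgement task is commonly quantified with signal detection theory. A minimal sketch of the sensitivity index d′ follows; the paper's full model, which adds affective transition and pre-trained language models, is not reproduced, and the response coding and counts below are invented for illustration.

        from scipy.stats import norm

        def d_prime(hits: int, misses: int,
                    false_alarms: int, correct_rejections: int) -> float:
            """d' = z(hit rate) - z(false-alarm rate), with a log-linear
            correction so rates of exactly 0 or 1 stay finite."""
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # "Hit" is hypothetically coded as correctly labelling the AI driver
        # as AI; d' > 0 means detection above chance, d' near 0 means
        # chance-level judgements, as reported for the human driver.
        print(d_prime(hits=45, misses=24, false_alarms=30, correct_rejections=39))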