19 research outputs found

    The distracted robot: what happens when artificial agents behave like us

    In everyday life, we are frequently exposed to different smart technologies. From our smartphones to avatars in computer games, and soon perhaps humanoid robots, we are surrounded by artificial agents created to interact with us. Already during the design phase of an artificial agent, engineers often endow it with functions aimed at promoting interaction and engagement, ranging from its “communicative” abilities to the movements it produces. Still, whether an artificial agent that can behave like a human boosts the spontaneity and naturalness of interaction remains an open question. Even during interaction with conspecifics, humans rely partly on motion cues when they need to infer the mental states underpinning behavior. Similar processes may be activated during interaction with embodied artificial agents, such as humanoid robots. At the same time, a humanoid robot that can faithfully reproduce human-like behavior may undermine the interaction, causing a shift in attribution: from being endearing to being uncanny. Furthermore, it is still not clear whether individual biases and prior knowledge related to artificial agents can override perceptual evidence of human-like traits. A relatively new area of research has emerged in the context of investigating individuals’ reactions towards robots, widely referred to as Human-Robot Interaction (HRI). HRI is a multidisciplinary community that comprises psychologists, neuroscientists, and philosophers, as well as roboticists and engineers. However, HRI research has often been based on explicit measures (i.e., self-report questionnaires, a-posteriori interviews), while the more implicit social-cognitive processes elicited during interaction with artificial agents have taken second place behind more qualitative and anecdotal results. 
The present work aims to demonstrate the usefulness of combining the systematic approach of cognitive neuroscience with HRI paradigms to further investigate the social cognition processes evoked by artificial agents. Thus, this thesis aimed to explore human sensitivity to anthropomorphic characteristics of a humanoid robot's (i.e. the iCub robot's) behavior, based on motion cues, under different conditions of prior knowledge. To meet this aim, we manipulated the human-likeness of the behaviors displayed by the robot and the explicitness of the instructions provided to participants, in both screen-based and real-time interaction scenarios. Furthermore, we explored some of the individual differences that affect general attitudes towards robots and, consequently, the attribution of human-likeness.

    Infants' understanding of the epistemic nature of eye gaze during the second year of life

    The current thesis explored infants' implicit understanding of mental states during the second year of life. The first paper focused on infants' appreciation of the relationship between visual perception and knowledge. Based on an interactive search task, 24-month-olds demonstrated an understanding that people's eyes need to be unobstructed in order for them to be connected to the external world. Using a preferential looking paradigm, 18-month-olds predicted different behavior as a function of a person's visual experience. The second paper employed the preferential looking paradigm to investigate 18-month-olds' attributions of knowledge or ignorance when looking behavior was displayed by a person or a humanoid robot. Infants predicted different behavior as a function of the person's visual experience, while they did not demonstrate this expectation in the robot condition. The third paper explored infants' understanding of the epistemic nature of eye gaze within the context of a word learning task (Baldwin, 1993). In three experiments, 18-month-olds were exposed to either a human or a robot speaker who uttered novel labels for unfamiliar objects under two different eye gaze conditions. Although infants followed the eye gaze of the non-human speaker, they did not use the robot speaker's eye gaze cues to determine the correct referent of novel words, even when contingent interaction was added. When the speaker was human, infants used the speaker's eye gaze to determine the correct referent. Together, the findings from the studies presented in this dissertation suggest that by 18 months, infants possess an implicit appreciation of the relationship between visual perception and knowledge. The results also provide evidence for the notion that by 18 months, the scope of infants' concept of a mentalistic agent has narrowed relative to that demonstrated by younger infants.

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge, and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models for the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, perceptions of the technology, how users interact with the technology, and the users' performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency, and their self- and social-identity, just as in interactions between humans. Afterwards, I critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature. 
Subsequently, I introduce a two-factor model of anthropomorphism that proposes how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic), and whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the users' social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.

    The Influence of Acute Stress on the Perception of Robot Emotional Body Language: Implications for Robot Design in Healthcare and Other High-Risk Domains

    University of Minnesota Ph.D. dissertation. July 2017. Major: Human Factors/Ergonomics. Advisors: Kathleen Harder, Wilma Koutstaal. 1 computer file (PDF); viii, 131 pages. In coming years, emotionally expressive social robots will permeate many facets of our lives. Yet, although researchers have explored robot design parameters that may facilitate human-robot interaction, remarkably little attention has been paid to the human perceptual and other psychological factors that may impact humans' ability to engage with robots. In high-risk settings, such as healthcare—where the use of robots is expected to increase markedly—it is paramount to understand the influence of a patient's stress level, temperament, and attitudes towards robots, as negative interactions could harm a patient's experience and hinder recovery. Using a novel between-subject paradigm, we investigated how the experimental induction of acute physiological and cognitive stress versus low stress influences perception of normed robot emotional body language as conveyed by a physically present versus virtual-reality-generated robot. Following high or low stress induction, participants were asked to rate the valence (negative/unhappy to positive/happy) and level of arousal (calm/relaxed to animated/excited) conveyed by poses in five emotional categories: negative valence-high arousal, negative valence-low arousal, neutral, positive valence-low arousal, and positive valence-high arousal. Poses from the categories were randomly intermixed and each pose was presented two or three times. Ratings were then correlated with temperament (as assessed by the Adult Temperament Questionnaire), attitudes towards and experience with robots (a new questionnaire that included measures from the Godspeed Scales and Negative Attitudes about Robots Survey), and chronic stress. 
The acute stress induction especially influenced the evaluation of high arousal poses – both negative and positive – with both valence and arousal rated lower under high than low stress. Repeated presentation impacted perception of low arousal (negative and positive) and neutral poses, with increases in perceived valence and arousal for later presentations. There were also effects of robot type specifically for positively-valenced emotions, such that these poses were rated as more positive for the physically-present than the virtually-instantiated robot. Temperament was found to relate to the perception of robot emotional body language. Trait positive affect was associated with higher valence ratings for positive and neutral poses. Trait negative affect was correlated with higher arousal ratings for negative valence-low arousal poses. Subcategories within the robot attitudes questionnaire were correlated with emotional robot poses and temperament. To our knowledge, this dissertation is the first exploration of the effects of acute and chronic stress on human perception of robot emotional body language, with implications for robot design, both physical and virtual. Given the largely parallel findings that we observed for the poses presented by the physically-present versus virtually-instantiated robot, it is proposed that the use of virtual reality may provide a viable "sandbox" tool for more efficiently and thoroughly experimenting with possible robot designs and variants in their emotional expressiveness. Broader psychological, physiological, and other factors that designers should consider as they create robots for high-risk applications are also discussed.

    The Irresistible Animacy of Lively Artefacts

    This thesis explores the perception of ‘liveliness’, or ‘animacy’, in robotically driven artefacts. This perception is irresistible, pervasive, aesthetically potent and poorly understood. I argue that the Cartesian rationalist tendencies of robotic and artificial intelligence research cultures, and associated cognitivist theories of mind, fail to acknowledge the perceptual and instinctual emotional affects that lively artefacts elicit. The thesis examines how we come to see artefacts with particular qualities of motion as alive, and asks what notions of cognition can explain these perceptions. ‘Irresistible Animacy’ is our human tendency to be drawn to the primitive and strangely thrilling nature of experiencing lively artefacts. I employ two research methodologies: one is interdisciplinary scholarship, and the other is my artistic practice of building lively artefacts. I have developed an approach that draws on first-order cybernetics’ central animating principle of feedback-control, and second-order cybernetics’ concerns with cognition. The foundations of this approach are based upon practices of machine making to embody and perform animate behaviour, both as scientific and artistic pursuits. These have inspired embodied, embedded, enactive, and extended notions of cognition. I have developed an understanding using a theoretical framework drawing upon literature on visual perception, behavioural and social psychology, puppetry, animation, cybernetics, robotics, interaction and aesthetics. I take as a starting point the understanding that the vertebrate visual system includes active feature-detection for animate agents in our environment, and actively constructs the causal and social structure of this environment. I suggest perceptual ambiguity is at the centre of all animated art forms. Ambiguity encourages natural curiosity and interactive participation. It also elicits complex visceral qualities of presence and the uncanny. 
In the making of my own Lively Artefacts, I demonstrate a series of different approaches, including the use of abstraction, artificial life algorithms, and reactive techniques.
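
The feedback-control principle that first-order cybernetics treats as the animating core of lively machines can be illustrated with a minimal sketch. This is not code from the thesis; the gain and target values are hypothetical, chosen only to show the idea:

```python
def feedback_step(position, target, gain=0.3):
    """One tick of proportional feedback control: move toward the
    target by a fixed fraction of the remaining error."""
    error = target - position
    return position + gain * error

# An artefact "homing in" on a goal: the error shrinks every tick,
# producing the smooth, goal-directed motion observers read as lively.
pos = 0.0
for _ in range(20):
    pos = feedback_step(pos, target=1.0)
```

Even this one-line control law yields motion that observers readily describe as purposeful, which is the sense in which feedback-control serves as an animating principle.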

    Towards an understanding of humanoid robots in eLC applications


    Children's perception and interpretation of robots and robot behaviour

    The world of robotics, like that of all technology, is changing rapidly (Melson et al., 2009). As part of an inter-disciplinary project investigating the emergence of artificial culture in robot societies, this study set out to examine children’s perception of robots and interpretation of robot behaviour. This thesis is situated in the interdisciplinary field of human–robot interaction, drawing on research from the disciplines of sociology and psychology as well as the fields of engineering and ethics. The study was divided into four phases: phase one involved children from two primary schools drawing a picture and writing a story about their robot. In phase two, children observed e-puck robots interacting. Children were asked questions regarding the function and purpose of the robots’ actions. Phase three entailed data collection at a public event, the Manchester Science Festival. Three activities at the festival (‘XRay Art Under Your Skin’, ‘Swarm Robots’ and ‘Build-a-Bugbot’) formed the focus of this phase. In the first activity, children were asked to draw the components of a robot and were then asked questions about their drawings. During the second exercise, children’s comments were noted as they watched e-puck robot demonstrations. In the third exercise, children were shown images and asked whether these images were a robot or a ‘no-bot’. They were then prompted to provide explanations for their answers. Phase four of the research involved children identifying patterns of behaviour amongst e-pucks. This phase of the project was undertaken as a pilot for the ‘open science’ approach to research to be used by the wider project within which this PhD was nested. Consistent with existing literature, children endowed robots with animate and inanimate characteristics, holding multiple understandings of robots simultaneously. The notion of control appeared to be important in children’s conception of animacy. 
The results indicated that children’s perceptions of the locus of control play an important role in whether they view robots as autonomous agents or controllable entities. The ways in which children perceive robots and robot behaviour, and in particular the ways in which they give meaning to robots and robot behaviour, will potentially come to characterise a particular generation. Therefore, research should not only concentrate on the impact of these technologies on children but should focus on capturing children’s perceptions and viewpoints to better understand the impact of the changing technological world on the lives of children.

    New approaches to the emerging social neuroscience of human-robot interaction

    Prehistoric art, like the Venus of Willendorf sculpture, shows that we have always looked for ways to distil fundamental human characteristics and capture them in physically embodied representations of the self. Recently, this undertaking has gained new momentum through the introduction of robots that resemble humans in their shape and their behaviour. These social robots are envisioned to take on important roles: alleviate loneliness, support vulnerable children and serve as helpful companions for the elderly. However, to date, few commercially available social robots are living up to these expectations. Given their importance for an ever-older and more socially isolated society, rigorous research at the intersection of psychology, social neuroscience and human-robot interaction is needed to determine to what extent mechanisms active during human-human interaction can be co-opted when we encounter social robots. This thesis takes an anthropocentric approach to answering the question of how socially motivated we are to interact with humanoid robots. Across three empirical chapters and one theoretical chapter, I use self-report, behavioural and neural measures relevant to the study of interactions with robots to address this question. With the Social Motivation Theory of Autism as a point of departure, the first empirical chapter (Chapter 3) investigates the relevance of interpersonal synchrony for human-robot interaction. This chapter reports a null effect: participants did not find a robot that synchronised its movement with them on a drawing task more likeable, nor were they more motivated to ask it questions in a semi-structured interaction scenario. As this chapter heavily relies on self-report as a main outcome measure, Chapter 4 addresses this limitation by adapting an established behavioural paradigm for the study of human-robot interaction. 
This chapter shows that a failure to conceptually extend an effect in the field of social attentional capture calls for a different approach when seeking to adapt paradigms for HRI. Chapter 5 serves as a moment of reflection on the current state-of-the-art research at the intersection of neuroscience and human-robot interaction. Here, I argue that the future of HRI research will rely on interaction studies with mobile brain imaging systems (like functional near-infrared spectroscopy) that allow data collection during embodied encounters with social robots. However, going forward, the field should move slowly and carefully outside of the lab and into real situations with robots. As the previous chapters have established, well-known effects have to be replicated before they are implemented for robots, and before they are taken out of the lab into real life. The final empirical chapter (Chapter 6) takes the first step of this proposed slow approach: in addition to establishing the detection rate of a mobile fNIRS system in comparison to fMRI, this chapter contributes a novel way of digitising optode positions by means of photogrammetry. In the final chapter of this thesis, I highlight the main lessons learned conducting studies with social robots. I propose an updated roadmap which takes into account the problems raised in this thesis, and emphasise the importance of incorporating more open science practices going forward. Various tools that emerged out of the open science movement will be invaluable for researchers working on this exciting, interdisciplinary endeavour.

    The mentalizing triangle: how interactions among self, other and object prompt mentalizing

    To smoothly interact with other people, individuals must generate appropriate responses based on others’ mental states. The ability we rely on is termed mentalizing. As humans, it seems that we are endowed with the ability to rapidly process others’ mental states, either by taking their perspectives or by using mindreading skills. These abilities allow us to go beyond our direct experience of reality and to see or infer some of the contents of another’s mental world. Due to the complexity of social contexts, our mentalizing system needs to address a variety of challenges which place different requirements on either time or flexibility. Over years of research, investigators have proposed various theories to explain how we cope with these challenges. Among them, the two-system account proposed by Apperly and colleagues (2010) has been favoured by many studies. Concisely, the two-system account claims that we have a fast-initiated mentalizing system which allows us to make quick judgments with limited cognitive resources, and a flexible system which allows deliberate thinking and enables mentalizing to generalize to multiple targets. Such a framework provides good explanations of debates such as whether preverbal young children can engage in mentalizing. But it is still largely unknown how healthy adults engage in mentalizing in everyday life. Specifically, why does it seem easier for some targets to activate our mentalizing system, while with others we frequently fail to consider their perspectives or beliefs? To address this question, in my PhD I adopted a research orientation different from the two-system account, one that considers the dynamic interactions among three key elements in mentalizing: the self, agent(s), and object(s). I put forward a mentalizing triangle model and assume the interactions in these triadic relationships act as gateways triggering mentalizing. 
Thus, with some agents we feel more intimate, which makes it easier for us to think about their minds. Similarly, in certain contexts the agent may have frequent interactions with the object, and we thus become more motivated to engage in mentalizing. In the following chapters, I first reviewed the current literature and illustrated evidence that could support or oppose the triangle model, then examined these triangle hypotheses at both the behavioural and neuroimaging levels. In Study 1, I first measured mentalizing in a baseline condition where no interaction in the triangle relationships was provided. Adapting the false belief paradigm used by Kovacs, Teglas, & Endress (2010), I applied Signal Detection Theory to obtain additional indices that could reflect participants’ mentalizing processes. Results of this study showed that people have a weak tendency to ascribe beliefs to others when there is no interaction. Then, in Study 2, we added another condition which included the ‘agent-object’ interaction factor while using a paradigm similar to Study 1. Results in the non-interaction condition replicated our findings of Study 1, but adding ‘agent-object’ interactions did not boost mentalizing. Studies 3 and 4 tested the ‘self-agent’ interaction hypothesis in visual perspective taking (VPT), another basic mentalizing ability. In Study 3, I adopted a virtual reality approach and for the first time investigated how people select which perspective to take when exposed to multiple conflicting perspectives. Importantly, I examined whether the propensity to engage in VPT is correlated with how we perceive other people as humans, i.e. the humanization process. Congruent with our hypotheses, participants exhibited a stronger propensity to take a more humanised agent’s perspective. Then, in Study 4, I used functional near-infrared spectroscopy (fNIRS) and investigated the neural mechanism underlying this finding. 
In general, the ‘self-agent’ hypothesis in the mentalizing triangle model was supported, but the ‘agent-object’ hypothesis was not, which we consider may be due to several methodological limitations. The findings in this thesis are derived from applying novel approaches to classic experimental paradigms, and have shown the potential of using new techniques, such as VR and fNIRS, to investigate the philosophical question of mentalizing. The work also informs social cognitive studies by considering classic psychological methods, such as Signal Detection Theory, in future research.
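
The Signal Detection Theory indices alluded to above (sensitivity d′ and response criterion c) can be derived from raw response counts. A minimal sketch, with made-up counts and a standard log-linear correction for extreme rates (the exact indices and correction used in the thesis are not specified here):

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and c (response criterion) from counts.

    The +0.5 / +1 log-linear correction prevents infinite z-scores
    when a hit or false-alarm rate would otherwise be 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts: d' > 0 indicates above-chance discrimination
d, c = sdt_indices(hits=40, misses=10, false_alarms=15, correct_rejections=35)
```

Separating sensitivity from criterion is what makes SDT attractive here: it distinguishes genuine belief-tracking from a mere bias to respond one way.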

    At the fringes of normality – a neurocognitive model of the uncanny valley on the detection and negative evaluation of deviations

    Information violating preconceived patterns tends to be disliked. The term “uncanny valley” is used to describe such negative reactions towards nearly humanlike artificial agents, in which likability is a nonlinear function of human likeness. My work proposes and investigates a new neurocognitive theory of the uncanny valley and uncanniness effects within various categories. According to this refined theory of the uncanny valley, the degree of perceptual specialization increases the sensitivity to anomalies or deviations in a stimulus, which leads to a greater relative negative evaluation. As perceptual specialization is observed for many human-related stimuli (e.g., faces, voices, bodies, biological motion), attempts to replicate artificial human entities may lead to design errors which would be especially apparent due to a higher level of specialization, leading to the uncanny valley. The refined theory is established and investigated throughout 10 chapters. In Chapters 2 to 4, the correlative (Chapters 2 and 3) and causal (Chapter 4) associations between perceptual specialization, sensitivity to deviations, and uncanniness are observed. In Chapters 5 to 6, the refined theory is applied to inanimate object categories to validate its relevance in stimulus categories beyond those associated with the uncanny valley, specifically written text (Chapter 5) and physical places (Chapter 6). Chapters 7 to 10 critically investigate multiple explanations of the uncanny valley, including the refined theory. Chapter 11 applies the refined theory to ecologically valid stimuli of the uncanny valley, namely an android’s dynamic emotional expressions. Finally, Chapter 12 summarizes and discusses the findings and evaluates the refined theory of the uncanny based on its advantages and disadvantages. 
With this work, I hope to present substantial arguments for an alternative, refined theory of the uncanny that can more accurately explain a wider range of observations than the original uncanny valley hypothesis.
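
The nonlinear likeness-likability relationship at the heart of the uncanny valley can be sketched with a toy function. The functional form and every constant below are purely illustrative (not fitted to any data, and not the thesis's model): affinity rises with human likeness, dips sharply near, but short of, full human likeness, then recovers:

```python
import math

def affinity(likeness):
    """Toy uncanny-valley curve: likability as a nonlinear function of
    human likeness in [0, 1]. The Gaussian dip centred at 0.85 stands
    in for the valley; all constants are illustrative."""
    rise = likeness  # baseline: more humanlike tends to be more liked
    valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / (2 * 0.05 ** 2))
    return rise - valley
```

Under the refined theory, the depth of such a dip would track the degree of perceptual specialization for the stimulus category, rather than being a fixed property of human likeness alone.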