
    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become part of our daily lives. It is therefore increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. To this end, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze empirical studies that directly measure anthropomorphism as well as those that refer to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions vary with individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, their perceptions of the technology, how they interact with the technology, and their performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and in their self- and social identity, much as in interactions between humans. I then critically examine current theories of anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism proposing that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared-agency effects or changing the user's social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.
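
    The two-factor model lends itself to a compact illustration. The sketch below (Python; an illustration only, where the type names and the mapping from factors to relational outcomes are our assumptions, not the author's implementation) encodes a technology's initial perception route and its apparent capacity, and maps the pair to the hypothesized relational effect.

        from dataclasses import dataclass
        from enum import Enum

        class PerceptionRoute(Enum):
            TOP_DOWN = "top-down / rational"
            BOTTOM_UP = "bottom-up / automatic"

        class Capacity(Enum):
            AGENCY = "agency"          # apparent capacity to plan and act
            EXPERIENCE = "experience"  # apparent capacity to feel and sense

        @dataclass
        class Technology:
            name: str
            route: PerceptionRoute
            capacity: Capacity

        def predicted_relation(tech: Technology) -> str:
            """Map the two factors to the hypothesized relational outcome."""
            if tech.capacity is Capacity.AGENCY:
                return f"{tech.name}: shared-agency effects (treated as a co-actor)"
            return f"{tech.name}: shift in the user's social identity (joins the in-group)"

        print(predicted_relation(
            Technology("voice assistant", PerceptionRoute.BOTTOM_UP, Capacity.AGENCY)))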

    Outline of a sensory-motor perspective on intrinsically moral agents

    This is the accepted version of the following article: Christian Balkenius, Lola Cañamero, Philip Pärnamets, Birger Johansson, Martin V. Butz, and Andreas Olsson, 'Outline of a sensory-motor perspective on intrinsically moral agents', Adaptive Behavior, 24(5): 306–319, October 2016, published in final form at DOI: https://doi.org/10.1177/1059712316667203 by SAGE. © The Author(s) 2016. We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follow those of other, non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion, and that know how to assign responsibility and blame.
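
    To make the first competency concrete, the following minimal sketch (an assumption-laden illustration, not the authors' model) grounds value in an emulated physiology: an event is valued by how much it moves a homeostatic variable back towards its set point, the kind of signal the authors propose as the seed of later social and moral appraisals.

        SET_POINT = 1.0  # ideal level of an internal resource, e.g. battery charge

        def intrinsic_value(level_before: float, level_after: float) -> float:
            """Value of an event = reduction in distance from the homeostatic set point."""
            error_before = abs(SET_POINT - level_before)
            error_after = abs(SET_POINT - level_after)
            return error_before - error_after  # positive if the event restored balance

        # Charging from 0.4 to 0.9 is valued positively; draining to 0.2 negatively.
        print(round(intrinsic_value(0.4, 0.9), 2))  # 0.5
        print(round(intrinsic_value(0.4, 0.2), 2))  # -0.2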

    Bridging the gap between emotion and joint action

    Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and conversely, joint action research has not yet found a way to include emotion as one of the key parameters for modelling socio-motor interaction. In this review, we first identify the gap and then compile evidence of the strong entanglement between emotion and acting together from various branches of science. We propose an integrative approach to bridge the gap, highlight five research avenues to do so in behavioral neuroscience and digital sciences, and address some of the key challenges in the area faced by modern societies.

    Review of Research on Human Trust in Artificial Intelligence

    Artificial Intelligence (AI) represents today's most advanced technologies that aim to imitate human intelligence. Whether AI can be successfully integrated into society depends on whether it can gain users' trust. We conduct a comprehensive review of recent research on human trust in AI and uncover the significant role of AI's transparency, reliability, performance, and anthropomorphism in developing trust. We also review how trust is diversely built and calibrated, and how human and environmental factors affect human trust in AI. Based on the review, we propose the most promising future research directions.

    A Cross-Cultural Comparison on Implicit and Explicit Attitudes Towards Artificial Agents

    Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge of the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in a Japanese and a Dutch sample, we investigated the effect of culture and robots' body types on explicit and implicit attitudes across two experiments (total n = 669). Partly in line with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots than Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference towards humans was moderate in both cultural groups, but in contrast to what we expected, neither culture nor robot embodiment influenced this preference. These results suggest that cultural differences in attitudes towards robots appear at the explicit, but not the implicit, level.
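
    For readers unfamiliar with the implicit measure used here, the sketch below shows the core idea behind an IAT preference score in simplified form: the difference in mean response latencies between pairing conditions, scaled by their pooled standard deviation. This is a reduced variant of the standard D score, not the study's exact pipeline, which also involves error penalties and trial filtering.

        import statistics

        def iat_d_score(compatible_ms: list[float], incompatible_ms: list[float]) -> float:
            """Simplified D score: latency difference scaled by pooled SD."""
            pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
            return (statistics.mean(incompatible_ms)
                    - statistics.mean(compatible_ms)) / pooled_sd

        # Toy latencies (ms): faster when 'human' and 'pleasant' share a response key.
        compatible = [620.0, 580.0, 640.0, 600.0]
        incompatible = [650.0, 620.0, 670.0, 640.0]
        # Positive D = implicit preference for the compatible pairing (here ~1.22).
        print(round(iat_d_score(compatible, incompatible), 2))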

    Autonomous Decision-Making based on Biological Adaptive Processes for Intelligent Social Robots

    International Mention in the doctoral degree (Mención Internacional en el título de doctor).

    The unceasing development of autonomous robots in many different scenarios is driving a new revolution to improve our quality of life. Recent advances in human-robot interaction and machine learning extend robots to social scenarios, where these systems are intended to assist humans in diverse tasks. Social robots are thus becoming a reality in many applications such as education, healthcare, entertainment, and assistance. Complex environments demand that social robots possess adaptive mechanisms to cope with different situations and successfully execute their tasks. Making autonomous and appropriate decisions is therefore essential for exhibiting reasonable behaviour and operating well in dynamic scenarios.

    Decision-making systems provide artificial agents with the capacity to decide how to behave based on input information from the environment. In recent decades, human decision-making has served researchers as an inspiration to endow robots with similar deliberation. Especially in social robotics, where people expect to interact with machines with human-like capabilities, biologically inspired decision-making systems have demonstrated great potential and interest. These systems are expected to continue providing a solid biological grounding and to improve the naturalness of human-robot interaction, usability, and the acceptance of social robots in the coming years.

    This thesis presents a decision-making system for social robots that act autonomously in healthcare, entertainment, and assistance. The system's goal is to provide robots with natural and fluid human-robot interaction while they carry out their tasks. The decision-making system integrates into an existing software architecture with different modules that manage human-robot interaction, perception, and expressiveness. Within this architecture, the decision-making system decides which behaviour the robot should execute after evaluating information received from the other modules. These modules provide structured data about planned activities, perceptions, and artificial biological processes that evolve over time and form the basis of natural behaviour.

    The robot's natural behaviour comes from the evolution of biological variables that emulate biological processes occurring in humans. We also propose a Motivational model, a module that emulates human biological processes to generate an artificial physiological and psychological state that influences the robot's decision-making. These processes emulate the natural biological rhythms of the human organism to produce biologically inspired decisions that improve the naturalness the robot exhibits during human-robot interactions. The robot's decisions also depend on what it perceives from the environment, the planned events listed in its agenda, and the unique features of the user interacting with it. Users are the most important stimuli the robot perceives, since they are the cornerstone of interaction. Social robots have to focus on assisting people in their daily tasks, considering that each person has different features and preferences. A robot devised for social interaction therefore has to adapt its decisions to the people who interact with it.
    The first step towards adapting to different users is identifying the user the robot interacts with. The robot then has to gather as much information as possible and personalise the interaction. The information about each user has to be actively updated when necessary, since outdated information may lead the user to reject the robot. Considering these facts, this work tackles user adaptation in three different ways:

    • The robot incorporates user profiling methods to continuously gather information from the user using direct and indirect feedback methods.
    • The robot has a Preference Learning System that predicts and adjusts the user's preferences for the robot's activities during the interaction.
    • An Action-based Learning System grounded on Reinforcement Learning is introduced as the origin of motivated behaviour.

    The functionalities mentioned above define the inputs received by the decision-making system for adapting its behaviour. Our decision-making system has been designed to be integrated into different robotic platforms thanks to its flexibility and modularity. Finally, we carried out several experiments to evaluate the architecture's functionalities in real human-robot interaction scenarios. In these experiments, we assessed:

    • How to endow social robots with adaptive affective mechanisms to overcome interaction limitations.
    • Active user profiling using face recognition and human-robot interaction.
    • A Preference Learning System we designed to predict and adapt to the user's preferences for the robot's entertainment activities.
    • A Behaviour-based Reinforcement Learning System that allows the robot to learn the effects of its actions and behave appropriately in each situation.
    • The robot's biologically inspired behaviour driven by emulated biological processes, and how the robot creates social bonds with each user.
    • The robot's expressiveness of affect (emotion and mood) and autonomic functions such as heart rate or blinking frequency.

    Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Chair: Richard J. Duro Fernández. Secretary: Concepción Alicia Monje Micharet. Committee member: Silvia Ross
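
    A minimal sketch of the biologically inspired decision loop described above follows; the variable names, decay rates, and behaviours are illustrative assumptions, not the thesis's actual parameters. Emulated biological variables decay over time, their deficits generate motivations, and the dominant motivation selects the robot's next behaviour.

        DECAY = {"energy": 0.05, "social": 0.08}  # drift per time step
        BEHAVIOUR = {"energy": "go to charger", "social": "start interaction"}

        def step(state: dict[str, float]) -> str:
            """Advance the artificial physiology one tick and pick a behaviour."""
            for var, decay in DECAY.items():
                state[var] = max(0.0, state[var] - decay)  # biological rhythm: slow decay
            # Motivation = deficit from the ideal level (1.0); act on the largest one.
            dominant = max(DECAY, key=lambda var: 1.0 - state[var])
            return BEHAVIOUR[dominant]

        state = {"energy": 0.9, "social": 0.6}
        for _ in range(3):
            print(step(state), state)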

    Consumer intention to use service robots: A cognitive–affective–conative framework

    Purpose: Drawing on the cognitive–affective–conative framework, this study aims to develop a model of service robot acceptance in the hospitality sector by incorporating both cognitive evaluations and affective responses. Design/methodology/approach: A mixed-method approach combining qualitative and quantitative methods was used to develop measures and test the research hypotheses. Findings: The results show that five cognitive evaluations (i.e. cuteness, coolness, courtesy, utility and autonomy) significantly influence consumers' positive affect, leading to customer acceptance intention. Four cognitive evaluations (cuteness, interactivity, courtesy and utility) significantly influence consumers' negative affect, which in turn positively affects consumer acceptance intention. Practical implications: This study provides significant implications for the design and implementation of service robots in the hospitality and tourism sector. Originality/value: Unlike traditional technology acceptance models, this study proposes a model based on the hierarchical relationships of cognition, affect and conation to enhance knowledge about human–robot interactions.
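
    The hierarchical cognition-affect-conation structure can be illustrated with a toy path model; the weights below are placeholders for illustration, not the study's estimates. Cognitive evaluations feed positive and negative affect, and both affective paths feed acceptance intention.

        def acceptance_intention(evals: dict[str, float]) -> float:
            """Toy path model: cognitive evaluations -> affect -> intention (0-1 scales)."""
            # Five evaluations drive positive affect (placeholder equal weights).
            positive_affect = sum(evals[k] for k in
                                  ("cuteness", "coolness", "courtesy", "utility", "autonomy")) / 5
            # Four evaluations drive negative affect, mirroring the findings above.
            negative_affect = sum(evals[k] for k in
                                  ("cuteness", "interactivity", "courtesy", "utility")) / 4
            # Both affective paths carry a positive weight into acceptance intention.
            return 0.6 * positive_affect + 0.4 * negative_affect

        print(round(acceptance_intention({
            "cuteness": 0.8, "coolness": 0.7, "courtesy": 0.9,
            "utility": 0.8, "autonomy": 0.6, "interactivity": 0.5}), 2))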

    Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together, and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them so that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature on the social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents for activating these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
