
    Robot relationships within communal/exchange service contexts: working paper

    Emerging technologies are rapidly transforming the nature of services and service experiences. One development predicted to have a significant impact on both is the integration of robots into service systems. However, the extant literature on service provider-user encounters and the relationships that follow from them implicitly assumes that the key social agents involved are primarily human. This proposed research addresses that gap by investigating the extent to which robot anthropomorphization/animacy influences user perceptions of competence/professionalism and/or social cognition, and considers the impact of these perceptions on provider-user relational trust within contrasting service contexts. Specifically, using an innovative methodological approach, it will examine the extent to which ‘communal’ and ‘exchange’ contexts influence relational development intention and the type of relationship sought by service users.

    Consumer intention to use service robots: A cognitive–affective–conative framework

    Purpose: Drawing on the cognitive–affective–conative framework, this study aims to develop a model of service robot acceptance in the hospitality sector by incorporating both cognitive evaluations and affective responses. Design/methodology/approach: A mixed-method approach combining qualitative and quantitative methods was used to develop the measurement instrument and test the research hypotheses. Findings: The results show that five cognitive evaluations (i.e. cuteness, coolness, courtesy, utility and autonomy) significantly influence consumers’ positive affect, leading to customer acceptance intention. Four cognitive evaluations (cuteness, interactivity, courtesy and utility) significantly influence consumers’ negative affect, which in turn positively affects consumer acceptance intention. Practical implications: This study provides significant implications for the design and implementation of service robots in the hospitality and tourism sector. Originality/value: Unlike traditional technology acceptance models, this study proposes a model based on the hierarchical relationships of cognition, affect and conation to enhance knowledge about human–robot interactions.
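    The hierarchical structure described in this abstract (cognitive evaluations feeding affect, affect feeding acceptance intention) can be made concrete with a small sketch. The Python below is purely illustrative: the construct names mirror those reported above, but the weights, the 1–7 scoring scale and the aggregation are invented placeholders, not estimates from the study.

```python
# Illustrative sketch of the hypothesized cognition -> affect -> conation paths.
# Construct names follow the abstract; all weights and the scoring scale are
# assumptions for demonstration only.

POSITIVE_AFFECT_ANTECEDENTS = ["cuteness", "coolness", "courtesy", "utility", "autonomy"]
NEGATIVE_AFFECT_ANTECEDENTS = ["cuteness", "interactivity", "courtesy", "utility"]

def predicted_acceptance(scores: dict[str, float]) -> float:
    """Toy two-stage prediction: cognitive scores feed affect, affect feeds intention."""
    positive_affect = sum(scores[k] for k in POSITIVE_AFFECT_ANTECEDENTS) / len(POSITIVE_AFFECT_ANTECEDENTS)
    negative_affect = sum(scores[k] for k in NEGATIVE_AFFECT_ANTECEDENTS) / len(NEGATIVE_AFFECT_ANTECEDENTS)
    # Both affect paths are given positive signs, as reported in the abstract;
    # the 0.6 / 0.3 weights are arbitrary placeholders.
    return 0.6 * positive_affect + 0.3 * negative_affect

ratings = {"cuteness": 5.0, "coolness": 6.0, "courtesy": 4.5,
           "utility": 6.5, "autonomy": 5.5, "interactivity": 4.0}
print(round(predicted_acceptance(ratings), 2))
```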

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. It is therefore increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze empirical studies that directly measure anthropomorphism as well as those that refer to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest that anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users’ self-perceptions, their perceptions of the technology, how they interact with the technology, and their performance. Examples include changes in a user’s trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users’ perceived agency and in their self- and social identity, much as in interactions between humans. I then critically examine current theories of anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism which proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared-agency effects or changing the user’s social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.
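    As a rough illustration of the proposed two-factor model, the sketch below encodes the two dimensions named in the abstract (initial perception: top-down/rational vs bottom-up/automatic; exhibited capacity: agency vs experience). The mapping from each combination to an outcome label is a hypothetical reading of the abstract, not a specification taken from the dissertation.

```python
# Hypothetical encoding of the two-factor model of anthropomorphism.
# The two dimensions come from the abstract; the outcome labels are assumptions.
from enum import Enum

class Perception(Enum):
    TOP_DOWN = "top-down / rational"
    BOTTOM_UP = "bottom-up / automatic"

class Capacity(Enum):
    AGENCY = "agency"
    EXPERIENCE = "experience"

def attribution_profile(perception: Perception, capacity: Capacity) -> str:
    """Return an illustrative label for a (perception, capacity) combination."""
    if perception is Perception.BOTTOM_UP and capacity is Capacity.AGENCY:
        return "automatic attribution of intent (plausibly linked to shared-agency effects)"
    if perception is Perception.TOP_DOWN and capacity is Capacity.EXPERIENCE:
        return "deliberate attribution of feelings (plausibly linked to bonding and empathy)"
    return "mixed profile; attributions likely vary with individual differences"

print(attribution_profile(Perception.BOTTOM_UP, Capacity.AGENCY))
```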

    Emotional design and human-robot interaction

    Recent years have seen emotions become increasingly important to the design field, an area known as emotional design. Emotional design aims to elicit certain emotions (e.g., pleasure) or prevent others (e.g., displeasure) during human-product interaction; that is, it regulates the emotional interaction between the individual and the product (e.g., a robot). Robot design is a growing area in which robots interact directly with humans and emotions are essential to the interaction. Therefore, this paper aims, through a non-systematic literature review, to explore the application of emotional design, particularly to human-robot interaction. Robot design features that affect emotional design (e.g., appearance, emotional expression and spatial distance) are introduced. The chapter ends with a discussion and a conclusion.

    Would You Obey an Aggressive Robot: A Human-Robot Interaction Field Study

    © 2018 IEEE. Social robots have the potential to be of tremendous utility in healthcare, search and rescue, surveillance, transport, and military applications. In many of these applications, social robots need to advise and direct humans to follow important instructions. In this paper, we present the results of a human-robot interaction field experiment conducted using a PR2 robot to explore key factors involved in human obedience to social robots. We focus on how the degree of human obedience to a robot's instructions relates to the perceived aggression and authority of the robot's behavior. We implemented several social cues to exhibit and convey both authority and aggressiveness in the robot's behavior. We also analyzed the impact of other factors, such as the perceived anthropomorphism, safety, intelligence and responsibility of the robot's behavior, on participants' compliance with its instructions. The results suggest that the degree of aggression participants perceived in the robot's behavior did not have a significant impact on their decision to follow its instructions. We provide possible explanations for these findings and identify new research questions that will help clarify the role of robot authority in human-robot interaction and guide the design of robots that are required to provide advice and instructions.
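    To make the reported analysis concrete, the sketch below shows one way the relationship between perceived aggression/authority and compliance could be tested. The data are fabricated placeholders and the logistic-regression setup is an assumed analysis choice; neither reflects the study's actual dataset or statistical procedure.

```python
# Hypothetical compliance analysis: binary obedience regressed on perceived
# aggression and authority ratings. All data below are fabricated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 60  # illustrative number of participants

ratings = np.column_stack([
    rng.integers(1, 8, size=n),  # perceived aggression, 1-7 scale (assumed)
    rng.integers(1, 8, size=n),  # perceived authority, 1-7 scale (assumed)
])
complied = rng.integers(0, 2, size=n)  # 1 = followed the robot's instruction

model = LogisticRegression().fit(ratings, complied)
print("coefficients (aggression, authority):", model.coef_[0])
```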

    Conversational AI Agents: Investigating AI-Specific Characteristics that Induce Anthropomorphism and Trust in Human-AI Interaction

    The investment in AI agents has steadily increased over the past few years, yet the adoption of these agents has been uneven. Industry reports show that the majority of people do not trust AI agents with important tasks. While existing IS theories explain users’ trust in IT artifacts, several new studies have raised doubts about their applicability in the context of AI agents. At first glance, an AI agent might seem like any other technological artifact. However, a more in-depth assessment exposes fundamental characteristics that set AI agents apart from previous IT artifacts. The aim of this dissertation, therefore, is to identify the AI-specific characteristics and behaviors that hinder or contribute to trust and distrust, thereby shaping users’ behavior in human-AI interaction. Using a custom-developed conversational AI agent, this dissertation extends the human-AI literature by introducing and empirically testing six new constructs, namely AI indeterminacy, task fulfillment indeterminacy, verbal indeterminacy, AI inheritability, AI trainability, and AI freewill.

    Artificial Intelligence and Robotics in Marketing

    This chapter illustrates the role of artificial intelligence (AI) and robotics in marketing and helps managers develop a deeper understanding of their potential to revolutionize the service experience. We summarize the use of AI and robots in practice and show that the adoption of AI predominantly occurs at the task level rather than the job level, implying that AI takes over some of the tasks that make up a job rather than the entire job. Based on these insights, we discuss the opportunities and drawbacks of AI and robots and reflect on whether service robots will complement or substitute human employees. Moreover, we explain why many consumers are still reluctant to engage with these new technologies and which conditions should be met in order to benefit from using service robots.
