    Can Robots Earn Our Trust the Same Way Humans Do?

    Robots increasingly act as our social counterparts in domains such as healthcare and retail. For these human-robot interactions (HRI) to be effective, the question arises whether we trust robots the same way we trust humans. We investigated whether the determinants competence and warmth, known to influence interpersonal trust development, also influence trust development in HRI, and what role anthropomorphism plays in this interrelation. In two online studies with a 2 × 2 between-subjects design, we investigated the role of robot competence (Study 1) and robot warmth (Study 2) in trust development in HRI. Each study also explored the role of robot anthropomorphism in the respective interrelation. Videos showing an HRI were used to manipulate robot competence (through varying gameplay competence) and robot anthropomorphism (through verbal and non-verbal design cues and the robot's presentation within the study introduction) in Study 1 (n = 155), as well as robot warmth (through varying compatibility of intentions with the human player) and robot anthropomorphism (same as Study 1) in Study 2 (n = 157). Results show a positive effect of robot competence (Study 1) and robot warmth (Study 2) on trust development in robots regarding anticipated trust and attributed trustworthiness. Subjective perceptions of competence (Study 1) and warmth (Study 2) mediated the interrelations in question. At the level of the applied manipulations, robot anthropomorphism moderated neither the interrelation of robot competence and trust (Study 1) nor that of robot warmth and trust (Study 2). At the level of subjective perceptions, perceived anthropomorphism moderated the effect of perceived competence (Study 1) and perceived warmth (Study 2) on trust on an attributional level. Overall, the results support the importance of robot competence and warmth for trust development in HRI and imply that determinants of trust development in interpersonal interaction transfer to HRI. Results further indicate a possible role of perceived anthropomorphism in these interrelations and support a combined consideration of these variables in future studies. These insights deepen the understanding of key variables and their interaction in trust dynamics in HRI and suggest possibly relevant design factors for enabling appropriate trust levels and a resulting desirable HRI. Methodological and conceptual limitations underline the benefits of a more robot-specific approach for future research.
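    The mediation and moderation analyses summarized above can be illustrated with a short, hypothetical sketch. The following Python snippet is not the authors' analysis pipeline; it uses synthetic data and invented variable names (perceived_competence, perceived_anthro, trust) solely to show how a moderating effect of perceived anthropomorphism on the competence-trust link could be tested via an interaction term in a regression model.

```python
# Hypothetical sketch (not the authors' analysis): testing whether perceived
# anthropomorphism moderates the effect of perceived competence on trust.
# Variable names and data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 155  # sample size reported for Study 1 in the abstract
perceived_competence = rng.normal(0, 1, n)
perceived_anthro = rng.normal(0, 1, n)
# Simulate trust with a main effect of competence plus a competence x anthropomorphism interaction
trust = (0.5 * perceived_competence
         + 0.2 * perceived_anthro
         + 0.3 * perceived_competence * perceived_anthro
         + rng.normal(0, 1, n))
df = pd.DataFrame({"trust": trust,
                   "perceived_competence": perceived_competence,
                   "perceived_anthro": perceived_anthro})

# Moderation: the interaction term carries the moderating effect
model = smf.ols("trust ~ perceived_competence * perceived_anthro", data=df).fit()
print(model.summary())  # a reliable interaction coefficient would indicate moderation
```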

    Do Users Really Want “Human-like” AI? The Effects of Anthropomorphism and Ego-morphism on User’s Perceived Anthropocentric Threat

    This paper aims to explore the development of a perceived anthropocentric threat (PAT) arising from the advancement of AI-based assistants (AIAs) beyond human capabilities. We highlight that while anthropomorphism offers valuable insights into human-AI interaction, it provides an incomplete understanding of advanced AIAs. To address this, we introduce the concept of ego-morphism, which emphasizes AIAs' unique behaviors and attributes, shifting the focus away from mere human resemblance. Building upon prior research on anthropocentrism (the belief that humans are the center of the universe), we define PAT in the context of AI's intelligence, autonomy, and ethical aspects. The study results reveal that when users perceive an AIA as possessing its own ego, they are more likely to perceive PAT, particularly in cases where AIAs violate ethical values. The findings unveil new insights into the black box phenomenon through the lens of ego-morphism and its association with PAT. They show that individuals favor AIAs resembling humans as long as they exhibit a human-like understanding of values and norms.

    Investigating the relationship between AI and trust in human-AI collaboration

    With the ongoing development of information technology, artificial intelligence (AI) has been widely implemented in recent years and has empowered virtual team collaboration by increasing efficiency and delivering superior collaboration results. Trust in the process of human-AI interaction has been identified as a challenge for team collaboration in this context. However, little research has investigated the relationship between human-AI interaction and trust. This study proposes a theoretical model of the relationship between human-AI interaction and team members' trust during collaboration processes. We conclude that team members' cognitive and emotional perceptions during the interaction process are associated with their trust toward AI. Moreover, this relationship may be moderated by specific AI implementation traits. Our model provides a holistic view of human-AI interaction and its association with team members' trust in the context of team collaboration.

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models for the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, perceptions of the technology, how users interact with the technology, and the users' performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and their self- and social identity, similarly to interactions between humans. Afterwards, I critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism, which proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational or bottom-up and automatic) and whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the users' social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.

    Almost human, but not really

    Technologies are becoming increasingly present in people's daily lives and often adopt the role of social counterparts. People have conversations with their smart voice assistants, and social robots assist with household tasks or even look after their users' mental and physical health. Thus, the human-technology relationship often resembles interpersonal relationships in several ways. While research has implied that the human-technology relationship can adopt a social character, it remains to be clarified in what ways and with regard to which variables the human-technology relationship and interpersonal relationships are comparable. Moreover, the question arises to what extent interaction with technology can address users' social needs in a way similar to a human counterpart and thereby possibly even affect interpersonal interaction. Here, the role of technology anthropomorphism, that is, the attribution of humanlike qualities to non-human agents or objects, needs to be specified. This thesis is dedicated to the relevance of the human-technology relationship for interpersonal relationships, with a focus on social needs. Within this overarching research aim, the studies included in this thesis focus on the dynamics of the human-technology relationship and their comparability to interpersonal relationships (RQ1), the potential of human-technology interaction to address users' social needs or to substitute their fulfillment through interpersonal interaction (RQ2), and the role of technology anthropomorphism regarding these relationships (RQ3). First, focusing on trust, which is integral to the relationship with a technology experienced as a counterpart, two consecutive experimental studies (study 1.1/1.2) were conducted. Based on a human-robot interaction, they explored trust development in the human-technology relationship as well as to what extent determinants known to affect interpersonal trust development are transferable. Moreover, they focused on the role of technology anthropomorphism in this relationship. A positive effect on trust in the technology emerged for technology competence, that is, its ability to achieve intended goals (study 1.1), as well as for technology warmth, that is, its pursuit of the same intentions and interests as the trustor (study 1.2). Thus, relevant determinants for trust development in the human-technology relationship were highlighted, also implying a transferability of essential dynamics of trust development from interpersonal relationships. Furthermore, perceived technology anthropomorphism appeared to affect the positive interrelation of perceived technology competence and trust in the technology (study 1.1) as well as the interrelation of perceived technology warmth and trust in the technology (study 1.2). These insights support the relevance of perceived technology anthropomorphism for trust dynamics within the human-technology relationship as well as for the transferability of corresponding dynamics from interpersonal relationships. Similarly, another study (study 2) explored the transferability of such dynamics for the variable of social connectedness, which is also key to relationship development and potentially relevant to the effect of interaction with technology on users' social needs. To this end, a two-week human-technology interaction with a conversational chatbot was investigated. 
The study focused on possibly relevant characteristics of the technology, such as its perception as anthropomorphic or socially present, and of the user, for example, the individual tendency to anthropomorphize or the individual need to belong. Moreover, a possible effect of social connectedness to the technology on the desire to socialize with other humans was explored. Findings showed that the duration and intensity of participants' interaction with the technology throughout the two-week study period positively predicted felt social connectedness to the technology, highlighting similarities to the dynamics of interpersonal relationship development. Furthermore, the relevance of technology anthropomorphism in the development of a human-technology relationship as well as its comparability to dynamics of interpersonal relationships was underlined. Namely, the more intensely individuals interacted with the technology, the more anthropomorphic they perceived it to be, and in turn the more socially connected to it they felt. Similarly, the longer and more intensely individuals interacted with the technology, the more socially present they perceived it to be, and in turn the more socially connected to it they felt. While, contrary to expectations, no interrelation between felt social connectedness to the technology and the desire to socialize with other humans emerged, this relationship was explored further in studies 3.1, 3.2, and 4. Two consecutive experimental studies (study 3.1/3.2) explored the potential of anthropomorphic technologies to fulfill social needs as well as how individually perceived anthropomorphism correlates with these needs. While social exclusion and technology anthropomorphism were manipulated in both studies, we applied a different manipulation of anthropomorphism in each study. Whereas in one study (study 3.1) participants answered anthropomorphic (vs. non-anthropomorphic) questions regarding their own smartphone, in the other study (study 3.2) they were confronted with smartphone designs with anthropomorphic (vs. non-anthropomorphic) design cues. In both studies, no effects of anthropomorphism and social exclusion on behavioral intention or willingness to socialize were found. Yet, study 3.1 showed a positive correlation between willingness to socialize and perceived technology anthropomorphism. Results of study 3.2 further supported this relationship and additionally showed that it was particularly strong for individuals with a high tendency to anthropomorphize when the technology came with anthropomorphic design cues regarding its appearance. Thus, findings imply a relationship between social needs and anthropomorphism and further hint at the relevance of individual and contextual strengthening factors. To complement these findings and foster a deeper understanding of the human-technology relationship as well as its potential to address users' social needs, a qualitative interview study was conducted (study 4). Findings highlight the potential of anthropomorphic technologies to address users' social needs in certain ways, but also underline essential differences in quality between human-technology interaction and interpersonal interaction. Examples are the technology's missing reactions to the user on a content-related, physical, and emotional level as well as the absence of satisfaction of users' social needs through interaction with technology. 
Additionally, insights hint at a social desirability bias, as interaction with technology that resembles interpersonal interaction often appears to be met with rather negative reactions from third parties. After an overview and brief summaries of the empirical studies included in this thesis, their research contribution is discussed. This is followed by an elaboration of the overall theoretical and practical implications of this thesis. Theoretical implications focus on how this work contributes to and extends theoretical and empirical work within the “computers are social actors” paradigm, particularly highlighting the role of technology anthropomorphism in this regard. Beyond exploring the social character of the human-technology relationship, this thesis offers insights into its potential to address users' social needs to an extent that interpersonal relationships can be affected. Implications for practitioners include insights into design examples that support the development of essential determinants of the human-technology relationship. They also offer a more abstract invitation to reflect on the design and application contexts of technologies in order to foster responsible handling of technology in people's daily lives. Finally, the thesis concludes with a discussion of general limitations and directions for future research.

    Trust in Human-Robot Interaction Within Healthcare Services: A Review Study

    There has always been a dilemma about the extent to which humans can rely on machines in different activities of daily living, ranging from riding in a self-driving car to having an iRobot vacuum clean the living room. However, when it comes to healthcare settings where robots are intended to work next to humans, decision-making gets difficult because repercussions may jeopardize people's lives. This has led scientists and engineers to take a step back and think outside the box. Putting the concept of trust under scrutiny, this study helps decipher complex human-robot interaction (HRI) attributes. Screening the essential constituents of what shapes trust in the human mind while working with a robot provides more in-depth insight into how to build and consolidate that trust. In the physiotherapeutic realm, this feeds into improving safety protocols and levels of comfort, as well as increasing the efficacy of robot-assisted physical therapy and rehabilitation. This paper provides a comprehensive framework for measuring trust by introducing several scenarios that are prevalent in rehabilitation environments. The proposed framework highlights the importance of clear communication between physicians and of how they expect the robot to intervene in a human-centered task. In addition, it reflects on patients' perception of robot assistance. Ultimately, recommendations are made to maximize the trust earned from patients, which in turn feeds into enhancing the efficacy of the therapy. This is an ongoing study; the authors are working with a local hospital to implement this know-how in a real-world application.

    The Influence of Anthropomorphic Chatbot Design on Consumer Tolerance of Service Failures: The Mediating Roles of Attachment and Cognitive Dissonance

    Problem statement: The widespread use of chatbots in hospitality and tourism leads to inevitable service failures. Although research has investigated the influence of chatbots' anthropomorphic cues, understanding of how distinct anthropomorphic cues influence user behavior during service failures is still limited. Methodology: To explore how the anthropomorphic design of chatbots affects users' tolerance for service failure, this research conducts a 2 (anthropomorphic appearance: 3D vs. 2D) x 2 (language style: informal vs. formal) x 2 (interdependent self-construal: high vs. low) between-subjects online experiment. Results: Results show that congruent anthropomorphic cues of chatbots can significantly improve consumers' tolerance, with attachment positively mediating this process. Additionally, the interdependent self-construal level plays a positive moderating role in this process. Implications: This study contributes theoretically by explicating anthropomorphism in attachment and cognitive dissonance theory and extending the understanding of self-construal theory. Moreover, the study provides recommendations for managers to design effective anthropomorphic chatbots.
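    As an illustration of the kind of design described above (not the study's actual analysis), the following hypothetical Python sketch analyzes a 2 x 2 x 2 between-subjects layout with a factorial ANOVA on synthetic data; the factor labels, cell sizes, and the assumed effect of congruent cues are invented for demonstration.

```python
# Hypothetical sketch (not the study's analysis): a 2 x 2 x 2 between-subjects
# design can be analyzed with a factorial ANOVA. Data and effects are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 30
rows = []
for appearance in ("3D", "2D"):
    for language in ("informal", "formal"):
        for construal in ("high", "low"):
            # Assumption for illustration: congruent cues (3D + informal) raise tolerance slightly
            base = 4.0 + (0.5 if appearance == "3D" and language == "informal" else 0.0)
            for score in rng.normal(base, 1.0, n_per_cell):
                rows.append({"appearance": appearance, "language": language,
                             "construal": construal, "tolerance": score})
df = pd.DataFrame(rows)

# Full factorial model: main effects plus all two- and three-way interactions
model = smf.ols("tolerance ~ C(appearance) * C(language) * C(construal)", data=df).fit()
print(anova_lm(model, typ=2))
```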

    Influence Of Task-role Mental Models On Human Interpretation Of Robot Motion Behavior

    The transition in robotics from tools to teammates has begun. However, the benefit autonomous robots provide will be diminished if human teammates misinterpret robot behaviors. Applying mental model theory as the organizing framework for human understanding of robots, the current empirical study examined the influence of task-role mental models of robots on the interpretation of robot motion behaviors, and the resulting impact on subjective ratings of robots. Observers (N = 120) were exposed to robot behaviors that were either congruent or incongruent with their task-role mental model, by experimental manipulation of preparatory robot task-role information to influence mental models (i.e., security guard, groundskeeper, or no information), the robot's actual task-role behaviors (i.e., security guard or groundskeeper), and the order in which these robot behaviors were presented. The results of the research supported the hypothesis that observers with congruent mental models were significantly more accurate in interpreting the motion behaviors of the robot than observers without a specific mental model. Additionally, an incongruent mental model, under certain circumstances, significantly hindered an observer's interpretation accuracy, resulting in subjective sureness of inaccurate interpretations. The strength of the effects that mental models had on the interpretation and assessment of robot behaviors was thought to have been moderated by the ease with which a particular mental model could reasonably explain the robot's behavior, termed mental model applicability. Finally, positive associations were found between differences in observers' interpretation accuracy and differences in subjective ratings of robot intelligence, safety, and trustworthiness. The current research offers implications for the relationships between mental model components, as well as implications for designing robot behaviors to appear more transparent, or opaque, to humans.
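    To make the congruence comparison and the accuracy-rating association concrete, the following hypothetical Python sketch (synthetic data, invented group sizes and effect sizes, not the study's reported analysis) contrasts interpretation accuracy between observers with a congruent mental model and observers given no task-role information, and then correlates accuracy with trustworthiness ratings.

```python
# Hypothetical sketch (not the study's analysis): group comparison of
# interpretation accuracy and its association with trustworthiness ratings.
# All data below are synthetic and for illustration only.
import numpy as np
from scipy.stats import ttest_ind, pearsonr

rng = np.random.default_rng(7)
n = 40  # observers per group, invented for illustration
accuracy_congruent = rng.normal(0.80, 0.10, n)  # proportion of behaviors interpreted correctly
accuracy_no_model = rng.normal(0.65, 0.10, n)

# Congruent mental model vs. no specific mental model
t_stat, p_value = ttest_ind(accuracy_congruent, accuracy_no_model)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Association between interpretation accuracy and subjective trustworthiness ratings
accuracy = np.concatenate([accuracy_congruent, accuracy_no_model])
trustworthiness = 2.0 + 3.0 * accuracy + rng.normal(0, 0.5, 2 * n)  # simulated ratings
r, p = pearsonr(accuracy, trustworthiness)
print(f"r = {r:.2f}, p = {p:.4f}")
```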