13 research outputs found

    Accepting the Familiar: The Effect of Perceived Similarity with AI Agents on Intention to Use and the Mediating Effect of IT Identity

    With the rise and integration of AI technologies within organizations, our understanding of how this technology affects individuals remains limited. Although the IS use literature provides important guidance for organizations seeking to increase employees’ willingness to work with new technology, the utilitarian view of prior IS use research limits its applicability to the evolving social interaction between humans and AI agents. We contribute to the IS use literature by adopting a social view to understand the impact of AI agents on individuals’ perceptions and behavior. Focusing on the main design dimensions of AI agents, we propose a framework that draws on social psychology theories to explain the impact of those design dimensions on individuals. Specifically, we build on Similarity Attraction Theory to propose an AI similarity-continuance model that explains how similarity with AI agents influences individuals’ IT identity and their intention to continue working with those agents. Through an online brainstorming experiment, we found that similarity with AI agents indeed has a positive impact on IT identity and on the intention to continue working with the AI agent.

    Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced, and safe driving performance following emergency takeovers is impeded. A driving simulator study was used to assess the impact of dynamically communicating system uncertainties on monitoring, trust, workload, takeovers, and physiological responses. The uncertainty information was conveyed visually using a stylised heartbeat combined with a numerical display, while users were engaged in a visual search task. Multilevel analysis results suggest that uncertainty communication helps operators calibrate their trust and gain situation awareness prior to critical situations, resulting in safer takeovers. Additionally, eye-tracking data indicate that operators can adjust their gaze behaviour in correspondence with the level of uncertainty. However, conveying uncertainties using a visual display significantly increases operator workload and impedes users in the execution of non-driving-related tasks.

    Trust, Automation Bias and Aversion: Algorithmic Decision-Making in the Context of Credit Scoring

    Algorithmic decision-making (ADM) systems increasingly take on crucial roles in our technology-driven society, making decisions concerning, for instance, employment, education, finances, and public services. This article aims to identify people’s attitudes towards ADM systems, and the ensuing behaviours when dealing with them, as identified in the literature and in relation to credit scoring. After briefly discussing the main characteristics and types of ADM systems, we first consider trust, automation bias, automation complacency, and algorithmic aversion as attitudes towards ADM systems. These factors result in various behaviours by users, operators, and managers. Second, we consider how these factors could affect attitudes towards and use of ADM systems within the context of credit scoring. Third, we describe some possible strategies to reduce aversion, bias, and complacency, and consider several ways in which trust could be increased in the context of credit scoring. Importantly, although many advantages of applying ADM systems to complex choice problems can be identified, using ADM systems should be approached with care: the models ADM systems are based on are sometimes flawed, the data they gather to support or make decisions are easily biased, and the motives for their use may be unreflected upon or unethical.

    Technology as a Social Companion? An Exploration of Individual and Product-Related Factors of Anthropomorphism

    From chatbots that simulate human conversation to cleaning robots with an anthropomorphic appearance, humanlike technologies are becoming increasingly present in our society. A growing strand of research focuses on the psychological factors and motivations influencing anthropomorphism, that is, the attribution of human characteristics to non-human agents and objects. For example, studies have shown that feeling lonely can come along with attributing anthropomorphic qualities to objects; others imply that anthropomorphism might in turn influence individuals’ social needs. Such an interrelation could have great societal impact if, for example, interacting with humanlike technology were to reduce the need for interpersonal interaction. Yet the interrelation between anthropomorphism and social needs has not been studied systematically, and the individual as well as situational preconditions of anthropomorphism have not been specified. The present research investigates the interrelation between anthropomorphism and social needs, using the example of interacting with a smartphone, and highlights possible preconditions by means of two experimental studies using a 2 × 2 between-subjects design varying social exclusion and anthropomorphism. Our first study (N = 159) showed an overall positive correlation between the willingness to socialize and perceived anthropomorphism. Our second study (N = 236) highlighted that this relationship is especially pronounced for individuals with a high tendency to anthropomorphize, given that the product supports a humanlike perception through its appearance and design cues. In sum, the results support an interrelation between social needs and anthropomorphism but also stress individual and contextual strengthening factors. Limitations and theoretical and practical implications are discussed.

    Feel, Don’t Think: Review of the Application of Neuroscience Methods for Conversational Agent Research

    Conversational agents (CAs) equipped with human-like features (e.g., a name or avatar) have been reported to induce the perception of humanness and social presence in users, which can also affect other aspects of users’ affect, cognition, and behavior. However, current research is primarily based on self-reported measurements, leaving the door open for errors related to self-serving bias, socially desirable responding, negativity bias, and others. In this context, applying neuroscience methods (e.g., EEG or MRI) could provide a means to supplement current research. However, it is unclear to what extent such methods have already been applied and what future directions for their application might be. Against this background, we conducted a comprehensive and transdisciplinary review. Based on our sample of 37 articles, we find increased interest in the topic after 2017, with neural signals and trust/decision-making as emerging areas of research, and five separate research clusters describing current research trends.

    Can Robots Earn Our Trust the Same Way Humans Do?

    Robots increasingly act as our social counterparts in domains such as healthcare and retail. For these human-robot interactions (HRI) to be effective, the question arises of whether we trust robots the same way we trust humans. We investigated whether the determinants competence and warmth, known to influence interpersonal trust development, also influence trust development in HRI, and what role anthropomorphism plays in this interrelation. In two online studies with a 2 × 2 between-subjects design, we investigated the role of robot competence (Study 1) and robot warmth (Study 2) in trust development in HRI. Each study explored the role of robot anthropomorphism in the respective interrelation. Videos showing an HRI were used to manipulate robot competence (through varying gameplay competence) and robot anthropomorphism (through verbal and non-verbal design cues and the robot's presentation within the study introduction) in Study 1 (n = 155), as well as robot warmth (through varying compatibility of intentions with the human player) and robot anthropomorphism (same as Study 1) in Study 2 (n = 157). Results show a positive effect of robot competence (Study 1) and robot warmth (Study 2) on trust development in robots regarding anticipated trust and attributed trustworthiness. Subjective perceptions of competence (Study 1) and warmth (Study 2) mediated the interrelations in question. Considering the applied manipulations, robot anthropomorphism moderated neither the interrelation of robot competence and trust (Study 1) nor that of robot warmth and trust (Study 2). Considering subjective perceptions, perceived anthropomorphism moderated the effect of perceived competence (Study 1) and perceived warmth (Study 2) on trust at an attributional level. Overall, the results support the importance of robot competence and warmth for trust development in HRI and imply that determinants of trust development in interpersonal interaction transfer to HRI.
Results indicate a possible role of perceived anthropomorphism in these interrelations and support a combined consideration of these variables in future studies. These insights deepen the understanding of key variables and their interaction in trust dynamics in HRI and suggest potentially relevant design factors for enabling appropriate trust levels and a resulting desirable HRI. Methodological and conceptual limitations underline the benefits of a more robot-specific approach in future research.

    Mechanisms of cognitive trust development in artificial intelligence among front line employees: An empirical examination from a developing economy

    Drawing upon insights from the trust literature, we conducted two empirical studies with front-line employees of firms in Pakistan to investigate the factors influencing cognitive trust in artificial intelligence (AI). Study 1 consisted of 46 in-depth interviews aimed at exploring factors influencing cognitive trust. Based on the findings of Study 1, we developed a framework to enhance employees’ cognitive trust in AI. We then conducted a quantitative survey (Study 2) with 314 employees to validate the proposed model. The findings suggest that AI features positively influence employees’ cognitive trust, while work routine disruptions have a negative impact on cognitive trust in AI. The effectiveness of data governance was also found to facilitate employees' trust in data governance and, subsequently, employees' cognitive trust in AI. We contribute to the technology trust literature, especially in developing economies. We discuss the implications of our findings for both research and practice.

    Oxytocin Facilitates Social Learning by Promoting Conformity to Trusted Individuals

    There is considerable interest in the role of the neuropeptide oxytocin in promoting social cohesion, both in terms of promoting specific social bonds and, more generally, in increasing our willingness to trust others and/or to conform to their opinions. These latter findings may also be important in the context of a modulatory role for oxytocin in improving the efficacy of behavioral therapy in psychiatric disorders. However, the original landmark studies claiming an important role for oxytocin in enhancing trust in others, primarily using economic game strategies, have been questioned by subsequent meta-analytic approaches and by failures to reproduce the findings in different contexts. On the other hand, a growing number of studies have consistently reported that oxytocin promotes conformity to the views of in-group members. Most recently, we have found that oxytocin can increase acceptance of social advice given by individual experts without influencing their perceived trustworthiness per se, but that increased conformity in this context is associated with how much an expert is initially trusted and liked. Oxytocin can also enhance the impact of information given by experts by facilitating expectancy and placebo effects. Here we therefore propose that a key role for oxytocin is not in facilitating social trust per se but in conforming to, and learning from, trusted individuals who are either in-group members and/or perceived experts. The implications of this for social learning, and for the use of oxytocin as an adjunct to behavioral therapy in psychiatric disorders, are discussed.

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. It therefore becomes increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore questions about the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users’ self-perceptions, their perceptions of the technology, how they interact with the technology, and their performance. Examples include changes in a user’s trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users’ perceived agency and their self- and social identity, similar to interactions between humans. Afterwards, I critically examine current theories of anthropomorphism and present propositions about its nature based on the results of the empirical literature.
Subsequently, I introduce a two-factor model of anthropomorphism, proposing that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared agency effects or changing the users’ social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.

    Investigating the acceptance intentions of online shopping assistants in E-commerce interactions: mediating role of trust and effects of consumer demographics

    Online shopping has various advantages, such as convenience, easy access to information, a greater variety of products or services, discounts, and lower prices. However, the absence of salespeople's personalized assistance diminishes the online customer experience. Business-to-consumer e-commerce companies are increasingly implementing online shopping assistants (OSAs), interactive and automated tools used to assist customers in place of salespeople. However, no comprehensive model of OSA acceptance in e-commerce exists that includes constructs from multiple information systems disciplines, social psychology, and information security. This study aims to fill these gaps by empirically investigating consumers' intention to accept OSAs from functional, social, relational, and security perspectives. It identifies OSA acceptance factors in e-commerce through an extensive literature review and expert opinion. A research model is proposed after identifying structural relationships among the study's variables from the literature. The study employs partial least squares structural equation modeling (PLS-SEM) to validate the proposed model empirically. The results indicate that anthropomorphism, attitude, ease of use, enjoyment, privacy, trust, and usefulness are crucial determinants of acceptance. There are significant moderating effects of respondents' gender and education on OSA acceptance. The study's results have substantial implications for academia, extending and validating the Technology Acceptance Model (TAM) for OSA acceptance in e-commerce. The study will help e-commerce marketers develop optimal adoption strategies when implementing OSAs on social media platforms.