
    How to Make Agents and Influence Teammates: Understanding the Social Influence AI Teammates Have in Human-AI Teams

    The introduction of computational systems in the last few decades has enabled humans to cross geographical, cultural, and even societal boundaries. Whether through the invention of the telephone or of file sharing, new technologies have continuously enabled humans to work better together. Artificial Intelligence (AI) is among the most promising of these technologies. Although AI has a multitude of functions within teaming, such as improving information sciences and analysis, one specific application that has become a critical topic in recent years is the creation of AI systems that act as teammates alongside humans, in what is known as a human-AI team. As AI systems transition into teammate roles, however, they garner new responsibilities and abilities, which ultimately give them greater influence over teams' shared goals and resources, otherwise known as teaming influence. That increase in teaming influence, in turn, provides AI teammates with a level of social influence. Unfortunately, while research has observed the impact of teaming influence by examining humans' perception and performance, an explicit understanding of the social influence that facilitates long-term teaming change has yet to be created. This dissertation uses three studies to create a holistic understanding of the social influence that AI teammates possess. Study 1 identifies the fundamental existence of AI teammate social influence and how it pertains to teaming influence. Qualitative data demonstrate that social influence is naturally created as humans actively adapt around AI teammate teaming influence. Furthermore, mixed-methods results demonstrate that the alignment of AI teammate teaming influence with a human's individual motives is the most critical factor in the acceptance of AI teammate teaming influence in existing teams. Study 2 further examines the acceptance of AI teammate teaming and social influence and how the design of AI teammates and humans' individual differences can impact this acceptance. The findings of Study 2 show that humans most readily accept AI teammate teaming influence that is comparable to their own teaming influence on a single task, but that acceptance of AI teammate teaming influence across multiple tasks generally decreases as teaming influence increases. Additionally, coworker endorsements are shown to increase the acceptance of high levels of AI teammate teaming influence, and humans who perceive the capabilities of technology in general to be greater are potentially more likely to accept AI teammate teaming influence. Finally, Study 3 explores how the teaming and social influence possessed by AI teammates change in a team that also contains teaming influence from multiple human teammates, and thus social influence between humans as well. Results demonstrate that AI teammate social influence can drive humans to prefer and observe their human teammates over their AI teammates, but humans' behavioral adaptations center more on their AI teammates than on their human teammates. These effects demonstrate that AI teammate social influence retains its potency in the presence of human-human teaming and social influence, but its effects differ depending on whether it impacts perception or behavior.
The above three studies fill a currently under-served research gap in human-AI teaming: the understanding of AI teammate social influence and of humans' acceptance of it. In addition, each study synthesizes its findings and contributions into actionable design recommendations that can serve as foundational design principles for the initial acceptance of AI teammates within society. Therefore, not only will the research community benefit from the results discussed throughout this dissertation, but so too will the developers, designers, and human teammates of human-AI teams.

    Cross-Border Collaboration in Disaster Management

    When a disaster strikes, a rapid and coordinated response by the various crisis management actors is essential in order to make the best possible use of available resources and thus limit its impact. This interplay becomes more difficult when the disaster affects several countries: in addition to differing regulations and systems, cultural influences such as language barriers and a lack of trust then play a decisive role. Although the resilience of border regions is of fundamental importance, it is still underestimated in the scientific literature. The first part of this thesis presents an agent-based model for studying inter-organizational collaboration during disaster response in a border region. By extending communication protocols from the literature to the context of cross-border cooperation, the model analyzes the global dynamics that emerge from local decisions. A scenario-based approach shows that although higher trust leads to significantly better supply rates, removing language barriers is even more effective, especially when actors speak the neighboring country's language directly instead of relying on a general lingua franca. The analysis of coordination shows that information flows along the hierarchical organizational structure are the most successful, while spontaneous collaboration through an established informal network of private contacts can complement information exchange and provide an advantage in dynamic environments. Furthermore, integrating spontaneous volunteers doubles the coordination effort; coordination along both dimensions, integration into the formal disaster management structures on the one hand and across borders on the other, nevertheless leads to optimal supply for the affected population. In its second part, this thesis presents an innovative empirical study design, based on transnational social capital and Weiner's theory of motivation, to quantify people's prosocial relationships across national borders, taking regional relationships within each country as the basis for comparison. Data collected through representative telephone interviews in Germany, France, and the German-French border region support the hypothesis that social capital and willingness to help across the German-French border are at least as high as regional social capital and willingness to help within the respective countries. The thesis thus provides valuable insights for decision-makers seeking to remove key barriers to cross-border cooperation and thereby improve cross-border resilience in future disasters. Implications for today's world with regard to globalization versus emerging nationalism, as well as the impacts of (natural) disasters, are discussed.
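As a rough illustration of the scenario-based approach described above, the hedged sketch below simulates agents on both sides of a border whose resource requests succeed with a probability that depends on trust and on whether the requester speaks the neighboring country's language. All function names, parameter values, and probabilities are illustrative assumptions, not the thesis's calibrated model.

```python
# Minimal sketch (assumed parameters): cross-border requests succeed with a
# probability shaped by trust and language compatibility; domestic requests
# are treated as near-frictionless for contrast.
import random

def run_scenario(trust, language_overlap, n_agents=100, n_rounds=50, seed=0):
    """Return the supply rate: the fraction of requests successfully served."""
    rng = random.Random(seed)
    # Half the agents belong to each country; an agent speaks the
    # neighbour's language with probability `language_overlap`.
    speaks_neighbour = [rng.random() < language_overlap for _ in range(n_agents)]
    served = total = 0
    for _ in range(n_rounds):
        for requester in range(n_agents):
            responder = rng.randrange(n_agents)
            cross_border = (requester < n_agents // 2) != (responder < n_agents // 2)
            total += 1
            if cross_border:
                # Cross-border success needs trust, and it degrades sharply
                # when the requester cannot speak the neighbour's language.
                p = trust * (1.0 if speaks_neighbour[requester] else 0.4)
            else:
                p = 0.9  # domestic coordination assumed near-frictionless
            served += rng.random() < p
    return served / total

for name, trust, lang in [("baseline", 0.5, 0.2),
                          ("higher trust", 0.8, 0.2),
                          ("neighbour's language", 0.5, 0.8)]:
    print(f"{name:22s} supply rate = {run_scenario(trust, lang):.2f}")
```

Even this toy version reproduces the qualitative pattern the abstract reports: raising trust helps, but raising language overlap helps more, because language gates the trust term entirely.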

    Using and Interacting with AI-Based Intelligent Technologies: Practical Applications on Autonomous Cars and Chatbots

    Artificial Intelligence (AI) is often considered one of the most promising and disruptive innovations of our time. Despite its rapid development, there is still a high level of uncertainty about how consumers are going to adopt AI. In this context, this four-article dissertation aims to understand how consumers use and interact with intelligent technologies, focusing in particular on two current applications: chatbots and autonomous vehicles (AVs). First, we conduct an in-depth analysis of the existing marketing literature, adopting scientometric and Theory-Context-Characteristics-Methodology approaches. From this, we define our research questions concerning 1) consumers' cognitive and emotional reactions when interacting with AI-based technologies that are able to simulate human-like conversations; 2) factors affecting consumers' intention to use AI-based technologies able to make decisions in critical situations, and the evolution of those factors across levels of automation; and 3) consumers' ethical concerns towards AI products and their effect on trust and usage intentions. By applying three between-subjects experimental designs, we answer our first research question, comparing human-human versus human-chatbot interactions and highly versus lowly anthropomorphic chatbots. We leverage insights mainly from the Cognitive Appraisal Theory of Emotions (Roseman et al. 1990), Attribution Theory (Weiner 2000), and the Theory of Anthropomorphism (Aggarwal and McGill 2007; Epley et al. 2018), showing that consumers' responses differ when interacting with a human versus a chatbot, according to the different attributions of responsibility and the different levels of anthropomorphism of the service agent. Next, we investigate how consumers' experience with different levels of automation affects perceptions of AI-based technologies. We use AVs as the unit of analysis, integrating the UTAUT framework with Trust Theory (McKnight et al. 2011), Privacy Calculus Theory (Dinev and Hart 2006), and the Theory of Well-being (Diener 1999; Diener and Chan 2011). After implementing a within-subjects design with field and simulator studies, the results suggest that differentiating between automation levels plays a key role in better understanding the potential drivers of adoption as well as the cognitive reactions that arise when using intelligent applications. Finally, we investigate consumers' ethical concerns surrounding chatbots and AVs. We employ a mixed-methods approach, using topic modeling and structural equation modeling. We show that for chatbots, the interactional and emotional component of the technology is predominant, as consumers highlight, among others, the emotional design and the lack of adaptability as the main ethical issues. For AVs, in contrast, the ethical concerns rather involve cognitive perceptions related to the transparency of the algorithms, the ethical design, the safety of the technology, and accessibility.
Our research contributes to the emerging literature on consumer behaviors related to intelligent products by highlighting the need to take into account the complexity of AI technologies across their different levels of automation and according to their intrinsic characteristics. We also offer methodological contributions through the implementation of innovative experimental research designs, using advanced tools and combining qualitative and quantitative approaches. To conclude, we present implications for both managers and policymakers who want to implement AI-based disruptive technologies such as chatbots and AVs.
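As a hedged illustration of the topic-modeling half of such a mixed-methods pipeline, the sketch below fits a small LDA model to a toy corpus of consumer comments. The corpus, the number of topics, and the reading of the output as "emotional" versus "cognitive" concerns are illustrative assumptions, not the dissertation's data or results.

```python
# Minimal topic-modeling sketch with scikit-learn's LDA on invented comments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "the chatbot pretends to feel emotions and that seems manipulative",
    "the bot could not adapt its answers to my actual problem",
    "i do not understand how the car's algorithm makes its decisions",
    "safety of the autonomous car in rain is unclear to me",
    "the self driving car is too expensive to be accessible",
]

vec = CountVectorizer(stop_words="english")  # drop function words
X = vec.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]  # 5 heaviest terms
    print(f"topic {k}: {', '.join(top)}")
```

In the dissertation's design, the themes surfaced by a step like this would then feed the structural equation model as constructs linking ethical concerns to trust and usage intentions.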

    Cross-Border Collaboration in Disaster Management

    In recent years, disaster events that spread across national borders have become more frequent, which requires improved collaboration between countries. By means of an agent-based simulation and an empirical study, this thesis provides valuable insights for decision-makers on how to overcome barriers to cross-border cooperation and thus enhance borderland resilience for future events. Finally, implications for today's world in terms of globalization versus emerging nationalism are discussed.

    Converging Measures and an Emergent Model: A Meta-Analysis of Human-Automation Trust Questionnaires

    A significant challenge in measuring human-automation trust is the proliferation of constructs, models, and questionnaires with highly variable validation. However, all agree that trust is a crucial element of technological acceptance, continued usage, fluency, and teamwork. Herein, we synthesize a consensus model for trust in human-automation interaction by performing a meta-analysis of validated and reliable trust survey instruments. To accomplish this objective, this work identifies the most frequently cited and best-validated human-automation and human-robot trust questionnaires, as well as the most well-established factors that form the dimensions and antecedents of such trust. To reduce both confusion and construct proliferation, we provide a detailed mapping of terminology between questionnaires. Furthermore, we perform a meta-analysis of the regression models that emerged from those experiments which used multi-factorial survey instruments. Based on this meta-analysis, we demonstrate a convergent, experimentally validated model of human-automation trust. This convergent model establishes an integrated framework for future research; it identifies the current boundaries of trust measurement and where further investigation is necessary. We close by discussing how to choose and design an appropriate trust survey instrument. By comparing, mapping, and analyzing well-constructed trust survey instruments, a consensus structure of trust in human-automation interaction is identified, disclosing a more complete and widely applicable basis for measuring trust, one that integrates the academic idea of trust with the colloquial, common-sense one. Given the increasingly recognized importance of trust, especially in human-automation interaction, this work leaves us better positioned to understand and measure it.
Comment: 44 pages, 6 figures. Submitted, in part, to ACM Transactions on Human-Robot Interaction (THRI).
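As a hedged illustration of one building block of such a meta-analysis, the sketch below pools per-study correlations between a hypothetical trust antecedent and trust, using a Fisher z-transform and a DerSimonian-Laird random-effects model. The r and n values are invented placeholders, not effect sizes from the paper, and the paper's actual synthesis covers full regression models, not single correlations.

```python
# Random-effects pooling of correlations (illustrative numbers only).
import numpy as np

r = np.array([0.42, 0.55, 0.31, 0.48])  # per-study correlations (assumed)
n = np.array([120, 85, 200, 60])         # per-study sample sizes (assumed)

z = np.arctanh(r)   # Fisher z-transform stabilises the variance
v = 1.0 / (n - 3)   # sampling variance of z
w = 1.0 / v         # fixed-effect weights

z_fe = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fe) ** 2)  # Cochran's Q heterogeneity statistic
df = len(r) - 1
# DerSimonian-Laird estimate of between-study variance tau^2
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)  # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)
print(f"pooled r = {np.tanh(z_re):.2f}, 95% CI [{lo:.2f}, {hi:.2f}], tau^2 = {tau2:.3f}")
```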

    Evaluating Privacy Adaptation Presentation Methods to support Social Media Users in their Privacy-Related Decision-Making Process

    Several privacy scholars have advocated for user-tailored privacy (UTP): a privacy-enhancing adaptive approach that helps reconcile users' lack of awareness, privacy management skills, and motivation to use available platform privacy features with their need for personalized privacy support in alignment with their privacy preferences. The idea behind UTP is to measure users' privacy characteristics and behaviors, use these measurements to create a personalized model of the user's privacy preferences, and then provide adaptive support to the user in navigating and engaging with the available privacy settings, or even implement certain settings automatically on the user's behalf. Most existing work on UTP has focused on the "measurement" and "algorithmic modeling" aspects, with less emphasis on the "adaptation" aspect. More specifically, limited research effort has been devoted to exploring the presentation of privacy adaptations that align with user privacy preferences. The concept of "presentation" goes beyond the visual characteristics of the adaptation: it can profoundly impact the required level of engagement with the system and the user's tendency to follow the suggested privacy adaptation. This dissertation evaluates the potential of three adaptation presentation methods in supporting social media users to make "better" privacy protection decisions: 1) automation, which involves the automatic application of privacy settings by the system without user input, relieving users from having to make frequent privacy decisions; 2) highlights, which emphasize certain privacy features to guide users to apply the settings themselves in a subtle but useful manner; and 3) suggestions, which explicitly inform users about the availability of certain settings that they can apply directly. The first study focuses on understanding user perspectives on the different configurations of autonomy and control across the three examined privacy adaptation presentation methods. A second, follow-up study examines the effectiveness of these adaptation presentation methods in improving user awareness of and engagement with available privacy features. Taking into account social media users' privacy decision-making process (i.e., the frequency with which they make privacy-related decisions), the final study assesses the impact of privacy-related affect and message framing (i.e., tone style) on users' privacy decisions in adaptation-supported social media environments. We offer insights and practical considerations toward the selection and use of "optimal" privacy adaptation methods to provide user-tailored privacy decision support.
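A minimal sketch of the "adaptation" step this dissertation examines might look like the following: given a modeled privacy profile, the system picks one of the three presentation methods. The profile fields, thresholds, and decision rules here are illustrative assumptions, not the dissertation's actual logic.

```python
# Hypothetical selector for UTP adaptation presentation (assumed rules).
from dataclasses import dataclass

@dataclass
class PrivacyProfile:
    desired_restriction: float  # modeled preference, 0 (open) .. 1 (restrictive)
    engagement: float           # willingness to operate settings manually, 0..1
    trust_in_system: float      # trust in automatic changes, 0..1

def choose_presentation(p: PrivacyProfile) -> str:
    """Return 'automation', 'highlight', or 'suggestion' for one setting."""
    if p.trust_in_system > 0.8 and p.engagement < 0.3:
        return "automation"   # apply the setting on the user's behalf
    if p.engagement > 0.6:
        return "highlight"    # subtly emphasise the feature; the user acts
    return "suggestion"       # explicitly inform; the user applies it directly

print(choose_presentation(PrivacyProfile(0.7, 0.2, 0.9)))  # -> automation
```

The point of the sketch is the trade-off the dissertation studies: each branch trades system autonomy against user control and engagement.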

    Developing and Facilitating Temporary Team Mental Models Through an Information-Sharing Recommender System

    It is well understood that teams are essential and common in many aspects of life, both work and leisure. Due to the importance of teams, much research attention has focused on how to improve team processes and outcomes. Of particular interest are the cognitive aspects of teamwork, including team mental models (TMMs). Among many other benefits, TMMs involve team members forming a compatible understanding of the task and team in order to make decisions more efficiently. This understanding is sometimes classified using four TMM domains: equipment (e.g., operating procedures), task (e.g., strategies), team interactions (e.g., interdependencies), and teammates (e.g., tendencies). Of particular interest to this dissertation is accelerating the development of teammate TMMs, which involve members understanding the knowledge, skills, attitudes, preferences, and tendencies of their teammates. An accurate teammate TMM allows teams to predict and account for the needs and behaviors of their teammates. Although much research has highlighted how the development of the four TMM domains can be supported, promoting the development of teammate TMMs is particularly challenging for a specific type of team: temporary teams. Temporary teams, in contrast to ongoing teams, involve unknown teammates, novel tasks, short task times (or, alternatively, limited interactions), and members disbanding after completing their task. These teams are increasingly used by organizations, as they can be agilely formed with individual members selected to accomplish a specific task. Such teams are commonly used in contexts such as film production, the military, emergency response, and software development, to name just a few. Importantly, although these teams benefit greatly from teammate TMMs, due to the efficiencies gained in decision making while working under tight deadlines, the literature offers little understanding of how to support temporary teams in this way. As prior research has suggested, one opportunity to accelerate teammate TMM development on temporary teams is to use technology to selectively share teammate information in support of these TMMs. However, this solution poses numerous privacy concerns. This dissertation uses four studies to create a foundational and thorough understanding of how recommender system technology can be used to promote teammate TMMs through information sharing while limiting privacy concerns. Study 1 takes a highly exploratory approach to set a foundation for the subsequent studies. It investigates what information is perceived to be helpful for promoting teammate TMMs on actual temporary teams. Qualitative data suggest that sharing teammate information related to skills/preferences, conflict management styles, and work ethic/reliability is perceived as beneficial to supporting teammate TMMs. These data also provide a foundational understanding of what should be involved in information-sharing recommendations for promoting teammate TMMs. Quantitative results indicate that conflict management data are perceived as more helpful and more appropriate to share than personality data. Study 2 investigates the presentation of these recommendations through the factors of anonymity and explanations.
Although explanations did not improve trust in or satisfaction with the system, providing recommendations associated with a specific teammate's name significantly improved several TMM-related team measures for actual temporary teams, compared to teams that received anonymous recommendations. This study also sheds light on what temporary team members perceive as the benefits of sharing this information and what they perceive as threats to their privacy. Study 3 investigates how the group/team context and individual differences can influence disclosure behavior when using an information-sharing recommender system. Findings suggest that members of teams that are assessed entirely as a team are more willing to unconditionally disclose personal information than members who are assessed as individuals or assessed in a mixed fashion, as both individuals and a team. The results also show how different individual differences and different information types are associated with disclosure behavior. Finally, Study 4 investigates how the occurrence and content of explanations can influence disclosure behavior and perceptions of an information-sharing recommender system. Data from this study highlight how benefit explanations provided during disclosure can increase disclosure, how explanations provided with recommendations can influence trust-related perceptions of competence, and how benefit-related explanations can decrease privacy concerns. The aforementioned studies fill numerous research gaps in the teamwork literature (i.e., on TMMs and temporary teams) and in recommender systems research. In addition to these contributions, this dissertation yields design recommendations that inform both the design of group recommender systems and the novel technology conceptualized through this dissertation: information-sharing recommender systems.
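To make the core trade-off of such an information-sharing recommender system concrete, the hedged sketch below ranks teammate attributes by expected benefit to teammate TMMs, penalized by privacy sensitivity, and only recommends items their owner has consented to disclose. The attribute names, scores, and weighting are illustrative assumptions, not the dissertation's implementation.

```python
# Hypothetical benefit-vs-privacy ranking for information sharing (assumed).
from dataclasses import dataclass

@dataclass
class InfoItem:
    attribute: str      # e.g. "conflict management style"
    benefit: float      # expected help to teammate mental models, 0..1
    sensitivity: float  # owner-reported privacy sensitivity, 0..1
    consented: bool     # did the owner agree to share this?

def recommend(items: list[InfoItem], k: int = 2, beta: float = 0.7) -> list[str]:
    """Top-k consented items by benefit penalised by privacy sensitivity."""
    eligible = [i for i in items if i.consented]  # respect disclosure consent
    eligible.sort(key=lambda i: i.benefit - beta * i.sensitivity, reverse=True)
    return [i.attribute for i in eligible[:k]]

items = [
    InfoItem("skills and preferences", 0.9, 0.2, True),
    InfoItem("conflict management style", 0.8, 0.4, True),
    InfoItem("personality profile", 0.5, 0.8, True),
    InfoItem("work schedule", 0.6, 0.3, False),
]
print(recommend(items))  # -> ['skills and preferences', 'conflict management style']
```

Under these assumed scores, the ranking mirrors Study 1's finding in spirit: high-benefit, lower-sensitivity items such as skills and conflict management styles surface ahead of sensitive personality data.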