
    Amongst a Multitude of Algorithms: How Distrust Transfers Between Social and Technical Trust Referents in the AI-Driven Organization

    Although trust is identified as critical for successfully integrating Artificial Intelligence (AI) into organizations, we know little about trust in AI within the organizational context and even less about distrust in AI. Drawing on a longitudinal case study, in which we follow a data analytics team within an organization striving to become AI-driven, this paper reveals how distrust in AI unfolds in an organizational setting shaped by several distrust dynamics. We present three significant insights. First, distrust in AI is situated and involves both social and technical trust referents. Second, distrust is misattributed when a trust referent is rendered partly invisible to the trustor. Finally, distrust can be transferred between social and technical trust referents. We contribute to the growing literature on integrating AI in organizations by presenting a model of distrust transference activated by social and technical trust referents.

    Sustainable AI: An inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence

    This report is an inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence, conducted within the Swedish Vinnova-funded project “Hållbar AI – AI Ethics and Sustainability”, led by Anna Felländer. Based on a review and mapping of reports and studies, a quantitative and bibliometric analysis, and in-depth analyses of the healthcare sector, the telecom sector, and digital platforms, the report proposes three recommendations. Sustainable AI requires: 1. a broad focus on AI governance and regulation issues, 2. promoting multidisciplinary collaboration, and 3. building trust in AI applications and applied machine learning, which is a matter of key importance and requires further study of the relationship between transparency and accountability.

    Dahlander et al. (2023) - Blinded by the Person

    This is the data for the article "Blinded by the Person? Experimental Evidence from Idea Evaluation" (Dahlander et al., 2023). It includes data and code for the online experiment and the vignette study. We cannot share the data from the field experiment because they are proprietary.

    Blinded by the person? Experimental evidence from idea evaluation

    Research Summary: Seeking causal evidence on biases in idea evaluation, we conducted a field experiment in a large multinational company with two conditions: (a) blind evaluation, in which managers received no proposer information, and (b) non-blind evaluation, in which they received the proposer's name, unit, and location. To our surprise, and in contrast to the preregistered hypotheses, we found no biases against women or proposers from different units and locations that blinding could ameliorate. Addressing challenges that remained intractable in the field experiment, we conducted an online experiment, which replicated the null findings. A final vignette study showed that people overestimated the magnitude of the biases. The studies suggest that idea evaluation can be less prone to biases than previously assumed and that evaluators separate ideas from proposers.

    Managerial Summary: We wanted to find out whether there are biases in the way managers evaluate ideas from their employees. We ran a field experiment in a large multinational technology company where we tested two different ways of evaluating ideas: one where managers did not know anything about the person who came up with the idea, and one where they knew the person's name, which unit they worked for, and where they were located. The results were surprising. We did not find any bias against women or against employees who did not work in the same location and unit as the evaluator. Managers are advised that hiding the identity of idea proposers (from idea evaluators) may not be a silver bullet for improving idea evaluation.

    ISSN: 0143-2095; ISSN: 1097-026

    Getting AI Implementation Right: Insights from a Global Survey

    While the promise of artificial intelligence (AI) is pervasive, many companies struggle with AI implementation challenges. This article presents results from a survey of 2,525 decision-makers with AI experience in China, Germany, India, the United Kingdom, and the United States, as well as interviews with 16 AI implementation experts, in order to understand the challenges companies face when implementing AI. The study covers technological, organizational, and cultural factors and identifies key challenges and solutions for AI implementation. The article develops a diagnostic framework to help executives navigate AI challenges as companies gain momentum, manage organization-wide complexities, and curate a network of partners, algorithms, and data sources to create value through AI.

    ISSN: 0008-125

    Opinion piece: "Today, artificial intelligence is being integrated into people's everyday lives without sufficient knowledge of what that entails. Laws and regulations are lagging behind"

    A broader understanding of the social, ethical, and legal effects of AI technology is needed, write six representatives of academia and industry. Sweden is well positioned to succeed in the ongoing race in artificial intelligence (AI). But this requires that politicians, academia, and industry collaborate and build up interdisciplinary competence.

    HÅLLBAR AI: An inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence

    This is an inventory of the state of knowledge of ethical, social, and legal challenges related to artificial intelligence, carried out within a Vinnova-funded project led by Anna Felländer. Based on a mapping of reports and studies, a quantitative and bibliometric analysis, and in-depth analyses of healthcare, telecom, and digital platforms, three recommendations are given. Sustainable AI requires that we 1. focus on regulatory issues in a broad sense, 2. stimulate multidisciplinary work and collaboration, and 3. recognize that building trust in the use of societally applied artificial intelligence and machine learning is central and requires more knowledge about the relationship between transparency and accountability.