2 research outputs found

    When Do Customers Perceive Artificial Intelligence as Fair? An Assessment of AI-based B2C E-Commerce

    Artificial intelligence (AI) enables new opportunities for business-to-consumer (B2C) e-commerce services, but it can also lead to customer dissatisfaction if customers perceive the implemented service not to be fair. While we have a broad understanding of the concept of fair AI, a concrete assessment of fair AI from a customer-centric perspective is lacking. Based on systemic service fairness, we conducted 20 in-depth semi-structured customer interviews in the context of B2C e-commerce services. We identified 19 AI fairness rules along four interrelated fairness dimensions: procedural, distributive, interpersonal, and informational. By providing a comprehensive set of AI fairness rules, our research contributes to the information systems (IS) literature on fair AI, service design, and human-computer interaction. Practitioners can leverage these rules for the development and configuration of AI-based B2C e-commerce services.

    Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert’s. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.
