    Trust in Artificial Intelligence: Comparing Trust Processes Between Human and Automated Trustees in Light of Unfair Bias

    Automated systems based on artificial intelligence (AI) increasingly support decisions with ethical implications, where decision makers need to trust these systems. However, insights regarding trust in automated systems predominantly stem from contexts where the main driver of trust is that systems produce accurate outputs (e.g., alarm systems for monitoring tasks). It remains unclear whether what we know about trust in automated systems translates to application contexts where ethical considerations (e.g., fairness) are crucial in trust development. In personnel selection, as a sample context where ethical considerations are important, we investigate trust processes in light of a trust violation relating to unfair bias and a trust repair intervention. Specifically, participants evaluated preselection outcomes (i.e., sets of preselected applicants) by either a human or an automated system across twelve selection tasks. We additionally varied information regarding the imperfection of the human and the automated system. In task rounds five through eight, the preselected applicants were predominantly male, thus constituting a trust violation due to potential unfair bias. Before task round nine, participants received an excuse for the biased preselection (i.e., a trust repair intervention). The results of the online study showed that participants initially have less trust in automated systems. Furthermore, the trust violation and the trust repair intervention had weaker effects for the automated system. These effects were partly stronger when system imperfection was highlighted. We conclude that insights from classical areas of automation only partially translate to the many emerging application contexts of such systems where ethical considerations are central to trust processes.

    Trust in Artificial Intelligence: Comparing trust processes between human and automated trustees in light of unfair bias

    Introducing automated systems based on artificial intelligence and machine learning for ethically sensitive decision tasks requires investigating trust processes in relation to such tasks. Using an example of such a task (personnel selection), this study investigates trustworthiness, trust, and reliance in light of a trust violation relating to ethical standards and a trust repair intervention. Specifically, participants evaluated applicant preselection outcomes by either a human or an automated system across twelve personnel selection tasks. We additionally varied information regarding the imperfection of the human and the automated system. In task rounds five through eight, the preselected applicants were predominantly male, thus constituting a trust violation due to a violation of ethical standards. Before task round nine, participants received an excuse for the biased preselection (i.e., a trust repair intervention). Results showed that participants initially perceived automated systems to be less trustworthy and had less intention to trust them. Specifically, participants perceived the systems to be less able and less flexible, but also less biased, a result that was sustained even in light of unfair bias. Furthermore, for the automated system, the trust violation and the trust repair intervention had weaker effects. These effects were partly stronger when imperfection was highlighted for the automated system. We conclude that it is crucial to investigate trust processes in relation to automated systems in ethically sensitive domains such as personnel selection, as insights from classical areas of automation might not translate to application contexts where ethical standards are central to trust processes.