Enhancing Fairness, Justice and Accuracy of Hybrid Human-AI Decisions by Shifting Epistemological Stances

Abstract

From automating credit decisions to aiding judges presiding over recidivism cases, deep-learning-powered AI systems are becoming embedded in high-stakes decision-making processes, either as primary decision-makers or as assistants to humans in a hybrid decision-making context, with the aim of improving the quality of decisions. However, the criteria currently used to assess a system’s ability to improve hybrid decisions are driven by a utilitarian desire to optimise accuracy through a phenomenon known as ‘complementary performance’. This desire puts the design of hybrid decision-making at odds with critical subjective concepts that affect the perception and acceptance of decisions, such as fairness. Fairness, as a subjective notion, often stands in a competitive relationship with accuracy; as such, pursuing complementary behaviour under a utilitarian belief risks driving unfairness in decisions. It is our position that a shift in the epistemological stances taken in the research and design of human-AI environments is necessary to incorporate the relationship between fairness and accuracy into the notion of ‘complementary behaviour’, so that ‘enhanced’ hybrid human-AI decisions can be observed.
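
To make the fairness-accuracy tension concrete, the sketch below is a hypothetical illustration, not an artefact of the paper: the data, both models, and the choice of demographic parity as the fairness metric are our assumptions. It shows how a classifier optimised purely for accuracy can inherit group disparities present in the data, while an alternative that enforces parity does so only at a steep cost in accuracy.

```python
import numpy as np

# Illustrative sketch of the fairness-accuracy tension (hypothetical data
# and models; the metric choice is an assumption, not taken from the paper).

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Outcomes correlate with group membership (unequal base rates), so a purely
# accuracy-seeking predictor inherits the disparity in the data.
y_true = (rng.random(n) < np.where(group == 1, 0.7, 0.3)).astype(int)

# Model A: a noisy copy of the truth. High accuracy, but its positive
# rates differ across groups because the underlying base rates do.
model_a = np.where(rng.random(n) < 0.9, y_true, 1 - y_true)

# Model B: predicts the same label for everyone. Zero parity gap by
# construction, but accuracy collapses to the overall base rate.
model_b = np.ones(n, dtype=int)

for name, pred in [("A (accuracy-optimised)", model_a),
                   ("B (parity-enforcing)", model_b)]:
    print(f"Model {name}: accuracy={accuracy(y_true, pred):.2f}, "
          f"parity gap={demographic_parity_gap(pred, group):.2f}")
```

Under this setup, Model A scores roughly 0.90 accuracy with a parity gap near 0.32, while Model B achieves a gap of 0.00 at roughly 0.50 accuracy, which is the competitive relationship the abstract describes.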

