
    'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions. Comment: 14 pages, 3 figures, ACM Conference on Human Factors in Computing Systems (CHI'18), April 21--26, Montreal, Canada.

    Fair Algorithms in Organizations: A Performative-Sensemaking Model

    The past few years have seen an unprecedented explosion of interest in fair machine learning algorithms. Such algorithms are increasingly being deployed to improve fairness in high-stakes decisions in organizations, such as hiring and risk assessments. Yet, despite early optimism, recent empirical studies suggest that the use of fair algorithms is highly unpredictable and may not necessarily enhance fairness. In this paper, we develop a conceptual model that seeks to unpack the dynamic sensemaking and sensegiving processes associated with the use of fair algorithms in organizations. By adopting a performative-sensemaking lens, we aim to systematically shed light on how the use of fair algorithms can produce new normative realities in organizations, i.e., new ways to perform fairness. The paper contributes to the growing literature on algorithmic fairness and to practice-based studies of IS phenomena.

    Explainable Information Security: Development of a Construct and Instrument

    Despite increasing efforts to encourage information security (InfoSec) compliance, employees' refusal to follow and adopt InfoSec remains a challenge for organisations. Advancements in the behavioural InfoSec field have recently highlighted the importance of developing usable and employee-centric InfoSec that can motivate InfoSec compliance more effectively. In this research, we conceptualise the theoretical structure for a new concept called explainable InfoSec and develop a research instrument for collecting data about this concept. Data were then collected from 724 office workers via an online survey. Exploratory and confirmatory factor analyses were performed to validate the theoretical structure of the explainable InfoSec construct, and we performed structural equation modelling to examine the construct's impact on intention to comply with organisational InfoSec. The validated theoretical structure of explainable InfoSec consists of two dimensions, fairness and transparency, and the construct was found to positively influence compliance intention.

    Artificial Intelligence and Patient-Centered Decision-Making

    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim that black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.

    Modern Centaurs: How Humans and AI Systems Interact in Sales Forecasting

    Recent achievements of artificial intelligence (AI) have caused organizations to increasingly bring AI capabilities into their core business processes. Such AI-supported business processes often result in human+AI centaurs, which consist of an AI system that performs most of the execution and humans who monitor this execution and occasionally provide additional inputs and overrides. Using sales data from Walmart, we conduct an online study to investigate whether human supervision can improve upon state-of-the-art AI forecasts. Furthermore, we analyze the perceptions and behavioral intentions of the human participants over time. We find that human interventions consistently lead to less accurate forecasts and that participants initially underestimate the AI system's accuracy and overestimate their own potential to improve upon the AI forecasts. However, perceptions quickly shift over the course of the study, causing the participants to perceive the AI system increasingly favorably, which also leads to behavioral changes and better overall system performance.

    How Does AI Fail Us? A Typological Theorization of AI Failures

    AI incidents, often resulting from the complex interplay of algorithms, human agents, and situations, violate norms and can cause errors ranging from minor to catastrophic. This study systematically examines these incidents by developing a typology of AI failure and linking failure modes to AI task types. Using a computationally intensive grounded theory approach, we analyzed 466 unique reported real-world AI incidents from 2013 to 2023. Our findings reveal an AI failure typology with six modes: artifact malfunction, artifact misuse, algorithmic bias, agency oversight, situational unresponsiveness, and value misalignment. Furthermore, we explore the relationship between these failure modes and the tasks performed by AI, uncovering four propositions that provide a framework for future research. Our study contributes to the literature by offering a more holistic perspective on the challenges faced by AI-powered systems, beyond the critical challenges of fairness, transparency, and responsibility noted in the literature.

    The Flaws of Policies Requiring Human Oversight of Government Algorithms

    As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing their harms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic public review and approval before the agency can adopt the algorithm.

    Erasing the Bias Against Using Artificial Intelligence to Predict Future Criminality: Algorithms are Color Blind and Never Tire

    Many problems in the criminal justice system would be solved if we could accurately determine which offenders would commit offenses in the future. The likelihood that a person will commit a crime in the future is the single most important consideration that influences sentencing outcomes. It is relevant to the objectives of community protection, specific deterrence, and rehabilitation. The risk of future offending is also a cardinal consideration in bail and probation decisions. Empirical evidence establishes that judges are poor predictors of future offending—their decisions are barely more accurate than the toss of a coin. This undermines the efficacy and integrity of the criminal justice system. Modern artificial intelligence systems are much more accurate in determining whether a defendant will commit future crimes. Yet, the move towards using artificial intelligence in the criminal justice system is slowing because of increasing concerns regarding the lack of transparency of algorithms and claims that the algorithms are embedded with biased and racist sentiments. Criticisms have also been leveled at the reliability of algorithmic determinations. In this Article, we examine the desirability of using algorithms to predict future offending and, in the process, analyze the innate resistance that humans have towards deferring decisions of this nature to computers. It emerges that most people have an irrational distrust of computer decision-making. This phenomenon is termed "algorithmic aversion." We provide a number of recommendations regarding the steps that are necessary to surmount algorithmic aversion and lay the groundwork for the development of fairer and more efficient sentencing, bail, and probation systems.
