    Likelihood of Questioning AI-Based Recommendations Due to Perceived Racial/Gender Bias

    Advances in artificial intelligence (AI) are giving rise to a multitude of AI-embedded technologies that increasingly impact all aspects of modern society. Yet there is a paucity of rigorous research on when, and which types of, individuals are more likely to question AI-based recommendations due to perceived racial and gender bias. This study, part of a larger research stream, contributes to knowledge through a scenario-based survey issued to a sample of 387 U.S. participants. The findings suggest that, with respect to perceived racial and gender bias, human resource (HR) recruitment and financial product/service procurement scenarios exhibit a higher likelihood of questioning, while the healthcare scenario presents the lowest. Furthermore, in the context of this study, U.S. participants are more likely to question AI-based recommendations due to perceived racial bias than to perceived gender bias.

    POVERTY LAWGORITHMS A Poverty Lawyer’s Guide to Fighting Automated Decision-Making Harms on Low-Income Communities

    Automated decision-making systems make decisions about our lives, and people with low socioeconomic status often bear the brunt of the harms these systems cause. Poverty Lawgorithms: A Poverty Lawyer's Guide to Fighting Automated Decision-Making Harms on Low-Income Communities is a guide by Data & Society Faculty Fellow Michele Gilman that familiarizes fellow poverty and civil legal services lawyers with the ins and outs of data-centric and automated decision-making systems, so that they can clearly understand the sources of the problems their clients are facing and effectively advocate on their behalf.

    Masked by Trust: Bias in Library Discovery

    The rise of Google and its integration into nearly every aspect of our lives has pushed libraries to adopt similar Google-like search tools, called discovery systems. Because these tools are provided by libraries and search scholarly materials rather than the open web, we often assume they are more accurate or reliable than general-purpose peers like Google or Bing. But discovery systems are still software written by people with prejudices and biases; library software vendors are subject to strong commercial pressures that are often hidden behind diffuse collection-development contracts and layers of administration; and these systems struggle to integrate content from thousands of different vendors whose collective disregard for consistent metadata compounds the problem. Library discovery systems struggle with accuracy, relevance, and human biases, and these shortcomings have the potential to shape the academic research and worldviews of the students and faculty who rely on them. While human bias, commercial interests, and problematic metadata have long affected researchers' access to information, algorithms in library discovery systems increase the scale of the negative effects on users, even as libraries continue to promote their search tools as objective and neutral.

    Repairing Innovation: A Study of Integrating AI in Clinical Care

    Over the past two years, a multi-disciplinary team of clinicians and technologists associated with Duke University and the Duke Health system has developed and implemented Sepsis Watch, a sociotechnical system combining an artificial intelligence (AI) deep learning model with new hospital protocols to raise the quality of sepsis treatment. Sepsis is a widespread and deadly condition that can develop from any infection and is one of the most common causes of death in hospitals. And while sepsis is treatable, it is notoriously difficult to diagnose consistently. This makes sepsis a prime candidate for AI-based interventions, where new approaches to patient data might improve detection and treatment and, ultimately, patient outcomes in the form of fewer deaths. As an application of AI, the deep learning model tends to eclipse the other parts of the system; in practice, Sepsis Watch is constituted by a complex combination of human labor and expertise, as well as technical and institutional infrastructures. This report brings into focus the critical role of human labor and organizational context in developing an effective clinical intervention by framing Sepsis Watch as a complex sociotechnical system, not just a machine learning model.