
    Improving fairness in machine learning systems: What do industry practitioners need?

    The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and the solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs. Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).

    Designing to Debias: Measuring and Reducing Public Managers’ Anchoring Bias

    Public managers’ decisions are affected by cognitive biases. For instance, employees’ previous year's performance ratings influence new ratings irrespective of actual performance. Nevertheless, experimental knowledge of public managers’ cognitive biases is limited, and debiasing techniques have rarely been studied. Using a survey experiment on 1,221 public managers and employees in the United Kingdom, this research (1) replicates two experiments on anchoring to establish empirical generalization across institutional contexts and (2) tests a consider-the-opposite debiasing technique. The results indicate that anchoring bias replicates in a different institutional context, although effect sizes differ. Furthermore, a low-cost, low-intensity consider-the-opposite technique mitigates anchoring bias in this survey experiment. An exploratory subgroup analysis indicates that the effect of the intervention depends on context. The next step is to test this strategy in real-world settings.

    Is it time we get real? A systematic review of the potential of data-driven technologies to address teachers' implicit biases

    Data-driven technologies for education, such as artificial intelligence in education (AIEd) systems, learning analytics dashboards, open learner models, and other applications, are often created with an aspiration to help teachers make better, evidence-informed decisions in the classroom. Addressing gender, racial, and other biases inherent to data and algorithms in such applications is seen as a way to increase the responsibility of these systems and has been the focus of much of the research in the field, including systematic reviews. However, implicit biases can also be held by teachers. To the best of our knowledge, this systematic literature review is the first of its kind to investigate what kinds of teacher biases have been impacted by data-driven technologies, how or whether these technologies were designed to challenge these biases, and which strategies were most effective at promoting equitable teaching behaviors and decision making. Following PRISMA guidelines, a search of five databases returned n = 359 records, of which only n = 2 studies, by a single research team, were identified as relevant. The findings show that there is minimal evidence that data-driven technologies have been evaluated in their capacity for supporting teachers to make less biased decisions or to promote equitable teaching behaviors, even though this capacity is often used as one of the core arguments for the use of data-driven technologies in education. By examining these two studies in conjunction with related studies that did not meet the eligibility criteria during the full-text review, we reveal the approaches that could play an effective role in mitigating teachers' biases, as well as ones that may perpetuate biases. We conclude by summarizing directions for future research that should seek to directly confront teachers' biases through explicit design strategies within teacher tools, to ensure that the impact of the biases of both technology (including data, algorithms, models, etc.) and teachers is minimized. We propose an extended framework to support future research and design in this area, through motivational, cognitive, and technological debiasing strategies.

    The Behavioral Paradox: Why Investor Irrationality Calls for Lighter and Simpler Financial Regulation

    It is widely believed that behavioral economics justifies more intrusive regulation of financial markets, because people are not fully rational and need to be protected from their quirks. This Article challenges that belief. First, insofar as people can be helped to make better choices, that goal can usually be achieved through light-touch regulations. Second, faulty perceptions about markets seem to be best corrected through market-based solutions. Third, increasing regulation does not seem to solve problems caused by lack of market discipline, pricing inefficiencies, and financial innovation; better results may be achieved with freer markets and simpler rules. Fourth, regulatory rule makers are subject to imperfect rationality, which tends to reduce the quality of regulatory intervention. Finally, regulatory complexity exacerbates the harmful effects of bounded rationality, whereas simple and stable rules give rise to positive learning effects.

    Cognitive Bias in Clinical Medicine

    Cognitive bias is increasingly recognised as an important source of medical error; it is ubiquitous across clinical practice yet incompletely understood. This increasing awareness of bias has resulted in a surge of clinical and psychological research in the area and the development of various ‘debiasing strategies’. This paper describes the potential origins of bias based on ‘dual process thinking’, discusses and illustrates a number of the important biases that occur in clinical practice, and considers potential strategies that might be used to mitigate their effect.

    A Framework for Integrating Implicit Bias Recognition into Health Professions Education

    Existing literature on implicit bias is fragmented and comes from a variety of fields, such as cognitive psychology, business ethics, and higher education, but implicit-bias-informed educational approaches have been underexplored in health professions education and are difficult to evaluate using existing tools. Despite increasing attention to implicit bias recognition and management in health professions education, many programs struggle to meaningfully integrate these topics into curricula. The authors propose a six-point actionable framework for integrating implicit bias recognition and management into health professions education that draws on the work of previous researchers and includes practical tools to guide curriculum developers. The six key features of this framework are creating a safe and nonthreatening learning context, increasing knowledge about the science of implicit bias, emphasizing how implicit bias influences behaviors and patient outcomes, increasing self-awareness of existing implicit biases, improving conscious efforts to overcome implicit bias, and enhancing awareness of how implicit bias influences others. Important considerations for designing implicit-bias-informed curricula - such as individual and contextual variables, as well as formal and informal cultural influences - are discussed. The authors also outline assessment and evaluation approaches that consider outcomes at individual, organizational, community, and societal levels. The proposed framework may facilitate future research and exploration regarding implicit bias in health professions education.

    Human-Centered Design to Address Biases in Artificial Intelligence

    The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but AI can also exacerbate these issues if not implemented in an equitable manner. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders and using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, AI can achieve its potential in health care.