
    50 Years of Test (Un)fairness: Lessons for Machine Learning

    Quantitative definitions of what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning. We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work. In other cases, insights into what fairness means and how to measure it have largely been overlooked. We compare past and current notions of fairness along several dimensions, including the fairness criteria, the focus of the criteria (e.g., a test, a model, or its use), the relationship of fairness to individuals, groups, and subgroups, and the mathematical method for measuring fairness (e.g., classification, regression). This work points the way towards future research and measurement of (un)fairness that builds from our modern understanding of fairness while incorporating insights from the past.
    Comment: FAT* '19: Conference on Fairness, Accountability, and Transparency (FAT* '19), January 29-31, 2019, Atlanta, GA, USA
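
    As a concrete anchor for the kind of quantitative fairness criteria the survey compares, the sketch below computes two definitions that are standard in current machine-learning work: the demographic parity gap and the equalized odds gap. The data, group labels, and function names are illustrative assumptions, not taken from the paper.

        import numpy as np

        def demographic_parity_gap(y_pred, group):
            # Absolute difference in positive-prediction rates between groups.
            rates = [y_pred[group == g].mean() for g in np.unique(group)]
            return max(rates) - min(rates)

        def equalized_odds_gap(y_true, y_pred, group):
            # Largest gap across groups in false-positive rate (label 0)
            # or true-positive rate (label 1).
            gaps = []
            for label in (0, 1):
                mask = y_true == label
                rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
                gaps.append(max(rates) - min(rates))
            return max(gaps)

        # Toy data: binary predictions for two groups, "a" and "b".
        y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
        y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
        group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
        print(demographic_parity_gap(y_pred, group))      # 0.25
        print(equalized_odds_gap(y_true, y_pred, group))  # ~0.67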

    Explainable Disparity Compensation for Efficient Fair Ranking

    Ranking functions used in decision systems often produce disparate results for different populations because of bias in the underlying data. Addressing, and compensating for, these disparate outcomes is a critical problem for fair decision-making. Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees, or on quotas and set-asides that guarantee a minimum number of positive outcomes to members of underrepresented groups. In this paper we propose easily explainable, data-driven compensatory measures for ranking functions. Our measures rely on the generation of bonus points given to members of underrepresented groups to address disparity in the ranking function. The bonus points can be set in advance and can be combined, which makes it possible to account for intersecting group memberships and gives stakeholders better transparency. We propose efficient sampling-based algorithms that calculate the number of bonus points needed to minimize disparity. We validate our algorithms on real-world school-admissions and recidivism datasets, and compare our results with those of existing fair ranking algorithms.
    Comment: 22 pages, 5 figures
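
    A minimal sketch of the bonus-point mechanics described above, under illustrative assumptions: a fixed per-group bonus is added to raw scores before re-ranking, and disparity is measured as the share of top-k slots held by the underrepresented group. The paper's sampling-based algorithm for choosing the bonus values is not reproduced here; all scores and bonus values are made up.

        import numpy as np

        def rank_with_bonus(scores, group, bonus):
            # Add a fixed per-group bonus to raw scores, then rank best-first.
            # `bonus` maps group label -> bonus points; missing groups get 0.
            adjusted = scores + np.array([bonus.get(g, 0.0) for g in group])
            return np.argsort(-adjusted)

        def top_k_share(order, group, target, k):
            # Share of the top-k slots held by members of the `target` group.
            return float(np.mean(group[order[:k]] == target))

        # Toy data: group "b" is underrepresented at the top of the raw ranking.
        scores = np.array([0.90, 0.85, 0.80, 0.60, 0.55, 0.50])
        group = np.array(["a", "a", "a", "b", "b", "b"])

        raw = rank_with_bonus(scores, group, bonus={})
        comp = rank_with_bonus(scores, group, bonus={"b": 0.31})
        print(top_k_share(raw, group, "b", k=3))   # 0.0
        print(top_k_share(comp, group, "b", k=3))  # ~0.67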

    Economic shocks on subjective well-being: re-assessing the determinants of life-satisfaction after the 2008 financial crisis

    The paper investigates the extent to which life-satisfaction is biased by peer comparison, looking at the relative value attached to the different domains of life-satisfaction by social group, as suggested by Easterlin (Economics and happiness: framing the analysis, Oxford University Press, New York, 2005). We postulate that group membership influences the ranking of the satisfaction domains that affect subjective well-being, which allows individuals to return to their individual threshold over time. Using ordered probit models with random effects, evidence for professional (self-employed vs. employee) and social (male vs. female) groups from the British Household Panel Survey and Understanding Society (the UK Household Longitudinal Study) from 1996 to 2014 shows that the ranking of the satisfaction domains is group-based, suggesting a "keeping up with the Joneses" effect linked to the housing bubble.
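
    For readers unfamiliar with the estimator named above, the sketch below fits a plain pooled ordered probit with statsmodels on synthetic data. All variable names and values are invented, and the random-effects panel structure used in the paper would require a dedicated panel estimator beyond this sketch.

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        # Synthetic stand-in for the survey data: life satisfaction on a
        # 1-7 ordinal scale, explained by income and a self-employment dummy.
        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "log_income": rng.normal(10.0, 1.0, n),
            "self_employed": rng.integers(0, 2, n),
        })
        latent = 0.5 * df["log_income"] - 0.2 * df["self_employed"] + rng.normal(size=n)
        df["life_sat"] = pd.cut(latent, bins=7, labels=False) + 1  # integer-coded ordinal

        # Pooled ordered probit; the paper's random effects would sit on top of this.
        model = OrderedModel(df["life_sat"], df[["log_income", "self_employed"]],
                             distr="probit")
        result = model.fit(method="bfgs", disp=False)
        print(result.summary())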

    AI Fairness at Subgroup Level – A Structured Literature Review

    AI applications in practice often fail to gain the required acceptance from stakeholders due to fairness issues. Research has primarily investigated AI fairness at the individual or group level. However, a growing body of research points to shortcomings in this two-fold view: in particular, ignoring the heterogeneity within groups has created demand for fairness considerations at the subgroup level. Subgroups emerge from the conjunction of several protected attributes, and an equal distribution of classified individuals across subgroups is the fundamental goal. This paper analyzes the fundamentals of subgroup fairness and its integration with group and individual fairness. Based on a structured literature review, we analyze the existing concepts of subgroup fairness in research. Our paper raises awareness of this largely neglected topic in IS research and contributes a deeper understanding of the underlying concepts of AI subgroup fairness and their implications for AI development and operation in practice.
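
    To make the subgroup notion concrete: under the illustrative assumption of two protected attributes, the sketch below forms subgroups from their conjunction and compares positive-classification rates across them, which is the "equal distribution" criterion the abstract describes. Attribute names and predictions are hypothetical.

        import pandas as pd

        def subgroup_positive_rates(df, protected, outcome):
            # Positive-classification rate for every subgroup defined by the
            # conjunction of the given protected attributes.
            return df.groupby(protected)[outcome].mean()

        # Toy predictions; each (gender, age_band) conjunction is one subgroup.
        df = pd.DataFrame({
            "gender": ["f", "f", "f", "m", "m", "m", "f", "m"],
            "age_band": ["<40", "<40", "40+", "<40", "40+", "40+", "40+", "<40"],
            "y_pred": [1, 0, 0, 1, 1, 0, 1, 1],
        })
        rates = subgroup_positive_rates(df, ["gender", "age_band"], "y_pred")
        print(rates)
        print("max subgroup gap:", rates.max() - rates.min())  # 0.5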