
    A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity

    We map recently proposed notions of algorithmic fairness to economic models of equality of opportunity (EOP)---an extensively studied ideal of fairness in political philosophy. We formally show that, through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, this framework allows us to explicitly spell out the moral assumptions underlying each notion of fairness and to interpret recent fairness impossibility results in a new light. Last but not least, inspired by luck-egalitarian models of EOP, we propose a new family of measures of algorithmic fairness. We illustrate our proposal empirically and show that employing a measure of algorithmic (un)fairness when its underlying moral assumptions are not satisfied can have devastating consequences for the disadvantaged group's welfare.
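
    As a rough illustration (not the paper's formalism), the two fairness notions named in the abstract compare per-group confusion-matrix rates: equality of odds compares error rates such as the true-positive rate across groups, while predictive value parity compares positive predictive values. The helper below is a hypothetical sketch of computing both rates per group:

    ```python
    import numpy as np

    def group_rates(y_true, y_pred, group):
        """Per-group (TPR, PPV): equality of odds compares TPRs/FPRs
        across groups; predictive value parity compares PPVs."""
        rates = {}
        for g in np.unique(group):
            m = group == g
            tp = np.sum((y_pred == 1) & (y_true == 1) & m)
            fn = np.sum((y_pred == 0) & (y_true == 1) & m)
            pp = np.sum((y_pred == 1) & m)
            tpr = tp / max(tp + fn, 1)   # guard against empty groups
            ppv = tp / max(pp, 1)
            rates[g] = (tpr, ppv)
        return rates
    ```

    Equal TPRs across groups would satisfy (one half of) equality of odds; equal PPVs would satisfy predictive value parity; the impossibility results mentioned above show both generally cannot hold at once when base rates differ.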

    Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

    We draw attention to an important, yet largely overlooked, aspect of evaluating fairness for automated decision-making systems---namely, risk and welfare considerations. Our proposed family of measures corresponds to long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss-minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level inequality. (Presented at the Thirty-second Conference on Neural Information Processing Systems, NIPS 2018.)
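
    For intuition, one long-established cardinal-welfare family is the Atkinson equally-distributed-equivalent form, which interpolates between the utilitarian mean and increasingly inequality-averse (more Rawlsian) aggregates. The sketch below is illustrative only and not necessarily the paper's exact measure:

    ```python
    import numpy as np

    def ede_welfare(benefits, eps):
        """Atkinson equally-distributed-equivalent welfare over positive
        individual benefits. eps = 0 is the utilitarian mean; larger eps
        weighs the worse-off more (approaching Rawlsian max-min)."""
        b = np.asarray(benefits, dtype=float)
        if eps == 1:
            # limiting case: geometric mean
            return float(np.exp(np.mean(np.log(b))))
        return float(np.mean(b ** (1 - eps)) ** (1 / (1 - eps)))
    ```

    Because forms like this are concave in the benefit vector, a lower bound on welfare can serve as a convex constraint in a loss-minimization pipeline, which is the integration route the abstract describes.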

    Artificial Intelligence in Education as a Rawlsian Massively Multiplayer Game: A Thought Experiment on AI Ethics

    In this chapter, we reflect on the deployment of AI as a pedagogical and educational instrument. When AI enters classrooms, it becomes a project with diverse members who have differing stakes, and it produces various socio-cognitive-technological questions that must be discussed. Furthermore, AI is developing fast and renders obsolete old paradigms for, e.g., data access, privacy, and transparency. AI may bring many positive consequences in schools — not only for individuals or teachers, but for the educational system as a whole. On the other hand, there are also serious risks. Thus, the analysis of the educational uses of AI in future schools pushes us to compare the possible benefits (for example, using AI-based tools to support different learners) with the possible risks (for example, the danger of algorithmic manipulation, or of hidden algorithmic discrimination). Practical solutions are many, for example the Solid protocol by Tim Berners-Lee, but they are often conceived as solutions to single problems, with limited application. We describe a thought experiment: "education as a massively multiplayer social online game". Here, all actors (humans, institutions, AI agents, and algorithms) are required to conform to the definition of a player: a role designed to maximise protection and benefit for human players. AI models that understand the game space provide an API for typical algorithms, e.g. deep-learning neural nets or reinforcement-learning agents, to interact with the game space. Our thought experiment clarifies the steep challenges, and also the opportunity, of AI in education. (Peer reviewed.)

    Improving Identity-Robustness for Face Models

    Despite the success of deep-learning models in many tasks, there have been concerns about such models learning shortcuts, and about their lack of robustness to irrelevant confounders. When it comes to models trained directly on human faces, a sensitive confounder is that of human identity. Many face-related tasks should ideally be identity-independent and perform uniformly across different individuals (i.e., be fair). One way to measure and enforce such robustness and performance uniformity is to do so during training, assuming identity-related information is available at scale. However, due to privacy concerns and the cost of collecting such information, this is often not the case, and most face datasets simply contain input images and their corresponding task-related labels. Thus, improving identity-related robustness without the need for such annotations is of great importance. Here, we explore using face-recognition embedding vectors as proxies for identities to enforce such robustness. We propose to use the structure of the face-recognition embedding space to implicitly emphasize rare samples within each class. We do so by weighting samples according to their conditional inverse density (CID) in the proxy embedding space. Our experiments suggest that such a simple sample-weighting scheme not only improves training robustness but often improves overall performance as a result of that robustness. We also show that employing such constraints during training results in models that are significantly less sensitive to different levels of bias in the dataset.
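
    A minimal sketch of the idea of inverse-density sample weighting in a proxy embedding space (hypothetical helper names; the paper's exact CID estimator may differ; assumes at least k + 1 samples per class): samples in sparse regions of their class, i.e. rare identities, receive larger training weights.

    ```python
    import numpy as np

    def cid_weights(embeddings, labels, k=5):
        """Weight each sample by the inverse of its local same-class
        density, estimated from the mean distance to its k nearest
        same-class neighbors in the embedding space."""
        emb = np.asarray(embeddings, dtype=float)
        w = np.empty(len(emb))
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            d = np.linalg.norm(emb[idx, None, :] - emb[None, idx, :], axis=-1)
            # k nearest same-class neighbors, excluding self (distance 0)
            knn = np.sort(d, axis=1)[:, 1:k + 1]
            density = 1.0 / (knn.mean(axis=1) + 1e-8)
            w[idx] = 1.0 / density
        return w / w.sum() * len(w)   # normalize to mean weight 1
    ```

    These weights would then multiply the per-sample loss during training, upweighting isolated (rare-identity) samples within each class.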

    Calibrated Fairness in Bandits

    We study fairness within the stochastic multi-armed bandit (MAB) decision-making framework. We adapt the fairness framework of "treating similar individuals similarly" to this setting. Here, an "individual" corresponds to an arm, and two arms are "similar" if they have similar quality distributions. First, we adopt a smoothness constraint: if two arms have similar quality distributions, then the probability of selecting each arm should be similar. In addition, we define the fairness regret, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm equal the probability with which that arm has the best quality realization. We show that a variation on Thompson sampling satisfies smooth fairness for total variation distance, and give an $\tilde{O}((kT)^{2/3})$ bound on fairness regret. This complements prior work, which protects an on-average better arm from being less favored. We also explain how to extend our algorithm to the dueling bandit setting. (To be presented at the FAT-ML'17 workshop.)
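
    Standard Bernoulli Thompson sampling already has the calibration flavor described above: with Beta posteriors, each arm is selected with (approximately) the posterior probability that it is the best arm. A minimal sketch of one round, not the paper's smooth-fairness variant:

    ```python
    import random

    def thompson_step(successes, failures, rng=random):
        """One round of Bernoulli Thompson sampling: draw one sample from
        each arm's Beta(s+1, f+1) posterior and play the arm with the
        largest draw. Selection probabilities then match the posterior
        probability of each arm being best."""
        samples = [rng.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        return max(range(len(samples)), key=samples.__getitem__)
    ```

    The paper's variant modifies this scheme so that arms with similar quality distributions are selected with provably similar probabilities (smooth fairness), at the cost of the $\tilde{O}((kT)^{2/3})$ fairness-regret bound quoted above.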