14,888 research outputs found

    A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity

    We map the recently proposed notions of algorithmic fairness to economic models of Equality of Opportunity (EOP), an extensively studied ideal of fairness in political philosophy. We formally show that, through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, this framework allows us to explicitly spell out the moral assumptions underlying each notion of fairness and to interpret recent fairness impossibility results in a new light. Last but not least, inspired by luck-egalitarian models of EOP, we propose a new family of measures for algorithmic fairness. We illustrate our proposal empirically and show that employing a measure of algorithmic (un)fairness when its underlying moral assumptions are not satisfied can have devastating consequences for the disadvantaged group's welfare.
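    The two fairness notions this abstract names have simple operational definitions: equality of odds asks that error rates (e.g. the true positive rate) be equal across groups, while predictive value parity asks that the positive predictive value be equal across groups. A minimal sketch of how the two gaps are computed, on entirely made-up toy labels (the data and function names are illustrative, not from the paper):

    ```python
    # Hypothetical sketch: group disparities for the two fairness notions
    # named in the abstract. Toy data only; not from the paper.

    def rates(y_true, y_pred):
        """Return (true positive rate, positive predictive value) for one group."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        ppv = tp / (tp + fp) if tp + fp else 0.0
        return tpr, ppv

    # Illustrative (true labels, predictions) for two demographic groups.
    group_a = ([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
    group_b = ([1, 0, 1, 0, 0, 1], [1, 1, 0, 0, 0, 0])

    tpr_a, ppv_a = rates(*group_a)
    tpr_b, ppv_b = rates(*group_b)

    # Equality of odds: TPR (and FPR) should match across groups.
    # Predictive value parity: PPV should match across groups.
    print("TPR gap:", abs(tpr_a - tpr_b))
    print("PPV gap:", abs(ppv_a - ppv_b))
    ```

    The impossibility results the abstract alludes to show that, outside of degenerate cases, both gaps cannot simultaneously be driven to zero when base rates differ between groups, which is why choosing between such measures requires the moral assumptions the paper makes explicit.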

    Implementation Considerations for Mitigating Bias in Supervised Machine Learning

    Machine Learning (ML) is an important component of computer science and a mainstream way of making sense of large amounts of data. Although the technology is opening up new possibilities in many fields, there are also problems to consider, one of which is bias. Because ML algorithms rely on inductive reasoning to create mathematical models, the predictions and trends found by the models will never necessarily be true, just more or less probable. Knowing this, it is unreasonable for us to expect the applied deductive reasoning of these models to ever be fully unbiased. Therefore, it is important that we set expectations for ML that account for the limitations of reality. The current conversation around ML concerns how and when to implement the technology so as to mitigate the effect of bias on its results. This thesis suggests that the question of "whether" should be addressed first. We tackle the issue of bias from the standpoint of justice and fairness in ML, developing a framework for determining whether the implementation of a specific ML model is warranted. We accomplish this by emphasizing the liberal values that drive our definitions of societal fairness and justice, such as the separateness of persons, moral evaluation, freedom and understanding of choice, and accountability for wrongdoing.