10,145 research outputs found

    Algorithms for Social Good: A Study of Fairness and Bias in Automated Data-Driven Decision-Making Systems

    The abstract is in the attachment.

    Contributive Justice: An exploration of a wider provision of meaningful work

    Extreme inequality of opportunity leads to a number of social tensions, inefficiencies and injustices. One issue of increasing concern is the effect inequality is having on people’s fair chances of attaining meaningful work, thus limiting opportunities to make a significant positive contribution to society and reducing the chances of living a flourishing life and developing their potential. On a global scale we can observe an increasingly uneven provision of meaningful work, raising a series of ethical concerns that need detailed examination. The aim of this article is to explore the potential of a normative framework based upon the idea of contributive justice to defend a fairer provision of meaningful work.

    Equity of Attention: Amortizing Individual Fairness in Rankings

    Rankings of people and items are at the heart of selection-making, match-making, and recommender systems, ranging from employment sites to sharing economy platforms. As ranking positions influence the amount of attention the ranked subjects receive, biases in rankings can lead to unfair distribution of opportunities and resources, such as jobs or income. This paper proposes new measures and mechanisms to quantify and mitigate unfairness from a bias inherent to all rankings, namely, the position bias, which leads to disproportionately less attention being paid to low-ranked subjects. Our approach differs from recent fair ranking approaches in two important ways. First, existing works measure unfairness at the level of subject groups while our measures capture unfairness at the level of individual subjects, and as such subsume group unfairness. Second, as no single ranking can achieve individual attention fairness, we propose a novel mechanism that achieves amortized fairness, where attention accumulated across a series of rankings is proportional to accumulated relevance. We formulate the challenge of achieving amortized individual fairness subject to constraints on ranking quality as an online optimization problem and show that it can be solved as an integer linear program. Our experimental evaluation reveals that unfair attention distribution in rankings can be substantial, and demonstrates that our method can improve individual fairness while retaining high ranking quality. Comment: Accepted to SIGIR 2018.
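
    The following is a minimal illustrative sketch of the amortized-fairness idea described in this abstract, not the authors' implementation: position bias is modelled with an assumed logarithmic discount, and amortized unfairness is measured as the gap between each subject's share of accumulated attention and its share of accumulated relevance. The function names, the bias model, and the static relevance are assumptions made for illustration only.

    import numpy as np

    def position_attention(n_positions):
        # Assumed position-bias model: attention decays logarithmically with rank.
        return 1.0 / np.log2(np.arange(2, n_positions + 2))

    def amortized_unfairness(rankings, relevance):
        # rankings: list of 1-D integer arrays of subject ids, top to bottom.
        # relevance: 1-D array of per-subject relevance (assumed static per round).
        attention = np.zeros(len(relevance))
        cumulative_relevance = np.zeros(len(relevance))
        for ranking in rankings:
            attention[ranking] += position_attention(len(ranking))
            cumulative_relevance += relevance
        # Amortized fairness asks attention shares to track relevance shares.
        attention_share = attention / attention.sum()
        relevance_share = cumulative_relevance / cumulative_relevance.sum()
        return np.abs(attention_share - relevance_share).sum()

    # Example: three equally relevant subjects rotated through the top position
    # accumulate equal attention, so amortized unfairness is (close to) zero.
    rankings = [np.array([0, 1, 2]), np.array([1, 2, 0]), np.array([2, 0, 1])]
    print(amortized_unfairness(rankings, np.array([1.0, 1.0, 1.0])))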

    Gainsharing and Mutual Monitoring: A Combined Agency-Procedural Justice Interpretation

    This study examines the behavioral consequences of gainsharing using a combined theoretical framework that includes elements of agency and procedural justice theory. The hypothesis tested is that gainsharing as a collective form of incentive alignment results in increased mutual monitoring among agents (employees) when the plan is perceived to be procedurally fair. The hypothesis was supported in two separate firms using a quasi-experimental field study. The implications of the study for future extensions of agency theory to examine intraorganizational phenomena are discussed.

    Algorithmic fairness and structural injustice: Insights from Feminist Political Philosophy

    Data-driven predictive algorithms are widely used to automate and guide high-stakes decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- ideals of distributive justice, as formulated by political and legal philosophers. The perspectives of feminist political philosophers on social justice, by contrast, have been largely neglected. Some feminist philosophers have criticized the local scope of the paradigm of distributive justice and have proposed corrective amendments to surmount its limitations. The present paper brings some key insights of feminist political philosophy to algorithmic fairness. The paper has three goals. First, I show that algorithmic fairness does not accommodate structural injustices in its current scope. Second, I defend the relevance of structural injustices -- as pioneered in the contemporary philosophical literature by Iris Marion Young -- to algorithmic fairness. Third, I take some steps in developing the paradigm of 'responsible algorithmic fairness' to correct for errors in the current scope and implementation of algorithmic fairness. I close with some reflections on directions for future research.

    Implementation Considerations for Mitigating Bias in Supervised Machine Learning

    Machine Learning (ML) is an important component of computer science and a mainstream way of making sense of large amounts of data. Although the technology is establishing new possibilities in different fields, there are also problems to consider, one of which is bias. Because ML algorithms reason inductively when building mathematical models, the predictions and trends found by the models will never necessarily be true, only more or less probable. Knowing this, it is unreasonable for us to expect the deductive application of these models to ever be fully unbiased. Therefore, it is important that we set expectations for ML that account for the limitations of reality. The current conversation about ML concerns how and when to implement the technology so as to mitigate the effect of bias on its results. This thesis suggests that the question of “whether” should be addressed first. We tackle the issue of bias from the standpoint of justice and fairness in ML, developing a framework tasked with determining whether the implementation of a specific ML model is warranted. We accomplish this by emphasizing the liberal values that drive our definitions of societal fairness and justice, such as the separateness of persons, moral evaluation, freedom and understanding of choice, and accountability for wrongdoings.

    “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocations

    Funding for open access publishing: Universidad de Granada/CBUA. This research is funded by the project “Detección y eliminación de sesgos en algoritmos de triaje y localización para la COVID-19” of the call Ayudas Fundación BBVA a Equipos de Investigación Científica SARS-CoV-2 y COVID-19, en el área de Humanidades. JR also thanks a La Caixa Foundation INPhINIT Retaining Fellowship (LCF/BQ/DR20/11790005). DR-A thanks the funding of the Spanish Research Agency (codes FFI2017-88913-P and PID2020-118729RB-I00). IPJ also thanks the funding of the Spanish Research Agency (code PID2019-105422GB-I00). The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps to maximize patients’ benefits and optimizes limited resources. However, we claim that the opaqueness of the algorithmic black box and its absence of explainability threaten core commitments of procedural fairness such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of critical medical resources in which the lack of explainability in AI-based allocation algorithms is procedurally unfair. Finally, we provide a number of ethical recommendations for when considering the use of unexplainable algorithms in the distribution of health-related resources.

    The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default

    In recent years fairness in machine learning (ML) has emerged as a highly active area of research and development. Most define fairness in simple terms, where fairness means reducing gaps in performance or outcomes between demographic groups while preserving as much of the accuracy of the original system as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off, or by bringing better performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. This paper examines the causes and prevalence of levelling down across fairML, and explores possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. We propose a first step towards substantive equality in fairML: "levelling up" systems by design through enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field and to push future discussion more towards opportunities for substantive equality and away from strict egalitarianism by default. N.B. Shortened abstract; see paper for full abstract.
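
    The following is a minimal illustrative sketch of a "minimum rate constraint" in the levelling-up spirit described in this abstract, not the paper's implementation: instead of equalising groups by degrading the better-off, a minimum acceptable per-group rate is enforced and any group falling below it is flagged. The choice of true positive rate as the metric, the 0.8 threshold, and the function names are assumptions made for illustration only.

    import numpy as np

    def group_true_positive_rates(y_true, y_pred, groups):
        # Per-group true positive rate: P(y_pred = 1 | y_true = 1, group = g).
        rates = {}
        for g in np.unique(groups):
            positives = (groups == g) & (y_true == 1)
            rates[g] = y_pred[positives].mean() if positives.any() else float("nan")
        return rates

    def below_minimum_rate(y_true, y_pred, groups, min_tpr=0.8):
        # Groups whose rate falls below the minimum acceptable threshold, i.e. the
        # groups the system would have to be "levelled up" for before deployment.
        rates = group_true_positive_rates(y_true, y_pred, groups)
        return {g: r for g, r in rates.items() if r < min_tpr}

    # Example with assumed toy data: both groups fall short of the 0.8 threshold,
    # so the constraint is violated and neither group may be traded off downwards.
    y_true = np.array([1, 1, 1, 1, 0, 1, 1, 0])
    y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 1])
    groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    print(below_minimum_rate(y_true, y_pred, groups))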

    Machine Performance and Human Failure: How Shall We Regulate Autonomous Machines?
