
    Algorithmic Jim Crow

    This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.

    Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport

    Increasingly, discrimination by algorithms is perceived as a societal and legal problem. As a response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. This paper proposes the Continuous Fairness Algorithm (CFAθ), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate “worldviews” on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of “we're all equal” (WAE) and “what you see is what you get” (WYSIWYG) proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision-maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., multi-dimensional discrimination against certain groups on grounds of several criteria. We discuss three main examples (credit applications, college admissions, insurance contracts) and map out the legal and policy implications of our approach. The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence. Finally, we evaluate our model experimentally.
    Comment: Vastly extended new version, now including computational experiments
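
    The following is a minimal illustrative sketch (not the authors' code) of the idea described in this abstract: repair each group's score distribution toward the Wasserstein barycenter of the group-conditional distributions, with a parameter θ interpolating between “what you see is what you get” (θ = 0, no repair) and “we're all equal” (θ = 1, full repair). The function name, the one-dimensional quantile construction, and the synthetic data are assumptions for illustration only.

import numpy as np

def cfa_theta_repair(scores, groups, theta, n_quantiles=100):
    """Push each score toward the barycenter of the group-conditional
    score distributions; theta in [0, 1] controls the degree of repair."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    group_ids = np.unique(groups)
    # Empirical quantile function (inverse CDF) of each group's scores.
    group_q = {g: np.quantile(scores[groups == g], qs) for g in group_ids}
    # In one dimension, the Wasserstein barycenter's quantile function is
    # the weighted average of the groups' quantile functions.
    weights = np.array([np.mean(groups == g) for g in group_ids])
    barycenter_q = np.sum([w * group_q[g] for g, w in zip(group_ids, weights)], axis=0)
    repaired = np.empty_like(scores, dtype=float)
    for g in group_ids:
        mask = groups == g
        # Rank of each score within its own group, then the barycenter score
        # at the same rank (the "fully repaired" score for that individual).
        ranks = np.interp(scores[mask], group_q[g], qs)
        target = np.interp(ranks, qs, barycenter_q)
        # theta interpolates between the original and the fully repaired score.
        repaired[mask] = (1.0 - theta) * scores[mask] + theta * target
    return repaired

# Example: two groups whose score distributions are shifted against each other.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.4, 0.1, 500), rng.normal(0.6, 0.1, 500)])
groups = np.array([0] * 500 + [1] * 500)
half_repaired = cfa_theta_repair(scores, groups, theta=0.5)

    Because the scores here are one-dimensional, the barycenter reduces to averaging quantile functions, so no general optimal-transport solver is needed for this sketch; the paper's multi-dimensional, intersectional setting requires the full machinery.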

    What Europe Knows and Thinks About Algorithms: Results of a Representative Survey. Bertelsmann Stiftung eupinions, February 2019

    We live in an algorithmic world. Day by day, each of us is affected by decisions that algorithms make for and about us – generally without us being aware of or consciously perceiving this. Personalized advertisements in social media, the invitation to a job interview, the assessment of our creditworthiness – in all these cases, algorithms already play a significant role, and their importance is growing day by day. The algorithmic revolution in our daily lives undoubtedly brings with it great opportunities. Algorithms are masters at handling complexity. They can manage huge amounts of data quickly and efficiently, processing it consistently every time. Where humans reach their cognitive limits, let their decisions be swayed by the day's events or feelings, or fall back on existing prejudices, algorithmic systems can be used to benefit society. For example, according to a study by the Expert Council of German Foundations on Integration and Migration, automotive mechatronic engineers with Turkish names must submit about 50 percent more applications than candidates with German names before being invited to an in-person job interview (Schneider, Yemane and Weinmann 2014). If an algorithm were to make this decision, such discrimination could be prevented. However, automated decisions also carry significant risks: algorithms can reproduce existing societal discrimination and reinforce social inequality, for example, if computers, using historical data as a basis, identify the male gender as a labor-market success factor and thus systematically discard job applications from women, as recently took place at Amazon (Nickel 2018).

    Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

    As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior works on algorithmic fairness normatively prescribe how fair decisions ought to be made. In contrast, here, we descriptively survey users about how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict whether the person will judge the use of the feature as fair. Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreements in people's fairness judgments. We identify root causes of the disagreements and note possible pathways to resolve them.
    Comment: To appear in the Proceedings of the Web Conference (WWW 2018). Code available at https://fate-computing.mpi-sws.org/procedural_fairness
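
    The following is a minimal sketch (not the authors' released code, which is at the URL above) of the prediction setup described in this abstract: given a respondent's ratings of eight latent properties of a feature, predict whether that respondent judges using the feature as fair. Only relevance, volitionality and reliability are named in the abstract; the remaining property names, the Likert-scale encoding, and the synthetic data below are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Three properties are named in the abstract; the rest are placeholders.
PROPERTIES = [
    "relevance", "volitionality", "reliability",
    "property_4", "property_5", "property_6", "property_7", "property_8",
]

rng = np.random.default_rng(42)
n_respondents = 576  # matches the survey size reported in the abstract

# Synthetic stand-in for survey responses: each property rated on a 1-7 scale.
X = rng.integers(1, 8, size=(n_respondents, len(PROPERTIES)))
# Synthetic fairness judgments, loosely driven by relevance and reliability.
y = (X[:, 0] + X[:, 2] + rng.normal(0.0, 1.5, n_respondents) > 8).astype(int)

# Predict the fair/unfair judgment from the eight property ratings.
model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {accuracy:.2f}")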