Algorithmic Jim Crow
This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.
Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport
Increasingly, discrimination by algorithms is perceived as a societal and
legal problem. As a response, a number of criteria for implementing algorithmic
fairness in machine learning have been developed in the literature. This paper
proposes the Continuous Fairness Algorithm (CFA), which enables a
continuous interpolation between different fairness definitions. More
specifically, we make three main contributions to the existing literature.
First, our approach allows the decision maker to continuously vary between
specific concepts of individual and group fairness. As a consequence, the
algorithm enables the decision maker to adopt intermediate "worldviews" on
the degree of discrimination encoded in algorithmic processes, adding nuance to
the extreme cases of "we're all equal" (WAE) and "what you see is what you
get" (WYSIWYG) proposed so far in the literature. Second, we use optimal
transport theory, and specifically the concept of the barycenter, to maximize
decision maker utility under the chosen fairness constraints. Third, the
algorithm is able to handle cases of intersectionality, i.e., of
multi-dimensional discrimination of certain groups on grounds of several
criteria. We discuss three main examples (credit applications; college
admissions; insurance contracts) and map out the legal and policy implications
of our approach. The explicit formalization of the trade-off between individual
and group fairness allows this post-processing approach to be tailored to
different situational contexts in which one or the other fairness criterion may
take precedence. Finally, we evaluate our model experimentally. (Comment: vastly extended new version, now including computational experiments.)
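For one-dimensional scores, the Wasserstein barycenter used in the CFA has a simple quantile-averaging form, which makes the interpolation between the WYSIWYG and WAE worldviews easy to sketch. The Python snippet below is a minimal illustration of that idea, not the authors' implementation; the function name, the quantile grid, and the interpolation parameter theta are all illustrative assumptions.

```python
# Minimal sketch of barycenter-based score post-processing in the spirit of the
# CFA described above (not the authors' code). `theta` interpolates between the
# original scores (theta = 0, a WYSIWYG-leaning worldview) and a full mapping of
# every group onto the common barycenter (theta = 1, a WAE worldview).
import numpy as np

def barycenter_repair(scores_by_group, theta):
    groups = list(scores_by_group)
    qs = np.linspace(0.0, 1.0, 101)                       # common quantile grid
    quantiles = {g: np.quantile(scores_by_group[g], qs) for g in groups}
    sizes = np.array([len(scores_by_group[g]) for g in groups], dtype=float)
    weights = sizes / sizes.sum()
    # In 1-D, the 2-Wasserstein barycenter's quantile function is the weighted
    # average of the groups' quantile functions.
    bary = sum(w * quantiles[g] for w, g in zip(weights, groups))
    repaired = {}
    for g in groups:
        s = np.asarray(scores_by_group[g], dtype=float)
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        target = np.interp(ranks, qs, bary)               # image under the barycenter map
        repaired[g] = (1.0 - theta) * s + theta * target
    return repaired

# Example: two groups with systematically shifted score distributions.
rng = np.random.default_rng(0)
scores = {"A": rng.normal(0.6, 0.1, 500), "B": rng.normal(0.4, 0.1, 300)}
half_repaired = barycenter_repair(scores, theta=0.5)      # an intermediate worldview
```

With theta strictly between 0 and 1, the decision maker trades off how much of each group's original score information is retained against how closely the group distributions are aligned.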
What Europe Knows and Thinks About Algorithms: Results of a Representative Survey. Bertelsmann Stiftung eupinions, February 2019
We live in an algorithmic world. Day by day, each of us is affected by decisions that algorithms make for and about
us – generally without us being aware of or consciously perceiving this. Personalized advertisements in social
media, the invitation to a job interview, the assessment of our creditworthiness – in all these cases, algorithms
already play a significant role – and their importance is growing, day by day.
The algorithmic revolution in our daily lives undoubtedly brings with it great opportunities. Algorithms are masters
at handling complexity. They can manage huge amounts of data quickly and efficiently, processing it consistently
every time. Where humans reach their cognitive limits, find themselves making decisions influenced by the day’s
events or feelings, or let themselves be influenced by existing prejudices, algorithmic systems can be used to
benefit society. For example, according to a study by the Expert Council of German Foundations on Integration and
Migration, automotive mechatronic engineers with Turkish names must submit about 50 percent more applications
than candidates with German names before being invited to an in-person job interview (Schneider, Yemane and
Weinmann 2014). If an algorithm were to make this decision, such discrimination could be prevented. However,
automated decisions also carry significant risks: Algorithms can reproduce existing societal discrimination and
reinforce social inequality, for example, if computers, using historical data as a basis, identify the male gender as
a labor-market success factor, and thus systematically discard job applications from women, as recently took place
at Amazon (Nickel 2018).
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
As algorithms are increasingly used to make important decisions that affect
human lives, ranging from social benefit assignment to predicting risk of
criminal recidivism, concerns have been raised about the fairness of
algorithmic decision making. Most prior works on algorithmic fairness
normatively prescribe how fair decisions ought to be made. In contrast, here,
we descriptively survey users for how they perceive and reason about fairness
in algorithmic decision making.
A key contribution of this work is the framework we propose to understand why
people perceive certain features as fair or unfair to use in algorithms.
Our framework identifies eight properties of features, such as relevance,
volitionality and reliability, as latent considerations that inform people's
moral judgments about the fairness of feature use in decision-making
algorithms. We validate our framework through a series of scenario-based
surveys with 576 people. We find that, based on a person's assessment of the
eight latent properties of a feature in our exemplar scenario, we can
accurately (> 85%) predict if the person will judge the use of the feature as
fair.
Our findings have important implications. At a high level, we show that
people's unfairness concerns are multi-dimensional and argue that future
studies need to address unfairness concerns beyond discrimination. At a
low level, we find considerable disagreement in people's fairness judgments.
We identify root causes of the disagreements, and note possible pathways to
resolve them. (Comment: to appear in the Proceedings of the Web Conference (WWW 2018); code
available at https://fate-computing.mpi-sws.org/procedural_fairness)
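As a rough illustration of the prediction task described in this abstract, the sketch below fits a simple classifier to ratings of eight feature properties. Only relevance, volitionality, and reliability are named in the abstract, so the remaining property names, the Likert-style encoding, the synthetic labels, and the choice of logistic regression are assumptions rather than the authors' setup.

```python
# Hedged sketch: predicting whether a respondent judges a feature's use as fair
# from ratings of eight latent properties. Property names beyond the three named
# in the abstract are assumed, and the data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

PROPERTIES = ["relevance", "volitionality", "reliability", "privacy",
              "causes_outcome", "causes_vicious_cycle", "causes_disparity",
              "caused_by_sensitive_group"]                # partly assumed names

rng = np.random.default_rng(0)
n = 576                                                   # respondents, as in the abstract
X = rng.integers(1, 8, size=(n, len(PROPERTIES)))         # 7-point ratings (synthetic)
y = ((X[:, 0] + X[:, 2]) >= 9).astype(int)                # placeholder fairness judgments

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy on synthetic data: {acc:.2f}")
```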
Gender Equality in Virtual Work I.: Risks
This article focuses on gender equality in virtual work, taking special account of the regulatory
challenges. It contributes to broader debates on the workers' situation in the sharing economy in two
ways. Firstly, it makes an inaugural attempt to evaluate the implications of the new forms of work in
the sharing economy for female virtual workers, looking at the issue of equal treatment. Secondly, it
offers preliminary suggestions regarding a future regulation to improve equality between genders in
virtual work.
The paper is divided into four main parts. The first section defines "virtual work", classifies its two
basic forms and emphasises the specific traits of this form of work to demonstrate the need for special
protection against discrimination. Secondly, the paper identifies the possible beneficial and adverse
implications of virtual work for female workers and gender equality. Thirdly, the paper provides a
summary of the gender equality law of the European Union that serves as a point of reference when
speaking about antidiscrimination law. Section 4 offers three normative perspectives and suggestions
as to how to enhance gender equality in virtual work. Finally, the paper concludes.
The first part of this two-part paper concentrates on the risks of virtual work for equal treatment,
while the second part will address the regulatory options and suggestions.
State of the Art in Fair ML: From Moral Philosophy and Legislation to Fair Classifiers
Machine learning is becoming an ever-present part of our lives as many
decisions, e.g. whether to grant credit, are no longer made by humans but by
machine learning algorithms. However, those decisions are often unfair and
discriminate against individuals belonging to protected groups based on race
or gender. With the recent General Data Protection Regulation (GDPR) coming
into effect, new awareness has been raised for such issues, and with computer
scientists having such a large impact on people's lives, it is necessary that
action be taken to discover and prevent discrimination. This work aims to
give an introduction to discrimination, the legislative foundations to counter
it, and strategies to detect and prevent machine learning algorithms from
showing such behavior.
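One common detection strategy covered by surveys of this kind is to check a trained classifier's decisions for demographic parity, for instance via the "four-fifths rule" used in disparate-impact analysis. The sketch below shows such a check; the function and variable names are illustrative and nothing here is prescribed by this particular work.

```python
# Minimal sketch of a demographic-parity / disparate-impact check for binary
# decisions (illustrative only; not taken from the surveyed work).
import numpy as np

def demographic_parity_report(y_pred, group):
    """Positive-decision rate per group, the largest gap between groups, and the
    min/max ratio compared against the 0.8 threshold of the four-fifths rule."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    lo, hi = min(rates.values()), max(rates.values())
    return {"selection_rates": rates,
            "parity_difference": hi - lo,
            "disparate_impact_ratio": lo / hi if hi > 0 else float("nan")}

# Example: loan decisions for two groups.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(demographic_parity_report(decisions, groups))
```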
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
The nascent field of fair machine learning aims to ensure that decisions
guided by algorithms are equitable. Over the last several years, three formal
definitions of fairness have gained prominence: (1) anti-classification,
meaning that protected attributes (like race, gender, and their proxies) are
not explicitly used to make decisions; (2) classification parity, meaning that
common measures of predictive performance (e.g., false positive and false
negative rates) are equal across groups defined by the protected attributes;
and (3) calibration, meaning that conditional on risk estimates, outcomes are
independent of protected attributes. Here we show that all three of these
fairness definitions suffer from significant statistical limitations. Requiring
anti-classification or classification parity can, perversely, harm the very
groups they were designed to protect; and calibration, though generally
desirable, provides little guarantee that decisions are equitable. In contrast
to these formal fairness criteria, we argue that it is often preferable to
treat similarly risky people similarly, based on the most statistically
accurate estimates of risk that one can produce. Such a strategy, while not
universally applicable, often aligns well with policy objectives; notably, this
strategy will typically violate both anti-classification and classification
parity. In practice, it requires significant effort to construct suitable risk
estimates. One must carefully define and measure the targets of prediction to
avoid entrenching biases in the data. But, importantly, one cannot generally
address these difficulties by requiring that algorithms satisfy popular
mathematical formalizations of fairness. By highlighting these challenges in
the foundation of fair machine learning, we hope to help researchers and
practitioners productively advance the area.
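Of the three definitions above, classification parity and calibration can be checked directly from a classifier's predictions and risk scores (anti-classification is instead a constraint on which inputs the model may use). The sketch below shows one way such checks could look; the binning scheme and the 0.5 decision threshold are illustrative assumptions, not the authors' procedure.

```python
# Hedged sketch: empirical checks for classification parity (definition 2) and
# calibration (definition 3) for a binary classifier with risk scores.
import numpy as np

def classification_parity(y_true, y_pred, group):
    """False positive and false negative rates per group."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    out = {}
    for g in np.unique(group):
        m = group == g
        neg, pos = y_true[m] == 0, y_true[m] == 1
        out[g] = {"FPR": y_pred[m][neg].mean() if neg.any() else float("nan"),
                  "FNR": (1 - y_pred[m][pos]).mean() if pos.any() else float("nan")}
    return out

def calibration_by_group(y_true, risk, group, bins=10):
    """Observed outcome rate per risk bin and group; calibration holds if, within
    each bin, the rates are similar across groups."""
    y_true, risk, group = map(np.asarray, (y_true, risk, group))
    edges = np.linspace(0, 1, bins + 1)
    out = {}
    for g in np.unique(group):
        m = group == g
        idx = np.clip(np.digitize(risk[m], edges) - 1, 0, bins - 1)
        out[g] = {b: y_true[m][idx == b].mean() for b in range(bins) if (idx == b).any()}
    return out

# Example with synthetic, roughly calibrated risk scores and a 0.5 threshold.
rng = np.random.default_rng(1)
risk = rng.uniform(size=200)
y_true = (rng.uniform(size=200) < risk).astype(int)
group = rng.choice(["A", "B"], size=200)
y_pred = (risk >= 0.5).astype(int)
print(classification_parity(y_true, y_pred, group))
print(calibration_by_group(y_true, risk, group))
```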
