146 research outputs found
iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
People are rated and ranked for the purpose of algorithmic decision making
in an increasing number of applications, typically based on machine learning.
Research on how to incorporate fairness into such tasks has prevalently pursued
the paradigm of group fairness: giving adequate success rates to specifically
protected groups. In contrast, the alternative paradigm of individual fairness
has received relatively little attention, and this paper advances this less
explored direction. The paper introduces a method for probabilistically mapping
user records into a low-rank representation that reconciles individual fairness
and the utility of classifiers and rankings in downstream applications. Our
notion of individual fairness requires that users who are similar in all
task-relevant attributes, such as job qualification, while disregarding all
potentially discriminating attributes, such as gender, should have similar
outcomes. We demonstrate the versatility of our method by applying it to
classification and learning-to-rank tasks on a variety of real-world datasets.
Our experiments show substantial improvements over the best prior work for this
setting.
Comment: Accepted at ICDE 2019. Please cite the ICDE 2019 proceedings version.
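The individual-fairness requirement above can be read as a Lipschitz-style
pairwise condition. Below is a minimal sketch of that check only, not iFair's
probabilistic low-rank mapping itself; the distance metric, similarity
threshold, and Lipschitz constant are illustrative assumptions.

```python
import numpy as np

def individual_fairness_violations(X_relevant, outcomes, sim_eps=0.1, lipschitz=1.0):
    """Flag pairs of users who are close on task-relevant attributes but
    receive dissimilar outcomes (a Lipschitz-style pairwise check).

    X_relevant: (n, d) array of task-relevant features only; protected
                attributes such as gender are deliberately excluded.
    outcomes:   (n,) array of model scores or decisions.
    """
    X_relevant = np.asarray(X_relevant, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    violations = []
    n = len(outcomes)
    for i in range(n):
        for j in range(i + 1, n):
            d_users = np.linalg.norm(X_relevant[i] - X_relevant[j])
            d_outcomes = abs(outcomes[i] - outcomes[j])
            # Similar users should receive similar outcomes.
            if d_users < sim_eps and d_outcomes > lipschitz * d_users:
                violations.append((i, j))
    return violations
```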
Equity of Attention: Amortizing Individual Fairness in Rankings
Rankings of people and items are at the heart of selection-making,
match-making, and recommender systems, ranging from employment sites to sharing
economy platforms. As ranking positions influence the amount of attention the
ranked subjects receive, biases in rankings can lead to unfair distribution of
opportunities and resources, such as jobs or income.
This paper proposes new measures and mechanisms to quantify and mitigate
unfairness from a bias inherent to all rankings, namely, the position bias,
which leads to disproportionately less attention being paid to low-ranked
subjects. Our approach differs from recent fair ranking approaches in two
important ways. First, existing works measure unfairness at the level of
subject groups while our measures capture unfairness at the level of individual
subjects, and as such subsume group unfairness. Second, as no single ranking
can achieve individual attention fairness, we propose a novel mechanism that
achieves amortized fairness, where attention accumulated across a series of
rankings is proportional to accumulated relevance.
We formulate the challenge of achieving amortized individual fairness subject
to constraints on ranking quality as an online optimization problem and show
that it can be solved as an integer linear program. Our experimental evaluation
reveals that unfair attention distribution in rankings can be substantial, and
demonstrates that our method can improve individual fairness while retaining
high ranking quality.
Comment: Accepted to SIGIR 2018.
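A minimal sketch of the amortized idea, assuming a known position-bias curve:
each round is solved as an assignment problem (the special case of the paper's
ILP without the ranking-quality constraint), steering every subject's
accumulated attention toward its accumulated relevance. The cost function and
function name are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def rerank_one_round(acc_attention, acc_relevance, relevance, pos_attention):
    """One round of amortized re-ranking: place each subject so that its
    accumulated attention stays close to its accumulated relevance.

    acc_attention, acc_relevance: float arrays carried across rounds.
    relevance:     this round's relevance scores, one per subject.
    pos_attention: attention weight of each rank position (position bias),
                   e.g. 1 / log2(rank + 2).
    """
    # cost[i, p]: attention-relevance gap if subject i gets position p.
    cost = np.abs(
        (acc_attention[:, None] + pos_attention[None, :])
        - (acc_relevance + relevance)[:, None]
    )
    _, positions = linear_sum_assignment(cost)  # ILP reduces to assignment here
    acc_attention += pos_attention[positions]
    acc_relevance += relevance
    return positions  # positions[i] = rank slot assigned to subject i
```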
A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity
We map the recently proposed notions of algorithmic fairness to economic
models of equality of opportunity (EOP)---an extensively studied ideal of
fairness in political philosophy. We formally show that, through our conceptual
mapping, many existing definitions of algorithmic fairness, such as predictive
value parity and equality of odds, can be interpreted as special cases of EOP.
In this respect, our work serves as a unifying moral framework for
understanding existing notions of algorithmic fairness. Most importantly, this
framework allows us to explicitly spell out the moral assumptions underlying
each notion of fairness, and interpret recent fairness impossibility results in
a new light. Last but not least, inspired by luck egalitarian models of EOP,
we propose a new family of measures for algorithmic fairness. We illustrate our
proposal empirically and show that employing a measure of algorithmic
(un)fairness when its underlying moral assumptions are not satisfied can have
devastating consequences for the disadvantaged group's welfare.
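For orientation, the Roemer-style formalization of EOP that such mappings
build on says that, holding effort fixed, the distribution of advantage should
not depend on circumstance. A sketch of that standard condition, not the
paper's own mapping:

```latex
% Equality of opportunity (Roemer-style): advantage a, effort e,
% circumstance c (e.g., membership in a protected group).
F(a \mid e, c) = F(a \mid e, c')
\quad \text{for all efforts } e \text{ and circumstances } c, c'.
```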
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
We draw attention to an important, yet largely overlooked aspect of
evaluating fairness for automated decision making systems---namely risk and
welfare considerations. Our proposed family of measures corresponds to the
long-established formulations of cardinal social welfare in economics, and is
justified by the Rawlsian conception of fairness behind a veil of ignorance.
The convex formulation of our welfare-based measures of fairness allows us to
integrate them as a constraint into any convex loss minimization pipeline. Our
empirical analysis reveals interesting trade-offs between our proposal and (a)
prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of
individual fairness. Furthermore and perhaps most importantly, our work
provides both heuristic justification and empirical evidence suggesting that a
lower-bound on our measures often leads to bounded inequality in algorithmic
outcomes; hence presenting the first computationally feasible mechanism for
bounding individual-level inequality.
Comment: Conference: Thirty-second Conference on Neural Information Processing
Systems (NIPS 2018).
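As a rough illustration of the welfare family in question, a power-mean of
individual benefits behaves as the abstract describes: it is concave (hence a
lower bound on it is a convex constraint) and corresponds to risk-averse
expected utility behind a veil of ignorance. The parameter rho and the benefit
vector are assumptions; the paper's exact measures differ in detail.

```python
import numpy as np

def cardinal_welfare(benefits, rho=0.5):
    """Power-mean social welfare over non-negative individual benefits.

    For rho < 1 this functional is concave, so requiring it to stay above
    a threshold is a convex constraint that can be added to any convex
    loss-minimization pipeline. Behind a veil of ignorance, it is the
    risk-averse expected utility of receiving each individual's benefit
    with equal probability.
    """
    benefits = np.asarray(benefits, dtype=float)
    return np.mean(benefits ** rho) ** (1.0 / rho)
```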
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
As algorithms are increasingly used to make important decisions that affect
human lives, ranging from social benefit assignment to predicting risk of
criminal recidivism, concerns have been raised about the fairness of
algorithmic decision making. Most prior works on algorithmic fairness
normatively prescribe how fair decisions ought to be made. In contrast, here,
we descriptively survey users for how they perceive and reason about fairness
in algorithmic decision making.
A key contribution of this work is the framework we propose to understand why
people perceive certain features as fair or unfair to be used in algorithms.
Our framework identifies eight properties of features, such as relevance,
volitionality and reliability, as latent considerations that inform people's
moral judgments about the fairness of feature use in decision-making
algorithms. We validate our framework through a series of scenario-based
surveys with 576 people. We find that, based on a person's assessment of the
eight latent properties of a feature in our exemplar scenario, we can
accurately (> 85%) predict if the person will judge the use of the feature as
fair.
Our findings have important implications. At a high-level, we show that
people's unfairness concerns are multi-dimensional and argue that future
studies need to address unfairness concerns beyond discrimination. At a
low-level, we find considerable disagreements in people's fairness judgments.
We identify root causes of the disagreements, and note possible pathways to
resolve them.
Comment: To appear in the Proceedings of the Web Conference (WWW 2018). Code
available at https://fate-computing.mpi-sws.org/procedural_fairness
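The prediction task in the study has the shape of a small supervised-learning
problem: eight property ratings in, a binary fairness judgment out. A sketch
with placeholder data only; the real features and labels come from the survey,
and the paper does not prescribe this particular model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per judgment: a respondent's ratings of a feature on the eight
# latent properties (relevance, volitionality, reliability, ...).
# Placeholder data; the real inputs come from the 576-person survey.
X = rng.random((576, 8))
y = (X[:, :3].mean(axis=1) > 0.5).astype(int)  # placeholder fair/unfair label

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=5).mean())  # the paper reports > 85%
```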
Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations
To help their users to discover important items at a particular time, major
websites like Twitter, Yelp, TripAdvisor or NYTimes provide Top-K
recommendations (e.g., 10 Trending Topics, Top 5 Hotels in Paris or 10 Most
Viewed News Stories), which rely on crowdsourced popularity signals to select
the items. However, different sections of a crowd may have different
preferences, and there is a large silent majority who do not explicitly express
their opinion. Also, the crowd often consists of actors like bots, spammers, or
people running orchestrated campaigns. Recommendation algorithms today largely
do not consider such nuances, hence are vulnerable to strategic manipulation by
small but hyper-active user groups.
To fairly aggregate the preferences of all users while recommending top-K
items, we borrow ideas from prior research on social choice theory, and
identify a voting mechanism called Single Transferable Vote (STV) as having
many of the fairness properties we desire in top-K item (s)elections. We
develop an innovative mechanism to attribute preferences of the silent
majority, which also makes STV completely operational. We show the generalizability of our
approach by implementing it on two different real-world datasets. Through
extensive experimentation and comparison with state-of-the-art techniques, we
show that our proposed approach provides maximum user satisfaction, and cuts
down drastically on items disliked by most but hyper-actively promoted by a few
users.
Comment: In the proceedings of the Conference on Fairness, Accountability, and
Transparency (FAT* '19). Please cite the conference version.
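For reference, the core of an STV count for a top-K (s)election looks roughly
as follows. This is a minimal sketch using the Droop quota with simplified
whole-ballot transfers; the paper's exact STV variant and its silent-majority
preference attribution are not reproduced here.

```python
from collections import Counter

def stv_top_k(ballots, k):
    """Elect k items by a simplified Single Transferable Vote.

    ballots: list of preference lists, most- to least-preferred item.
    """
    quota = len(ballots) // (k + 1) + 1  # Droop quota
    elected, eliminated = [], set()
    while len(elected) < k:
        # Count each ballot for its highest-ranked still-active item.
        counts = Counter()
        for ballot in ballots:
            for item in ballot:
                if item not in eliminated and item not in elected:
                    counts[item] += 1
                    break
        if not counts:
            break  # no active items left
        top, votes = counts.most_common(1)[0]
        if votes >= quota or len(counts) <= k - len(elected):
            elected.append(top)  # quota reached, or seats must be filled
        else:
            eliminated.add(min(counts, key=counts.get))  # drop weakest item
    return elected
```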
Characterizing Information Diets of Social Media Users
With the widespread adoption of social media sites like Twitter and Facebook,
there has been a shift in the way information is produced and consumed.
Earlier, the only producers of information were traditional news organizations,
which broadcast the same carefully-edited information to all consumers over
mass media channels. Now, in online social media, any user can be a
producer of information, and every user selects which other users she connects
to, thereby choosing the information she consumes. Moreover, the personalized
recommendations that most social media sites provide also contribute towards
the information consumed by individual users. In this work, we define a concept
of information diet -- which is the topical distribution of a given set of
information items (e.g., tweets) -- to characterize the information produced
and consumed by various types of users on the popular Twitter social media platform. At
a high level, we find that (i) popular users mostly produce very specialized
diets focusing on only a few topics; in fact, news organizations (e.g.,
NYTimes) produce much more focused diets on social media as compared to their
mass media diets, (ii) most users' consumption diets are primarily focused
towards one or two topics of their interest, and (iii) the personalized
recommendations provided by Twitter help to mitigate some of the topical
imbalances in the users' consumption diets, by adding information on diverse
topics apart from the users' primary topics of interest.
Comment: In Proceedings of the International AAAI Conference on Web and Social
Media (ICWSM), Oxford, UK, May 2015.
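An information diet, as defined above, reduces to a normalized topic histogram
over a set of items. A minimal sketch, where topic_of stands in for whatever
topic-inference step assigns a tweet to a topic:

```python
from collections import Counter

def information_diet(items, topic_of):
    """Topical distribution ('information diet') of a set of items,
    e.g. tweets produced or consumed by a user.

    topic_of: callable mapping an item to its inferred topic; the topic
    inference itself is out of scope here.
    """
    counts = Counter(topic_of(item) for item in items)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}
```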
Incremental Fairness in Two-Sided Market Platforms: On Smoothly Updating Recommendations
Major online platforms today can be thought of as two-sided markets with
producers and customers of goods and services. There have been concerns that
over-emphasis on customer satisfaction by the platforms may affect the
well-being of the producers. To counter such issues, a few recent works have
attempted to incorporate fairness for the producers. However, these studies
have overlooked an important issue in such platforms -- to supposedly improve
customer utility, the underlying algorithms are frequently updated, causing
abrupt changes in the exposure of producers. In this work, we focus on the
fairness issues arising out of such frequent updates, and argue for incremental
updates of the platform algorithms so that the producers have enough time to
adjust (both logistically and mentally) to the change. However, naive
incremental updates may become unfair to the customers. Thus, focusing on
recommendations deployed on two-sided platforms, we formulate an ILP-based
online optimization to deploy changes incrementally in n steps, where we can
ensure smooth transition of the exposure of items while guaranteeing a minimum
utility for every customer. Evaluations over multiple real world datasets show
that our proposed mechanism for platform updates can be efficient and fair to
both the producers and the customers in two-sided platforms.
Comment: To appear in the Proceedings of the 34th AAAI Conference on Artificial
Intelligence (AAAI), New York, USA, Feb 2020.
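A sketch of the smooth-transition schedule only, under the simplifying
assumption of plain linear interpolation of exposures; the paper's ILP
additionally guarantees a minimum utility for every customer at each step,
which this sketch does not enforce.

```python
import numpy as np

def incremental_update_schedule(old_exposure, new_exposure, n_steps):
    """Deploy an algorithm update in n smooth steps instead of abruptly.

    Returns intermediate exposure vectors in which no producer's exposure
    moves by more than 1/n_steps of its total change per step, giving
    producers time to adjust to the new algorithm.
    """
    old = np.asarray(old_exposure, dtype=float)
    new = np.asarray(new_exposure, dtype=float)
    return [old + (new - old) * t / n_steps for t in range(1, n_steps + 1)]
```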
Understanding and Specifying Social Access Control Lists
Online social network (OSN) users upload millions of pieces of content to
share with others every day. While a significant portion of this content is
benign (and is typically shared with all friends or all OSN users), there are
certain pieces of content that are highly privacy sensitive. Sharing such
sensitive content raises significant privacy concerns for users, and it
becomes important for the user to protect this content from being exposed to
the wrong audience. Today, most OSN services provide fine-grained mechanisms
for specifying social access control lists (social ACLs, or SACLs), allowing
users to restrict their sensitive content to a select subset of their friends.
However, it remains unclear how these SACL mechanisms are used today. To
design better privacy management tools for users, we need to first understand
the usage and complexity of SACLs specified by users. In this paper, we
present the first large-scale study of fine-grained privacy preferences of
over 1,000 users on Facebook, providing us with the first ground-truth
information on how users specify SACLs on a social networking service.
Overall, we find that a surprisingly large fraction (17.6%) of content is
shared with SACLs. However, we also find that SACL membership shows little
correlation with either profile information or social network links; as a
result, it is difficult to predict the subset of a user's friends likely to
appear in a SACL. On the flip side, we find that SACLs are often reused,
suggesting that simply making recent SACLs available to users is likely to
significantly reduce the burden of privacy management on users.