Individual Fairness in Hindsight
Since many critical decisions impacting human lives are increasingly being
made by algorithms, it is important to ensure that the treatment of individuals
under such algorithms is demonstrably fair under reasonable notions of
fairness. One compelling notion proposed in the literature is that of
individual fairness (IF), which advocates that similar individuals should be
treated similarly (Dwork et al. 2012). Originally proposed for offline
decisions, this notion does not, however, account for temporal considerations
relevant for online decision-making. In this paper, we extend the notion of IF
to account for the time at which a decision is made, in settings where there
exists a notion of conduciveness of decisions as perceived by the affected
individuals. We introduce two definitions: (i) fairness-across-time (FT) and
(ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF
where treatment of individuals is required to be individually fair relative to
the past as well as the future, while FH requires a one-sided notion of
individual fairness defined relative only to past decisions. We
show that these two definitions can have drastically different implications in
the setting where the principal needs to learn the utility model. Linear regret
relative to optimal individually fair decisions is inevitable under FT for
non-trivial examples. On the other hand, we design a new algorithm: Cautious
Fair Exploration (CaFE), which satisfies FH and achieves sub-linear regret
guarantees for a broad range of settings. We characterize lower bounds showing
that these guarantees are order-optimal in the worst case. FH can thus be
embedded as a primary safeguard against unfair discrimination in algorithmic
deployments, without hindering the ability to make good decisions in the long run.
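The contrast between the two definitions can be sketched as Lipschitz-style checks over a decision stream. This is an illustrative simplification, not the paper's formalism: it assumes scalar decisions where larger values are more conducive, and the metric and data below are hypothetical.

```python
import math

def fair_across_time(stream, d, L=1.0):
    """FT-like symmetric check: every pair of decisions, regardless of
    temporal order, must satisfy |f_i - f_j| <= L * d(x_i, x_j)."""
    for i, (x_i, f_i) in enumerate(stream):
        for x_j, f_j in stream[i + 1:]:
            if abs(f_i - f_j) > L * d(x_i, x_j) + 1e-12:
                return False
    return True

def fair_in_hindsight(stream, d, L=1.0):
    """FH-like one-sided check: a decision may not be much *less*
    conducive than what a similar individual received in the past,
    but treating a later individual better is allowed."""
    for t, (x_t, f_t) in enumerate(stream):
        for x_s, f_s in stream[:t]:
            if f_s - f_t > L * d(x_t, x_s) + 1e-12:
                return False
    return True

# Two similar individuals; the second, arriving later, is treated better.
stream = [((0.10, 0.20), 0.50),
          ((0.12, 0.21), 0.80)]
d = lambda u, v: math.dist(u, v)
print(fair_across_time(stream, d))   # False: symmetric IF is violated
print(fair_in_hindsight(stream, d))  # True: a more generous later decision is fine
```

The sketch shows why FH is compatible with exploration: a cautious learner can start with conservative decisions and grow more generous as it learns, without ever undercutting a similar past individual.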
Designing Fair AI for Managing Employees in Organizations: A Review, Critique, and Design Agenda
Organizations are rapidly deploying artificial intelligence (AI) systems to manage their workers. However, AI has at times been found to be unfair to workers, and unfairness toward workers has been associated with decreased worker effort and increased worker turnover. To avoid such problems, AI systems must be designed to support fairness and to redress instances of unfairness. Despite the attention paid to AI unfairness, there has been no theoretical and systematic approach to developing a design agenda. This paper addresses the issue in three ways. First, we introduce organizational justice theory, three different fairness types (distributive, procedural, interactional), and the frameworks for redressing instances of unfairness (retributive justice, restorative justice). Second, we review the design literature that specifically focuses on issues of AI fairness in organizations. Third, we propose a design agenda for AI fairness in organizations that applies each of the fairness types to organizational scenarios. The paper concludes with implications for future research.
The Price of Local Fairness in Multistage Selection
Trading-off price for data quality to achieve fair online allocation
We consider the problem of online allocation subject to a long-term fairness
penalty. Contrary to existing works, however, we do not assume that the
decision-maker observes the protected attributes -- which is often unrealistic
in practice. Instead, they can purchase data that help estimate them from
sources of different quality; and hence reduce the fairness penalty at some
cost. We model this problem as a multi-armed bandit problem where each arm
corresponds to the choice of a data source, coupled with the online allocation
problem. We propose an algorithm that jointly solves both problems and
prove a bound on its regret. A key difficulty is
that the rewards received by selecting a source are correlated by the fairness
penalty, which leads to a need for randomization (despite a stochastic
setting). Our algorithm takes into account contextual information available
before the source selection, and can adapt to many different fairness notions.
We also show that in some instances, the estimates used can be learned on the
fly.
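The arm-equals-data-source framing can be illustrated with a toy ε-greedy bandit. This is a deliberately naive sketch, not the paper's algorithm (the abstract stresses that randomization beyond such simple strategies is needed); the costs, accuracies, and reward model are invented.

```python
import random

def epsilon_greedy_source_selection(sources, rounds=2000, eps=0.1, seed=0):
    """Toy epsilon-greedy bandit over data sources (illustrative only).

    Each arm is a data source given as (cost, accuracy); the per-round
    reward is a simulated allocation utility minus the purchase cost and
    minus a fairness penalty that shrinks as the protected-attribute
    estimates become more accurate.
    """
    rng = random.Random(seed)
    n = len(sources)
    counts = [0] * n
    means = [0.0] * n
    for _ in range(rounds):
        if rng.random() < eps:
            a = rng.randrange(n)               # explore a random source
        else:
            a = max(range(n), key=lambda i: means[i])  # exploit best so far
        cost, accuracy = sources[a]
        utility = 1.0 + rng.gauss(0, 0.05)     # noisy allocation utility
        penalty = (1.0 - accuracy) * 0.8       # fairness penalty from misestimation
        r = utility - cost - penalty
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
    return max(range(n), key=lambda i: means[i])

# Hypothetical sources: (purchase cost, accuracy of the attribute estimate).
sources = [(0.05, 0.60), (0.20, 0.95), (0.50, 0.99)]
best = epsilon_greedy_source_selection(sources)
```

In this invented setting the mid-priced source wins: the cheap source incurs a large fairness penalty, while the most accurate one costs more than the penalty it saves.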
What-is and How-to for Fairness in Machine Learning: A Survey, Reflection, and Perspective
Algorithmic fairness has attracted increasing attention in the machine
learning community. Various definitions are proposed in the literature, but the
differences and connections among them are not clearly addressed. In this
paper, we review and reflect on various fairness notions previously proposed in
the machine learning literature, and attempt to draw connections to
arguments in moral and political philosophy, especially theories of justice. We
also consider fairness inquiries from a dynamic perspective, and examine the
long-term impact induced by current predictions and decisions. In light of the
differences among these fairness characterizations, we present
a flowchart that encompasses implicit assumptions and expected outcomes of
different types of fairness inquiries on the data generating process, on the
predicted outcome, and on the induced impact, respectively. This paper
demonstrates the importance of matching the mission (which kind of fairness one
would like to enforce) with the means (which spectrum of fairness analysis is of
interest, and what the appropriate analysis scheme is) in order to fulfill the
intended purpose.
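To make the observational end of this spectrum concrete, two widely used group-level criteria, demographic parity and equal opportunity, can be computed directly from predictions. This is a minimal sketch with invented data; the metrics are standard in the fairness literature, not specific to this survey.

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = lambda g: (sum(p for p, a in zip(y_pred, group) if a == g)
                      / sum(1 for a in group if a == g))
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    def tpr(g):
        positives = [p for t, p, a in zip(y_true, y_pred, group)
                     if a == g and t == 1]
        return sum(positives) / len(positives)
    return abs(tpr(0) - tpr(1))

# Invented toy data: binary labels, predictions, and group membership.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
dp = demographic_parity_gap(y_pred, group)          # |1/4 - 3/4| = 0.5
eo = equal_opportunity_gap(y_true, y_pred, group)   # |1/2 - 2/2| = 0.5
```

Even this small example shows why the survey's "mission versus means" matching matters: the same predictions can violate several criteria at once, and which gap one measures encodes which fairness notion one has chosen to enforce.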