
Questioning the assumptions behind fairness solutions

By Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso and Seda Gürses

Abstract

In addition to their benefits, optimization systems can have negative economic, moral, social, and political effects on populations as well as their environments. Frameworks like fairness have been proposed to aid service providers in addressing the resulting bias and discrimination during data collection and algorithm design. However, recent reports of neglect, unresponsiveness, and malevolence cast doubt on whether service providers can effectively implement fairness solutions. These reports invite us to revisit the assumptions that fairness solutions make about service providers, namely that they have (i) the incentives or (ii) the means to mitigate optimization externalities. Moreover, the environmental impact of these systems suggests that we need (iii) novel frameworks that consider systems other than algorithmic decision-making and recommender systems, and (iv) solutions that go beyond removing related algorithmic biases. Going forward, we propose Protective Optimization Technologies, which enable optimization subjects to defend against the negative consequences of optimization systems.

Comment: Presented at Critiquing and Correcting Trends in Machine Learning (NeurIPS 2018 Workshop), Montreal, Canada. This is a short version of arXiv:1806.0271

Topics: Computer Science - Computers and Society, Computer Science - Machine Learning
Year: 2018
OAI identifier: oai:arXiv.org:1811.11293

