
    Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

    As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior work on algorithmic fairness normatively prescribes how fair decisions ought to be made. In contrast, here, we descriptively survey users about how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict whether the person will judge the use of the feature as fair. Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreement in people's fairness judgments. We identify root causes of the disagreements and note possible pathways to resolve them.
    Comment: To appear in the Proceedings of the Web Conference (WWW 2018). Code available at https://fate-computing.mpi-sws.org/procedural_fairness
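    A minimal sketch of the paper's core analysis as described above: predicting a respondent's fairness judgment from their ratings of the eight latent feature properties. The synthetic data, the 1-7 Likert scale, and the choice of logistic regression are illustrative assumptions; the authors' actual code is at the URL above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row holds one respondent's ratings of the eight latent properties of a
# feature (synthetic here, on a 1-7 Likert scale); the label records whether
# that respondent judged use of the feature as fair (1) or unfair (0).
n_respondents, n_properties = 576, 8
X = rng.integers(1, 8, size=(n_respondents, n_properties)).astype(float)

# Synthetic ground truth standing in for the real survey responses:
# judgments loosely driven by the property ratings, plus noise.
w = rng.normal(size=n_properties)
y = ((X - 4) @ w + rng.normal(scale=0.5, size=n_respondents) > 0).astype(int)

# Predict the fairness judgment from the eight ratings; the paper reports
# >85% accuracy on its survey data (the number below is for synthetic data).
clf = LogisticRegression(max_iter=1000)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```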

    Like trainer, like bot? Inheritance of bias in algorithmic content moderation

    The internet has become a central medium through which 'networked publics' express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers trained on large corpora of texts manually annotated for offence. While such systems could help encourage more civil debate, they must navigate inherently normatively contestable boundaries, and are subject to the idiosyncratic norms of the human raters who provide the training data. An important objective for platforms implementing such measures might be to ensure that they are not unduly biased towards or against particular norms of offence. This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence. We train classifiers on comments labelled by different demographic subsets (men and women) to understand how differences in conceptions of offence between these groups might affect the performance of the resulting models on various test sets. We conclude by discussing some of the ethical choices facing the implementers of algorithmic moderation systems, given various desired levels of diversity of viewpoints amongst discussion participants.
    Comment: 12 pages, 3 figures, 9th International Conference on Social Informatics (SocInfo 2017), Oxford, UK, 13–15 September 2017 (forthcoming in Springer Lecture Notes in Computer Science).
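    A hedged sketch of the experimental setup described above: train an offence classifier on labels from one demographic subset of annotators and evaluate it against each subset's labels in turn. The file name and the column names (text, label_men, label_women) are assumptions for illustration; the study used an existing labelled corpus.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical corpus: one comment per row, with binary offence labels
# aggregated separately from male and female annotators.
df = pd.read_csv("comments_labelled.csv")
train, test = train_test_split(df, test_size=0.2, random_state=0)

vec = TfidfVectorizer(min_df=5, ngram_range=(1, 2))
X_train = vec.fit_transform(train["text"])
X_test = vec.transform(test["text"])

# Train on one group's labels, score against each group's labels in turn:
# divergence across the grid suggests inherited normative bias.
for train_labels in ("label_men", "label_women"):
    clf = LogisticRegression(max_iter=1000).fit(X_train, train[train_labels])
    scores = clf.predict_proba(X_test)[:, 1]
    for test_labels in ("label_men", "label_women"):
        auc = roc_auc_score(test[test_labels], scores)
        print(f"train={train_labels} test={test_labels} AUC={auc:.3f}")
```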

    The Need for Sensemaking in Networked Privacy and Algorithmic Responsibility

    This paper proposes that two significant and emerging problems facing our connected, data-driven society may be more effectively solved by being framed as sensemaking challenges. The first is empowering individuals to take control of their privacy in device-rich information environments where personal information is fed transparently to complex networks of information brokers. Although sensemaking is often framed as an analytical activity undertaken by experts, because non-specialist end-users are now being forced to make expert-like decisions in complex information environments, we argue that it is both appropriate and important to consider sensemaking challenges in this context. The second is supporting human-in-the-loop algorithmic decision-making, in which important decisions bringing direct consequences for individuals, or indirect consequences for groups, are made with the support of data-driven algorithmic systems. In both privacy and algorithmic decision-making, framing the problems as sensemaking challenges acknowledges complex and ill-defined problem structures, and affords the opportunity to view these activities as both building up relevant expertise schemas over time and being driven potentially by recognition-primed decision making.

    Fairness in algorithmic decision systems: a microfinance perspective


    Towards Responsible Media Recommendation

    Reading or viewing recommendations are a common feature on modern media sites. What is shown to consumers as recommendations is nowadays often automatically determined by AI algorithms, typically with the goal of helping consumers discover relevant content more easily. However, the highlighting or filtering of information that comes with such recommendations may lead to undesired effects on consumers or even society, for example, when an algorithm leads to the creation of filter bubbles or amplifies the spread of misinformation. These well-documented phenomena create a need for improved mechanisms for responsible media recommendation, which avoid such negative effects of recommender systems. In this research note, we review the threats and challenges that may result from the use of automated media recommendation technology, and we outline possible steps to mitigate such undesired societal effects in the future.
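    As one concrete illustration of a mitigation step in the spirit of the note's suggestions (not a method proposed by its authors), the sketch below greedily re-ranks recommendations with a maximal-marginal-relevance trade-off between relevance and topical diversity, a common way to soften filter-bubble effects. All names and the toy data are illustrative.

```python
import numpy as np

def rerank_mmr(relevance, item_vecs, k, lam=0.7):
    """Pick k items, balancing relevance against similarity to items already picked."""
    chosen, candidates = [], list(range(len(relevance)))
    # Normalise item vectors so dot products are cosine similarities.
    vecs = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    while candidates and len(chosen) < k:
        def score(i):
            max_sim = max((vecs[i] @ vecs[j] for j in chosen), default=0.0)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Toy usage: 5 items with relevance scores and 3-d topic embeddings.
rel = np.array([0.9, 0.85, 0.8, 0.5, 0.4])
vecs = np.array([[1, 0, 0], [0.99, 0.1, 0], [1, 0.05, 0], [0, 1, 0], [0, 0, 1.0]])
print(rerank_mmr(rel, vecs, k=3))  # diversity pulls in items from other topics
```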

    An Exploratory Study on Fairness-Aware Design Decision-Making

    With advances in machine learning (ML) and big data analytics, data-driven predictive models play an essential role in supporting a wide range of simple and complex decision-making processes. However, historical data embedded with unfairness may unintentionally reinforce discrimination against minority groups when used in data-driven decision-support technologies. In this paper, we quantify unfairness and analyze its impact in the context of data-driven engineering design using the Adult Income dataset. First, we introduce a fairness-aware design concept. Subsequently, we introduce standard definitions and statistical measures of fairness to engineering design research. Then, we use the outcomes from two supervised ML models, Logistic Regression and CatBoost classifiers, to conduct the Disparate Impact and fair-test analyses to quantify any unfairness present in the data and decision outcomes. Based on the results, we highlight the importance of considering fairness in product design and marketing, and the consequences if fairness is lost.
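    A minimal sketch of the Disparate Impact measure mentioned above, assuming binary predictions and a binary protected attribute (e.g., sex in the Adult Income dataset); the helper name and toy data are illustrative, and the 0.8 cutoff is the common four-fifths rule.

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """P(favourable outcome | unprivileged) / P(favourable outcome | privileged)."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    p_unpriv = y_pred[protected == 0].mean()
    p_priv = y_pred[protected == 1].mean()
    return p_unpriv / p_priv

# Toy usage with hypothetical classifier outputs:
y_hat = np.array([0, 0, 1, 0, 1, 1, 0, 1])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = unprivileged, 1 = privileged
di = disparate_impact(y_hat, group)
print(f"disparate impact = {di:.2f} "
      f"({'fails' if di < 0.8 else 'passes'} the four-fifths rule)")
```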

    Knowledge management for self-organised resource allocation

    Many open systems, such as networks, distributed computing and socio-technical systems, address a common problem: how to define knowledge management processes that structure and guide decision-making, coordination and learning. While participation is an essential and desirable feature of such systems, the amount of information produced by their individual agents can often be overwhelming and intractable. The challenge, thus, is how to organise and process such information so that it is transformed into productive knowledge used for the resolution of collective action problems. To address this problem, we consider a study of classical Athenian democracy which investigates how the governance model of the city-state flourished. The work suggests that exceptional knowledge management, i.e. making information available for socially productive purposes, played a crucial role in sustaining its democracy for nearly 200 years, by creating processes for aggregation, alignment and codification of knowledge. We therefore examine the proposition that some properties of this historical experience can be generalised and applied to computational systems, and we establish a set of design principles intended to make knowledge management processes open, inclusive, transparent and effective in self-governed socio-technical systems. We operationalise three of these principles in the context of a collective action situation, namely self-organised common-pool resource allocation, exploring four governance problems: (a) how fairness can be perceived; (b) how resources can be distributed; (c) how policies should be enforced; and (d) how tyranny can be opposed. By applying this operationalisation of the design principles for knowledge management processes as a complement to institutional approaches to governance, we demonstrate empirically how it can guide solutions that satisfice shared values, distribute power fairly, apply "common sense" in dealing with rule violations, and protect agents against abuse of power. We conclude by arguing that this approach to the design of open systems can provide the foundations for sustainable and democratic self-governance in socio-technical systems.
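    As a hedged illustration of governance problem (a), and not the paper's actual operationalisation, perceived fairness in common-pool resource allocation can be scored with an inequality index over agents' satisfaction ratios, as in this sketch.

```python
import numpy as np

def gini(x):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = maximally unequal."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Toy round of allocation: the common pool is too small to satisfy every demand.
demands = np.array([4.0, 6.0, 2.0, 8.0])
allocated = np.array([3.0, 3.0, 2.0, 2.0])
satisfaction = allocated / demands  # each agent's fraction of demand met
print(f"satisfaction: {satisfaction}, Gini: {gini(satisfaction):.3f}")
```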

    Fairness in Information Access Systems

    Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classification, the multistakeholder nature of information access applications, the rank-based problem setting, the centrality of personalization in many cases, and the role of user response complicate the problem of identifying precisely what types and operationalizations of fairness may be relevant, let alone measuring or promoting them. In this monograph, we present a taxonomy of the various dimensions of fair information access and survey the literature to date on this new and rapidly growing topic. We preface this with brief introductions to information access and algorithmic fairness, to facilitate use of this work by scholars with experience in one (or neither) of these fields who wish to learn about their intersection. We conclude with several open problems in fair information access, along with some suggestions for how to approach research in this space.
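    For a flavour of the rank-based fairness measures this monograph surveys, the sketch below computes position-discounted exposure per provider group, in the spirit of Singh and Joachims (2018); the group labels, toy ranking, and logarithmic discount are illustrative assumptions.

```python
import math
from collections import defaultdict

def group_exposure(ranking, groups):
    """Sum position-discounted exposure 1/log2(rank + 1) for each provider group."""
    exposure = defaultdict(float)
    for rank, item in enumerate(ranking, start=1):
        exposure[groups[item]] += 1.0 / math.log2(rank + 1)
    return dict(exposure)

# Toy ranking of 6 items from two provider groups.
ranking = ["a", "b", "c", "d", "e", "f"]
groups = {"a": "G1", "b": "G1", "c": "G1", "d": "G2", "e": "G2", "f": "G2"}
print(group_exposure(ranking, groups))
# Comparing each group's exposure to its share of relevance reveals whether
# the ranker systematically amplifies one group over another.
```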