
    Exploring Author Gender in Book Rating and Recommendation

    Collaborative filtering algorithms find useful patterns in rating and consumption data and exploit these patterns to guide users to good items. Many of the patterns in rating datasets reflect important real-world differences between the various users and items in the data; other patterns may be irrelevant or possibly undesirable for social or ethical reasons, particularly if they reflect undesired discrimination, such as gender or ethnic discrimination in publishing. In this work, we examine the response of collaborative filtering recommender algorithms to the distribution of their input data with respect to a dimension of social concern, namely content creator gender. Using publicly available book ratings data, we measure the distribution of the genders of the authors of books in user rating profiles and in the recommendation lists produced from this data. We find that common collaborative filtering algorithms differ in the gender distribution of their recommendation lists, and in the relationship of that output distribution to the user profile distribution.
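    The measurement described here can be illustrated with a minimal sketch: compare the author-gender distribution of a user's rated books with that of a recommendation list. The data layout and names below (author_gender, profile_books, recommended_books) are assumptions for illustration, not the paper's code.

```python
from collections import Counter

def gender_distribution(book_ids, author_gender):
    """Return the proportion of each author-gender label among the given books."""
    counts = Counter(author_gender.get(b, "unknown") for b in book_ids)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()} if total else {}

# Toy mapping from book id to author gender (illustrative only).
author_gender = {1: "female", 2: "male", 3: "female", 4: "male", 5: "unknown"}
profile_books = [1, 2, 3, 4]      # books the user has rated
recommended_books = [2, 4, 5]     # books returned by a recommender

profile_dist = gender_distribution(profile_books, author_gender)
rec_dist = gender_distribution(recommended_books, author_gender)
print("profile:", profile_dist)
print("recommendations:", rec_dist)
```

    Comparing the two distributions per user, and aggregating over users, is one simple way to see whether an algorithm's output distribution tracks, amplifies, or flattens the distribution in its input profiles.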

    DeBayes : a Bayesian method for debiasing network embeddings

    As machine learning algorithms are increasingly deployed for high-impact automated decision making, ethical and increasingly also legal standards demand that they treat all individuals fairly, without discrimination based on their age, gender, race, or other sensitive traits. In recent years, much progress has been made on ensuring fairness and reducing bias in standard machine learning settings. Yet for network embedding, with applications in vulnerable domains ranging from social network analysis to recommender systems, current options remain limited both in number and in performance. We thus propose DeBayes: a conceptually elegant Bayesian method that is capable of learning debiased embeddings by using a biased prior. Our experiments show that these representations can then be used to perform link prediction that is significantly more fair in terms of popular metrics such as demographic parity and equalized opportunity.
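    The evaluation metrics named at the end of the abstract can be sketched for link prediction as group-wise gaps in prediction rates. The sketch below is illustrative and not the DeBayes implementation; the grouping of candidate edges by the sensitive attribute of one endpoint is an assumption made here for simplicity.

```python
import numpy as np

def fairness_gaps(y_true, y_pred, groups):
    """Return (demographic parity gap, equal opportunity gap) across groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates, tprs = [], []
    for g in np.unique(groups):
        mask = groups == g
        rates.append(y_pred[mask].mean())                      # P(pred = 1 | group)
        pos = mask & (y_true == 1)
        tprs.append(y_pred[pos].mean() if pos.any() else 0.0)  # P(pred = 1 | true = 1, group)
    return max(rates) - min(rates), max(tprs) - min(tprs)

# Toy example: candidate edges labelled by the sensitive group of one endpoint.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
dp_gap, eo_gap = fairness_gaps(y_true, y_pred, groups)
print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")
```

    A gap of zero on either measure means the predicted link rate (demographic parity) or the true-positive rate (equal opportunity) is identical across the sensitive groups.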

    Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison

    Information access systems, such as search and recommender systems, often use ranked lists to present results believed to be relevant to the user's information need. Evaluating these lists for their fairness, along with other traditional metrics, provides a more complete understanding of an information access system's behavior beyond accuracy or utility constructs. To measure the (un)fairness of rankings, particularly with respect to the protected group(s) of producers or providers, several metrics have been proposed in the last several years. However, an empirical and comparative analysis of these metrics, showing their applicability to specific scenarios or real data, their conceptual similarities, and their differences, is still lacking. We aim to bridge the gap between the theoretical and practical application of these metrics. In this paper, we describe several fair ranking metrics from the existing literature in a common notation, enabling direct comparison of their approaches and assumptions, and empirically compare them on the same experimental setup and data sets in the context of three information access tasks. We also provide a sensitivity analysis to assess the impact of the design choices and parameter settings that go into these metrics, and point to additional work needed to improve fairness measurement.
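    One common family of metrics in this literature compares the rank-discounted exposure a protected provider group receives with a target share. The discount function and target used below are illustrative assumptions, not the paper's exact definitions.

```python
import math

def group_exposure(ranking, is_protected):
    """Share of log-discounted exposure going to protected items in a ranked list."""
    exposures = [1.0 / math.log2(rank + 2) for rank in range(len(ranking))]
    protected = sum(e for e, item in zip(exposures, ranking) if is_protected(item))
    return protected / sum(exposures)

def exposure_gap(ranking, is_protected, target_share):
    """Signed deviation of protected-group exposure from a target share (0 = fair)."""
    return group_exposure(ranking, is_protected) - target_share

# Toy example: items tagged by provider group; target share drawn from the catalogue.
ranking = ["p1", "u1", "u2", "p2", "u3"]   # items in ranked order
protected_items = {"p1", "p2"}
gap = exposure_gap(ranking, lambda i: i in protected_items, target_share=0.4)
print(f"exposure gap: {gap:+.3f}")
```

    Metrics in this family differ mainly in the discount model, the choice of target distribution, and how per-list scores are aggregated across users, which is exactly the kind of design choice the paper's sensitivity analysis probes.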

    DeepFair: Deep Learning for Improving Fairness in Recommender Systems

    The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations. Moreover, the trade-off between equity and precision makes it difficult to obtain recommendations that meet both criteria. Here we propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy. Furthermore, in the recommendation stage, this balance does not require prior knowledge of the users’ demographic information. The proposed architecture incorporates four abstraction levels: raw ratings and demographic information, minority indexes, accurate predictions, and fair recommendations. The last two levels use the classical Probabilistic Matrix Factorization (PMF) model to obtain user and item hidden factors, and a Multi-Layer Network (MLN) to combine those factors with a ‘fairness’ (β) parameter. Several experiments have been conducted using two types of minority sets: gender and age. Experimental results show that it is possible to make fair recommendations without losing a significant proportion of accuracy.
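    The abstract does not spell out how β enters the model, so the sketch below shows only one plausible reading: a β-weighted blend of an accuracy-oriented score (from PMF-style hidden factors) with a fairness-oriented signal (a per-item "minority index"). The blending rule, the synthetic data, and all names here are assumptions for illustration, not the DeepFair architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy PMF-style hidden factors for 4 users and 5 items.
user_factors = rng.normal(size=(4, 8))
item_factors = rng.normal(size=(5, 8))

# Accuracy-oriented predictions from the dot product, rescaled to [0, 1]
# so they can be blended with a fairness signal on the same scale.
raw = user_factors @ item_factors.T
accuracy = (raw - raw.min()) / (raw.max() - raw.min())

# Synthetic per-item "minority index": how well each item serves the minority group.
minority_index = rng.uniform(size=item_factors.shape[0])

def blended_scores(accuracy, minority_index, beta):
    """beta = 0 ranks purely by accuracy; beta = 1 purely by the fairness signal."""
    return (1 - beta) * accuracy + beta * minority_index

for beta in (0.0, 0.5, 1.0):
    top_item = blended_scores(accuracy, minority_index, beta).argmax(axis=1)
    print(f"beta={beta}: top item per user: {top_item}")
```

    Sweeping β in this way is one simple means of tracing the fairness–accuracy trade-off the abstract refers to.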