1,666 research outputs found
Bias Disparity in Collaborative Recommendation: Algorithmic Evaluation and Comparison
Research on fairness in machine learning has been recently extended to
recommender systems. One of the factors that may impact fairness is bias
disparity, the degree to which a group's preferences on various item categories
fail to be reflected in the recommendations they receive. In some cases biases
in the original data may be amplified or reversed by the underlying
recommendation algorithm. In this paper, we explore how different
recommendation algorithms reflect the tradeoff between ranking quality and bias
disparity. Our experiments include neighborhood-based, model-based, and
trust-aware recommendation algorithms.

Comment: Workshop on Recommendation in Multi-Stakeholder Environments (RMSE)
at ACM RecSys 2019, Copenhagen, Denmark
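Bias disparity, as described in this abstract, can be made concrete with a small sketch. A common formulation measures the relative change in a group's preference ratio for an item category between the input data and the recommendation lists; the function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def preference_ratio(interactions, categories, group_mask, category):
    """Fraction of a group's interactions that fall in `category`.

    interactions: (num_users, num_items) binary interaction matrix
    categories:   (num_items,) category label per item
    group_mask:   (num_users,) boolean mask selecting the group
    """
    group = interactions[group_mask]
    in_category = group[:, categories == category].sum()
    return in_category / group.sum()

def bias_disparity(source, recommendations, categories, group_mask, category):
    """Relative change of the group's preference ratio from the source
    data to the recommendations: positive means the bias was amplified,
    negative means it was reduced or reversed."""
    ps = preference_ratio(source, categories, group_mask, category)
    pr = preference_ratio(recommendations, categories, group_mask, category)
    return (pr - ps) / ps
```

A value of 0 means the recommender mirrors the group's original category preferences exactly.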
FATREC Workshop on Responsible Recommendation Proceedings
With this workshop, we sought to foster a discussion of various topics that fall under the general umbrella of responsible recommendation: ethical considerations in recommendation, bias and discrimination in recommender systems, transparency and accountability, social impact of recommenders, user privacy, and other related concerns. Our goal was to encourage the community to think about how we build and study recommender systems in a socially responsible manner.
Recommendation systems are increasingly impacting people's decisions in different walks of life, including commerce, employment, dating, health, education, and governance. As the impact and scope of recommendations increase, developing systems that tackle issues of fairness, transparency, and accountability becomes important. This workshop was held in the spirit of FATML (Fairness, Accountability, and Transparency in Machine Learning), DAT (Data and Algorithmic Transparency), and similar workshops in related communities. With Responsible Recommendation, we brought that conversation to RecSys.
Fairness-Aware Graph Neural Networks: A Survey
Graph Neural Networks (GNNs) have become increasingly important due to their
representational power and state-of-the-art predictive performance on many
fundamental learning tasks. Despite this success, GNNs suffer from fairness
issues that arise as a result of the underlying graph data and the fundamental
aggregation mechanism that lies at the heart of the large class of GNN models.
In this article, we examine and categorize fairness techniques for improving
the fairness of GNNs. Previous work on fair GNN models and techniques is
discussed in terms of whether it focuses on improving fairness during a
preprocessing step, during training, or in a post-processing phase.
Furthermore, we discuss how such techniques can be combined whenever
appropriate, and highlight their advantages and underlying intuition. We also
introduce an intuitive taxonomy for fairness evaluation metrics, including
graph-level, neighborhood-level, embedding-level, and prediction-level
fairness metrics. In addition, graph datasets that are useful for
benchmarking the fairness of GNN models are summarized succinctly.
Finally, we highlight key open problems and challenges that remain to be
addressed.
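As a concrete instance of a prediction-level fairness metric of the kind the survey categorizes, statistical parity difference compares positive-prediction rates across sensitive groups. The metric choice and names below are illustrative, not definitions taken from the survey:

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Difference in positive-prediction rates between the group with
    sensitive attribute 1 and the group with sensitive attribute 0.

    y_pred:    (n,) array of binary predictions (0 or 1)
    sensitive: (n,) array of binary sensitive-attribute values
    """
    rate_group1 = y_pred[sensitive == 1].mean()
    rate_group0 = y_pred[sensitive == 0].mean()
    return rate_group1 - rate_group0
```

A value of 0 indicates both groups receive positive predictions at the same rate; the same comparison can be applied per neighborhood or per embedding dimension for the other taxonomy levels.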
DeepFair: Deep Learning for Improving Fairness in Recommender Systems
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations. Moreover, the trade-off between equity and precision makes it difficult to obtain recommendations that meet both criteria. Here we propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimal balance between fairness and accuracy. Furthermore, in the recommendation stage, this balance does not require prior knowledge of the users' demographic information. The proposed architecture incorporates four abstraction levels: raw ratings and demographic information, minority indexes, accurate predictions, and fair recommendations. The last two levels use the classical Probabilistic Matrix Factorization (PMF) model to obtain user and item hidden factors, and a Multi-Layer Network (MLN) to combine those factors with a "fairness" parameter. Several experiments have been conducted using two types of minority sets: gender and age. Experimental results show that it is possible to make fair recommendations without losing a significant proportion of accuracy.
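The abstract does not spell out the MLN architecture, but the core idea of trading accuracy against fairness through a single parameter can be sketched minimally. Everything below (the linear blend, the `fairness_weight` parameter, the minority index as a per-item score) is a hypothetical illustration, not the paper's actual model:

```python
import numpy as np

def pmf_predict(user_factors, item_factors):
    """Standard PMF rating prediction: dot product of hidden factors."""
    return user_factors @ item_factors.T

def blended_score(pred_rating, minority_index, fairness_weight):
    """Blend an accuracy-oriented predicted rating with a fairness-oriented
    minority index. fairness_weight = 0 ranks purely by predicted rating;
    fairness_weight = 1 ranks purely by the minority index."""
    return (1 - fairness_weight) * pred_rating + fairness_weight * minority_index
```

In the paper this combination is learned by a multi-layer network rather than fixed as a linear blend; the sketch only illustrates the role of the fairness parameter.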
- …