Recommender systems fairness evaluation via generalized cross entropy
Fairness in recommender systems has been considered with respect
to sensitive attributes of users (e.g., gender, race) or items (e.g., revenue
in a multistakeholder setting). Regardless, the concept has been
commonly interpreted as some form of equality, i.e., the degree to
which the system is meeting the information needs of all its users in
an equal sense. In this paper, we argue that fairness in recommender
systems does not necessarily imply equality, but instead should
consider a distribution of resources based on merits and needs. We
present a probabilistic framework based on generalized cross entropy
to evaluate fairness of recommender systems under this perspective,
where we show that the proposed framework is flexible and explanatory
by allowing the incorporation of domain knowledge (through an ideal
fair distribution) that can help to understand which item or user aspects
a recommendation algorithm is over- or under-representing.
Results on two real-world datasets show the merits of the proposed
evaluation framework in terms of both user and item fairness.

This work was supported in part by the Center for Intelligent Information
Retrieval and in part by project TIN2016-80630-P (MINECO).
A flexible framework for evaluating user and item fairness in recommender systems
This version of the article has been accepted for publication, after peer review (when applicable), and is subject to Springer Nature's AM terms of use; it is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s11257-020-09285-1

One common characteristic of research works focused on fairness evaluation (in machine learning) is that they call for some form of parity (equality), either in treatment (ignoring information about users' memberships in protected classes during training) or in impact (enforcing proportional beneficial outcomes for users in different protected classes). In the recommender systems community, fairness has been studied with respect to both users' and items' memberships in protected classes defined by some sensitive attributes (e.g., gender or race for users, revenue in a multi-stakeholder setting for items). Here again, the concept has been commonly interpreted as some form of equality, i.e., the degree to which the system is meeting the information needs of all its users in an equal sense. In this work, we propose a probabilistic framework based on generalized cross entropy (GCE) to measure the fairness of a given recommendation model. The framework comes with a suite of advantages: first, it allows the system designer to define and measure fairness for both users and items and can be applied to any classification task; second, it can incorporate various notions of fairness, as it does not rely on specific, predefined probability distributions, which can instead be defined at design time; finally, its design includes a gain factor that can be flexibly defined to measure fairness on top of different accuracy-related metrics, whether decision-support metrics (e.g., precision, recall) or rank-based measures (e.g., NDCG, MAP).
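As an illustrative sketch of the kind of score such a framework computes (the order-β form, the normalization, and the two-group example below are assumptions based on the abstract, not the authors' exact definitions), a generalized cross entropy between an ideal fair distribution and the gain distribution a model actually delivers over user or item groups could look like:

```python
import numpy as np

def gce(p_fair, p_model, beta=2.0):
    """Generalized cross entropy of order `beta` between an ideal fair
    distribution `p_fair` and the gain distribution `p_model` that a
    recommender actually delivers over user (or item) groups.
    The score is 0 when the two distributions coincide; values farther
    from 0 indicate a larger departure from the chosen fairness ideal.
    `beta` must differ from 0 and 1 (those limits recover KL-style
    divergences and would divide by zero here)."""
    p_fair = np.asarray(p_fair, dtype=float)
    p_model = np.asarray(p_model, dtype=float)
    # Normalize so both inputs are proper probability distributions.
    p_fair = p_fair / p_fair.sum()
    p_model = p_model / p_model.sum()
    return (np.sum(p_fair**beta * p_model**(1.0 - beta)) - 1.0) / (beta * (1.0 - beta))
```

For instance, with a uniform ideal `p_fair = [0.5, 0.5]`, a model that concentrates gain on one group (`p_model = [0.8, 0.2]`) yields a nonzero score, while a model matching the ideal exactly yields 0. Swapping in a non-uniform `p_fair` is how domain knowledge (merits and needs, rather than strict equality) enters the evaluation.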
An experimental evaluation on four real-world datasets shows the nuances captured by our proposed metric regarding fairness on different user and item attributes, where nearest-neighbor recommenders tend to obtain good results under equality constraints. We observed that when users are clustered based on both their interaction with the system and other sensitive attributes, such as age or gender, algorithms with similar performance values exhibit different behaviors with respect to user fairness, due to the different ways they process data for each user cluster.

The authors thank the reviewers for their thoughtful comments and suggestions. This work was supported in part by the Ministerio de Ciencia, Innovacion y Universidades (Reference: PID2019-108965GB-I00) and in part by the Center for Intelligent Information Retrieval. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.
Consumer-side Fairness in Recommender Systems: A Systematic Survey of Methods and Evaluation
In the current landscape of ever-increasing levels of digitalization, we are
facing major challenges pertaining to scalability. Recommender systems have
become irreplaceable both for helping users navigate the increasing amounts of
data and, conversely, aiding providers in marketing products to interested
users. The growing awareness of discrimination in machine learning methods has
recently motivated both academia and industry to research how fairness can be
ensured in recommender systems. For recommender systems, such issues are well
exemplified by occupation recommendation, where biases in historical data may
lead to recommender systems relating one gender to lower wages or to the
propagation of stereotypes. In particular, consumer-side fairness, which
focuses on mitigating discrimination experienced by users of recommender
systems, has seen a vast number of diverse approaches for addressing different
types of discrimination. The nature of said discrimination depends on the
setting and the applied fairness interpretation, of which there are many
variations. This survey serves as a systematic overview and discussion of the
current research on consumer-side fairness in recommender systems. To that end,
a novel taxonomy based on high-level fairness interpretation is proposed and
used to categorize the research and their proposed fairness evaluation metrics.
Finally, we highlight some suggestions for the future direction of the field.

Comment: Draft submitted to Springer (November 2022).
Understanding and Mitigating Multi-sided Exposure Bias in Recommender Systems
Fairness is a critical system-level objective in recommender systems that has
been the subject of extensive recent research. It is especially important in
multi-sided recommendation platforms where it may be crucial to optimize
utilities not just for the end user, but also for other actors such as item
sellers or producers who desire a fair representation of their items. Existing
solutions do not properly address various aspects of multi-sided fairness in
recommendations, as they may either take a solely one-sided view (i.e., improving
fairness only for one side) or fail to appropriately measure the fairness
for each actor involved in the system. In this thesis, I first aim to
investigate the impact of unfair recommendations on the system and how these
unfair recommendations can negatively affect major actors in the system. Then,
I propose solutions to tackle the unfairness of recommendations. I
propose a rating transformation technique that works as a pre-processing step
before building the recommendation model to alleviate the inherent popularity
bias in the input data and consequently to mitigate the exposure unfairness for
items and suppliers in the recommendation lists. Also, as another solution, I
propose a general graph-based solution that works as a post-processing approach
after recommendation generation for mitigating the multi-sided exposure bias in
the recommendation results. For evaluation, I introduce several metrics for
measuring the exposure fairness for items and suppliers, and show that these
metrics better capture the fairness properties in the recommendation results. I
perform extensive experiments to evaluate the effectiveness of the proposed
solutions. The experiments on different publicly-available datasets and
comparison with various baselines confirm the superiority of the proposed
solutions in improving the exposure fairness for items and suppliers.

Comment: Doctoral thesis.
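As a rough illustration of the kind of exposure metric this abstract refers to (the function name, the DCG-style position discount, and the uniform-share comparison are assumptions for the sketch, not the thesis's exact definitions), one could aggregate each supplier's position-discounted exposure across all users' top-k recommendation lists:

```python
import math
from collections import defaultdict

def supplier_exposure_shares(rec_lists, item_to_supplier, k=10):
    """Share of position-discounted exposure each supplier receives over
    all users' top-k recommendation lists (discount 1/log2(rank+1), as
    in DCG). A fairness check can then compare these shares against a
    target distribution, e.g. uniform across suppliers."""
    exposure = defaultdict(float)
    for items in rec_lists:
        for rank, item in enumerate(items[:k], start=1):
            # Higher-ranked items contribute more exposure to their supplier.
            exposure[item_to_supplier[item]] += 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {supplier: e / total for supplier, e in exposure.items()}
```

Two suppliers whose items alternate between ranks 1 and 2 across users end up with equal shares of 0.5 each, whereas a supplier whose items sit at the bottom of every list receives a much smaller share, making the exposure imbalance directly measurable.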
Counterfactual Explanation for Fairness in Recommendation
Fairness-aware recommendation eliminates discrimination issues to build
trustworthy recommendation systems. Explaining the causes of unfair
recommendations is critical, as it promotes fairness diagnostics, and thus
secures users' trust in recommendation models. Existing fairness explanation
methods suffer from high computational burdens due to the large-scale search space
and the greedy nature of the explanation search process. Moreover, they perform
score-based optimizations with continuous values, which are not applicable to
discrete attributes such as gender and race. In this work, we adopt the novel
paradigm of counterfactual explanation from causal inference to explore how
minimal alterations in explanations change model fairness, thereby abandoning the
greedy search for explanations. We use real-world attributes from Heterogeneous
Information Networks (HINs) to empower counterfactual reasoning on discrete
attributes. We propose a novel Counterfactual Explanation for Fairness
(CFairER) that generates attribute-level counterfactual explanations from HINs
for recommendation fairness. Our CFairER conducts off-policy reinforcement
learning to seek high-quality counterfactual explanations, with an attentive
action pruning reducing the search space of candidate counterfactuals. The
counterfactual explanations help to provide rational and proximate explanations
for model fairness, while the attentive action pruning narrows the search space
of attributes. Extensive experiments demonstrate our proposed model can
generate faithful explanations while maintaining favorable recommendation
performance.
Modeling the Dynamics of Consumer Behavior from Massive Interaction Data
Recent technological innovations (e.g., e-commerce platforms, automated retail stores) have enabled dramatic changes in people's shopping experiences, as well as access to incredible volumes of consumer-product interaction data. As a result, machine learning (ML) systems can be widely developed to help people navigate relevant information and make decisions. Traditional ML systems have achieved great success on various well-defined problems such as speech recognition and facial recognition. Unlike these tasks, where datasets and objectives are clearly benchmarked, modeling consumer behavior can be rather complicated; for example, consumer activities can be affected by real-time shopping contexts, collected interaction data can be noisy and biased, and interests from multiple parties (both consumers and producers) can be involved in the predictive objectives.

The primary goal of this dissertation is to address the obstacles in modeling consumer activities through computational approaches, but with careful consideration of economic and societal perspectives. Intellectually, such models help us to understand the forces that guide consumer behavior. Methodologically, I build algorithms capable of processing massive interaction datasets by connecting well-developed ML techniques with well-established economic theories. Practically, my work has applications ranging from recommender systems to e-commerce and business intelligence.