How to Perform Reproducible Experiments in the ELLIOT Recommendation Framework: Data Processing, Model Selection, and Performance Evaluation
Recommender Systems have been shown to be an effective way to alleviate the over-choice problem and provide
accurate and tailored recommendations. However, the impressive number of proposed recommendation
algorithms, splitting strategies, evaluation protocols, metrics, and tasks has made rigorous experimental
evaluation particularly challenging. ELLIOT is a comprehensive recommendation framework that aims
to run and reproduce an entire experimental pipeline by processing a simple configuration file. The
framework loads, filters, and splits the data considering a vast set of strategies. Then, it optimizes
hyperparameters for several recommendation algorithms, selects the best models, compares them with
the baselines, computes metrics spanning from accuracy to beyond-accuracy, bias, and fairness, and
conducts statistical analysis. The aim is to provide researchers with a tool that eases all phases of
experimental evaluation (and makes them reproducible), from data reading to results collection. ELLIOT is
freely available on GitHub at https://github.com/sisinflab/ellio
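The abstract describes a pipeline driven entirely by a single configuration file. As a rough illustration only, a configuration for such a framework might resemble the sketch below; the key names and values are hypothetical and may not match ELLIOT's actual schema, so consult the repository's documentation before use.

```yaml
# Hypothetical sketch: key names are illustrative, not ELLIOT's real schema.
experiment:
  dataset: movielens_1m
  splitting:
    test_splitting:
      strategy: random_subsampling
      test_ratio: 0.2
  models:
    ItemKNN:
      neighbors: [50, 100]     # candidate values for hyperparameter search
  evaluation:
    simple_metrics: [nDCG, Precision, ItemCoverage]
```

A single file like this would cover the stages the abstract lists: data splitting, hyperparameter optimization, and metric computation spanning accuracy and beyond-accuracy measures.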
Incorporating System-Level Objectives into Recommender Systems
One of the most essential aspects of any recommender system is
personalization: how acceptable the recommendations are from the user's
perspective. However, in many real-world applications, there are other
stakeholders whose needs and interests should be taken into account. In this
work, we define the problem of multistakeholder recommendation and we focus on
finding algorithms for a special case where the recommender system itself is
also a stakeholder. In addition, we explore the incremental
incorporation of system-level objectives into recommender systems over time to
address the limitations of optimization techniques that optimize only
individual users' lists.
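The incremental incorporation described above can be sketched as a simple score-blending rule: a weight on the system-level objective starts at zero and is raised gradually across re-training cycles. The function, data, and schedule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def rerank(user_scores, system_scores, lam):
    """Blend per-item user relevance with a system-level objective.

    lam in [0, 1] weights the system objective; lam = 0 reproduces a
    purely user-centric ranking. All inputs are assumed illustrative.
    """
    combined = (1.0 - lam) * user_scores + lam * system_scores
    return np.argsort(-combined)  # item indices, best first

# Hypothetical data: 5 items scored by user relevance and by a
# system objective (e.g. margin), both normalized to [0, 1].
user_scores = np.array([0.9, 0.8, 0.4, 0.3, 0.1])
system_scores = np.array([0.1, 0.2, 0.9, 0.8, 0.3])

# Incremental incorporation: raise lam a little at each cycle.
for lam in (0.0, 0.25, 0.5):
    print(lam, rerank(user_scores, system_scores, lam)[:3].tolist())
```

With `lam = 0` the ranking is purely user-driven; as `lam` grows, items favored by the system objective move up, which is the gradual trade-off the abstract motivates.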
Modeling and Counteracting Exposure Bias in Recommender Systems
What we discover and see online, and consequently our opinions and decisions,
are becoming increasingly affected by automated machine learned predictions.
Similarly, the predictive accuracy of learning machines heavily depends on the
feedback data that we provide them. This mutual influence can lead to
closed-loop interactions that may cause unknown biases which can be exacerbated
after several iterations of machine learning predictions and user feedback.
Machine-caused biases risk leading to undesirable social effects ranging from
polarization to unfairness and filter bubbles.
In this paper, we study the bias inherent in widely used recommendation
strategies such as matrix factorization. We then model the exposure that
arises from the interaction between the user and the recommender system and
propose new debiasing strategies for these systems.
Finally, we mitigate recommender system bias by engineering
solutions for several state-of-the-art recommender system models.
Our results show that recommender systems are biased and depend on the prior
exposure of the user. We also show that the studied bias iteratively decreases
diversity in the output recommendations. Our debiasing method demonstrates the
need for alternative recommendation strategies that take into account the
exposure process in order to reduce bias.
Our research findings show the importance of understanding and addressing
bias in machine learning models, such as recommender systems, that
interact directly with humans and thus exert an increasing influence on
human discovery and decision making.
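One common family of debiasing strategies weights observations inversely to their estimated exposure probability. The sketch below shows inverse-propensity-weighted matrix factorization on synthetic implicit feedback; the propensity estimate, hyperparameters, and data are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical implicit-feedback matrix: 1 = observed click, 0 = unobserved.
R = (rng.random((20, 10)) < 0.3).astype(float)

# Crude propensity estimate from item popularity: popular items are
# exposed (and thus observed) more often. Clipped to avoid huge weights.
propensity = R.mean(axis=0).clip(min=0.05)

# Inverse-propensity-weighted matrix factorization trained by SGD:
# observations of rarely exposed items count more, counteracting
# exposure bias in the learned factors.
k, lr, reg = 4, 0.01, 0.01
P = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))  # item factors
for _ in range(50):
    for u, i in zip(*R.nonzero()):
        err = (R[u, i] - P[u] @ Q[i]) / propensity[i]  # IPS weight
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

scores = P @ Q.T  # exposure-corrected relevance estimates
```

The design choice here is that the correction lives in the loss weights rather than in a post-hoc re-ranking step, so the factors themselves absorb less popularity bias.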