Cold-start Problem in Collaborative Recommender Systems: Efficient Methods Based on Ask-to-rate Technique
To develop a recommender system, collaborative filtering is the best-known approach: it considers the ratings of users who have similar rating profiles or rating patterns. Accordingly, it can compute the similarity of users only when users have expressed enough ratings. A major challenge of the collaborative filtering approach is therefore how to make recommendations for a new user, which is called the cold-start user problem. To solve this problem, a few efficient methods have been proposed based on the ask-to-rate technique, in which the profile of a new user is built by integrating information gained from a quick interview. This paper reviews these proposed methods and how they use the ask-to-rate technique. They are categorized into non-adaptive and adaptive methods; each category is then analyzed and its methods are compared.
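The cold-start problem described above can be illustrated with a minimal sketch: user-based collaborative filtering scores neighbors by similarity over co-rated items, so a new user with an empty profile has no computable similarity until an ask-to-rate interview seeds a few ratings. All data, array shapes, and the interview items below are hypothetical, and this is generic user-based CF, not the specific methods surveyed in the paper.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity over items co-rated by both users (0 means unrated)."""
    mask = (u > 0) & (v > 0)          # items rated by both users
    if not mask.any():
        return 0.0                    # no overlap: similarity is undefined
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy ratings matrix: rows = existing users, cols = items, 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [0, 0, 5, 4],
], dtype=float)

new_user = np.zeros(4)               # cold-start: no ratings yet
print([cosine_similarity(new_user, r) for r in R])   # all 0.0 -> no neighbors

# Ask-to-rate: a short interview elicits a few seed ratings,
# after which neighbors (and thus recommendations) become computable.
new_user[[0, 3]] = [5, 1]
print([round(cosine_similarity(new_user, r), 3) for r in R])
```

Adaptive methods differ from this sketch mainly in how the interview items are chosen (e.g., depending on earlier answers) rather than in how the resulting profile is used.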
Improving Recommendation Quality by Merging Collaborative Filtering and Social Relationships
Matrix Factorization techniques have been successfully applied to raise the quality of suggestions generated by Collaborative Filtering Systems (CFSs). Traditional CFSs based on Matrix Factorization operate on the ratings provided by users and have been recently extended to incorporate demographic aspects such as age and gender. In this paper we propose to merge CF techniques based on Matrix Factorization and information regarding social friendships in order to provide users with more accurate suggestions and rankings on items of their interest. The proposed approach has been evaluated on a real-life online social network; the experimental results show an improvement against existing CF approaches. A detailed comparison with related literature is also presented.
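One common way to merge matrix factorization with social friendships, sketched below, is to add a social regularization term that pulls a user's latent factors toward the mean of their friends' factors during SGD. The ratings, the friendship graph, and all hyperparameters are hypothetical toy values, and this is a generic social-regularized MF sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3
P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 2, 2.0), (3, 3, 4.0)]
friends = {0: [1], 1: [0], 2: [3], 3: [2]}     # hypothetical social graph

lr, reg, social = 0.01, 0.02, 0.1
for epoch in range(500):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        # Social term pulls a user's factors toward the mean of their friends'.
        friend_mean = np.mean([P[f] for f in friends[u]], axis=0)
        P[u] += lr * (err * Q[i] - reg * P[u] - social * (P[u] - friend_mean))
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(round(float(P[0] @ Q[0]), 2))   # prediction approaches the observed 5.0
```

With `social = 0`, this reduces to plain regularized MF; raising it trades some rating accuracy for agreement with the friendship graph, which is the lever such hybrid approaches tune.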
UTSP: User-Based Two-Step Recommendation with Popularity Normalization towards Diversity and Novelty
© 2013 IEEE. Information technologies such as e-commerce and e-news bring overloaded information as well as convenience to users, cooperatives and companies. The recommender system is a significant technology for solving this information-overload problem. Due to their outstanding accuracy in top-N recommendation tasks, two-step recommendation algorithms are well suited to generating recommendations. However, their recommendation lists are biased towards popular items. In this paper, we propose a user-based two-step recommendation algorithm with popularity normalization to improve recommendation diversity and novelty, as well as two evaluation metrics to measure diversity and novelty performance. Experimental results demonstrate that our proposed approach significantly improves diversity and novelty while still inheriting the accuracy advantage of two-step recommendation approaches.
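The general idea of popularity normalization can be sketched simply: discount each candidate's score by its interaction count raised to an exponent, so long-tail items rise in the ranking. The interaction log, the raw scores, and the exponent value below are hypothetical, and this is a generic normalization sketch rather than the UTSP algorithm itself.

```python
from collections import Counter

# Toy interaction log: (user, item) pairs (hypothetical data).
interactions = [
    ("u1", "A"), ("u2", "A"), ("u3", "A"), ("u4", "A"),   # A is very popular
    ("u1", "B"), ("u2", "B"),
    ("u1", "C"),
]
popularity = Counter(item for _, item in interactions)

def normalize(scores, alpha=0.8):
    """Discount each candidate's score by its popularity raised to alpha.

    alpha = 0 reproduces the raw ranking; alpha = 1 fully discounts popularity.
    """
    return {item: s / (popularity[item] ** alpha) for item, s in scores.items()}

raw = {"A": 0.9, "B": 0.8, "C": 0.5}   # scores from a first-step recommender
ranked = sorted(normalize(raw).items(), key=lambda kv: -kv[1])
print([item for item, _ in ranked])    # long-tail item C now ranks first
```

The exponent gives a single knob for the accuracy/diversity trade-off that the abstract describes: too small and popular items still dominate, too large and accuracy degrades.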
Modeling and counteracting exposure bias in recommender systems.
Recommender systems are becoming widely used in everyday life. They use machine learning algorithms which learn to predict our preferences and thus influence our choices among a staggering array of options online, such as movies, books, products, and even news articles. Thus what we discover and see online, and consequently our opinions and decisions, are becoming increasingly affected by automated predictions made by learning machines. In turn, the predictive accuracy of these learning machines heavily depends on the feedback data, such as ratings and clicks, that we provide them. This mutual influence can lead to closed-loop interactions that may cause unknown biases which can be exacerbated after several iterations of machine learning predictions and user feedback. Such machine-caused biases risk leading to undesirable social effects such as polarization, unfairness, and filter bubbles. In this research, we aim to study the bias inherent in widely used recommendation strategies such as matrix factorization and its impact on the diversity of the recommendations. We also aim to develop probabilistic models of the bias that is born from the interaction between the user and the recommender system and to develop debiasing strategies for these systems. We present a theoretical framework that can model the behavioral process of the user by considering item exposure before user interaction with the model. We also track diversity metrics to measure the bias that is generated in recommender systems, and thus study their effect throughout the iterations. Finally, we try to mitigate the recommendation system bias by engineering solutions for several state-of-the-art recommender system models. Our results show that recommender systems are biased and depend on the prior exposure of the user. We also show that the studied bias iteratively decreases diversity in the output recommendations.
Our debiasing method demonstrates the need for alternative recommendation strategies that take into account the exposure process in order to reduce bias. Our research findings show the importance of understanding the nature of and dealing with bias in machine learning models such as recommender systems that interact directly with humans, and are thus causing an increasing influence on human discovery and decision making.
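The closed feedback loop described above can be simulated in a few lines: items that happen to be shown early collect clicks, higher scores earn them more exposure, and catalog coverage plateaus even though every item is equally appealing. The catalog size, click probability, and round counts below are hypothetical, and this is an illustrative toy loop, not the probabilistic framework the research develops.

```python
import random

random.seed(42)
n_items, top_k, rounds = 50, 5, 30
scores = [1.0] * n_items            # the model's estimate of item appeal

coverage = []                       # fraction of the catalog ever recommended
recommended_ever = set()
for t in range(rounds):
    # Exposure step: recommend the top-k items, ties broken at random.
    ranked = sorted(range(n_items), key=lambda i: (-scores[i], random.random()))
    shown = ranked[:top_k]
    recommended_ever.update(shown)
    # Feedback step: users can only click what they were shown (closed loop).
    for i in shown:
        if random.random() < 0.5:   # click probability, identical for all items
            scores[i] += 1.0        # positive feedback reinforces exposure
    coverage.append(len(recommended_ever) / n_items)

# Coverage stalls well below 1.0: early exposure, not appeal, decides winners.
print(round(coverage[0], 2), round(coverage[-1], 2))
```

Debiasing strategies of the kind the abstract proposes intervene in exactly this loop, e.g. by modeling the exposure probability so that unclicked items are not mistaken for unappealing ones.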
Popularity Bias as Ethical and Technical Issue in Recommendation: A Survey
Recommender Systems have become omnipresent in our everyday life, helping us make decisions and navigate a digital world full of information. However, only recently have researchers started discovering undesired and harmful effects of automated recommendation and begun questioning how fair and ethical these systems are while influencing our day-to-day decision making and shaping our online behaviour and tastes. In the latest research works, various biases and phenomena like filter bubbles and echo chambers have been uncovered among the resulting effects of recommender systems, and rigorous work has started on solving these issues. In this narrative survey, we investigate the emergence and progression of research on one of the potential types of bias in recommender systems, i.e. popularity bias. Many recommender algorithms have been shown to favor already popular items, hence giving them even more exposure, which can harm fairness and diversity on the platforms using such systems. The problem becomes even more complicated if the object of recommendation is not just products and content, but people, their work and services. This survey describes the progress in this field of study, highlighting the advancements and identifying the gaps in the research where additional effort and attention are necessary to minimize the harmful effects and make sure that such systems are built in a fair and ethical way.
A Survey on Fairness-aware Recommender Systems
As information filtering services, recommender systems have greatly enriched our daily life by providing personalized suggestions and facilitating people in decision-making, which makes them vital and indispensable to human society in the information era. However, as people become more dependent on them, recent studies show that recommender systems can have unintended impacts on society and individuals because of their unfairness (e.g., gender discrimination in job recommendations). To develop trustworthy services, it is crucial to devise fairness-aware recommender systems that can mitigate these bias issues. In this survey, we summarise existing methodologies and practices of fairness in recommender systems. Firstly, we present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems. Next, after introducing the datasets and evaluation metrics applied to assess the fairness of recommender systems, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications. Subsequently, we highlight the connection between fairness and other principles of trustworthy recommender systems, aiming to consider trustworthiness principles holistically while advocating for fairness. Finally, we summarize this review, spotlighting promising opportunities in comprehending concepts and frameworks, the balance between accuracy and fairness, and the ties with trustworthiness, with the ultimate goal of fostering the development of fairness-aware recommender systems.
Comment: 27 pages, 9 figures