Understanding the Influence of Data Characteristics on the Performance of Point-of-Interest Recommendation Algorithms
The performance of recommendation algorithms is closely tied to key
characteristics of the data sets they use, such as sparsity, popularity bias,
and preference distributions. In this paper, we conduct a comprehensive
explanatory analysis to shed light on the impact of a broad range of data
characteristics within the point-of-interest (POI) recommendation domain. To
accomplish this, we extend prior methodologies used to characterize traditional
recommendation problems by introducing new explanatory variables specifically
relevant to POI recommendation. We subdivide a POI recommendation data set for
New York City into domain-driven subsamples to measure the effect of varying
these characteristics on different state-of-the-art POI recommendation
algorithms in terms of accuracy, novelty, and item exposure. Our findings,
obtained through the application of an explanatory framework employing
multiple-regression models, reveal that the relevant independent variables
encompass all categories of data characteristics and account for as much as
85-90% of the accuracy and item exposure achieved by the algorithms.
Our study reaffirms the pivotal role of prominent data characteristics, such as
density, popularity bias, and the distribution of check-ins in POI
recommendation. Additionally, we unveil novel factors, such as the proximity of
user activity to the city center and the duration of user activity. In summary,
our work reveals why certain POI recommendation algorithms excel in specific
recommendation problems and, conversely, offers practical insights into which
data characteristics should be modified (or explicitly recognized) to achieve
better performance.
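The explanatory framework described above can be illustrated with a minimal sketch: regress an algorithm's accuracy on subsample-level data characteristics and report the variance explained. The characteristics, coefficients, and data below are synthetic placeholders, not the paper's actual variables or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic subsamples of a POI data set. Columns stand in for hypothetical
# data characteristics: density, popularity bias, distance to city center.
n = 40
X = rng.uniform(0.0, 1.0, size=(n, 3))
true_coef = np.array([0.5, -0.3, -0.1])
accuracy = 0.2 + X @ true_coef + rng.normal(0.0, 0.01, n)  # e.g. nDCG

# Multiple regression (ordinary least squares with an intercept term)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, accuracy, rcond=None)

# R^2: share of the accuracy variability explained by the characteristics
pred = A @ coef
r_squared = 1.0 - np.sum((accuracy - pred) ** 2) / np.sum((accuracy - accuracy.mean()) ** 2)
print(f"R^2 = {r_squared:.3f}")
```

With low-noise synthetic data the fit recovers the generating coefficients almost exactly; on real subsamples the R^2 quantifies how much of the performance the characteristics explain.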
Using Stable Matching to Optimize the Balance between Accuracy and Diversity in Recommendation
Increasing aggregate diversity (or catalog coverage) is an important
system-level objective in many recommendation domains where it may be desirable
to mitigate the popularity bias and to improve the coverage of long-tail items
in recommendations given to users. This is especially important in
multistakeholder recommendation scenarios where it may be important to optimize
utilities not just for the end user, but also for other stakeholders such as
item sellers or producers who desire a fair representation of their items
across recommendation lists produced by the system. Unfortunately, attempts to
increase aggregate diversity often result in lower recommendation accuracy for
end users. Thus, addressing this problem requires an approach that can
effectively manage the trade-offs between accuracy and aggregate diversity. In
this work, we propose a two-sided post-processing approach in which both user
and item utilities are considered. Our goal is to maximize aggregate diversity
while minimizing loss in recommendation accuracy. Our solution is a
generalization of the Deferred Acceptance algorithm, which was proposed as an
efficient method to solve the well-known stable matching problem. We prove
that our algorithm results in a unique user-optimal stable match between items
and users. Using three recommendation datasets, we empirically demonstrate the
effectiveness of our approach in comparison to several baselines. In
particular, our results show that the proposed solution is quite effective in
increasing aggregate diversity and item-side utility while optimizing
recommendation accuracy for end users.
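For intuition, here is a minimal sketch of the classic one-to-one Deferred Acceptance (Gale-Shapley) algorithm that the paper generalizes; the paper's actual method handles full recommendation lists and capacities, which this sketch omits, and the preference lists here are illustrative.

```python
def gale_shapley(user_prefs, item_prefs):
    """Classic Deferred Acceptance (one-to-one, users propose).

    user_prefs: {user: [items in descending preference]}
    item_prefs: {item: [users in descending preference]}
    Returns the user-optimal stable matching {user: item}.
    """
    # rank[i][u] = position of user u in item i's preference list
    rank = {i: {u: r for r, u in enumerate(p)} for i, p in item_prefs.items()}
    free = list(user_prefs)                  # users not yet matched
    next_proposal = {u: 0 for u in user_prefs}
    engaged = {}                             # item -> user

    while free:
        u = free.pop()
        i = user_prefs[u][next_proposal[u]]  # best item not yet proposed to
        next_proposal[u] += 1
        if i not in engaged:
            engaged[i] = u
        elif rank[i][u] < rank[i][engaged[i]]:
            free.append(engaged[i])          # item prefers the new proposer
            engaged[i] = u
        else:
            free.append(u)                   # rejected, try the next item
    return {u: i for i, u in engaged.items()}

prefs_u = {"u1": ["a", "b", "c"], "u2": ["b", "a", "c"], "u3": ["a", "b", "c"]}
prefs_i = {"a": ["u2", "u1", "u3"], "b": ["u1", "u3", "u2"], "c": ["u1", "u2", "u3"]}
match = gale_shapley(prefs_u, prefs_i)
print(match)
```

Because users propose, the resulting stable matching is optimal for the user side, which mirrors the paper's user-optimality guarantee.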
Item-based Variational Auto-encoder for Fair Music Recommendation
We present our solution for the EvalRS DataChallenge. The EvalRS
DataChallenge aims to build a more realistic recommender system considering
accuracy, fairness, and diversity in evaluation. Our proposed system is based
on an ensemble between an item-based variational auto-encoder (VAE) and a
Bayesian personalized ranking matrix factorization (BPRMF). To mitigate the
bias in popularity, we use an item-based VAE for each popularity group with an
additional fairness regularization. To make a reasonable recommendation even
when the predictions are inaccurate, we combine the recommended list of BPRMF and
that of item-based VAE. Through the experiments, we demonstrate that the
item-based VAE with fairness regularization significantly reduces popularity
bias compared to the user-based VAE. The ensemble between the item-based VAE
and BPRMF makes the top-1 item similar to the ground truth even when the
predictions are inaccurate. Finally, we propose 'Coefficient Variance based
Fairness' as a novel evaluation metric, based on our reflections from the
extensive experiments.
Comment: 6 pages, CIKM 2022 Data Challenge
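As a rough illustration of a coefficient-of-variation style fairness score (our reading of the idea, not the paper's exact definition): compute the relative dispersion of a per-group metric, where lower means more even treatment across popularity groups.

```python
import numpy as np

def cv_fairness(group_scores):
    """Coefficient of variation of a per-group metric (e.g. hit rate).

    0 means the metric is perfectly even across groups; larger values
    mean some groups (e.g. popular items) dominate.
    """
    scores = np.asarray(group_scores, dtype=float)
    mean = scores.mean()
    return float(scores.std() / mean) if mean > 0 else float("inf")

print(cv_fairness([0.30, 0.30, 0.30]))  # 0.0: perfectly even across groups
print(cv_fairness([0.60, 0.20, 0.10]))  # larger: one group dominates
```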
Modeling mutual feedback between users and recommender systems
Recommender systems daily influence our decisions on the Internet. While considerable attention has been given to issues such as recommendation accuracy and user privacy, the long-term mutual feedback between a recommender system and the decisions of its users has been neglected so far. We propose here a model of network evolution which allows us to study the complex dynamics induced by this feedback, including the hysteresis effect which is typical for systems with non-linear dynamics. Despite the popular belief that recommendation helps users to discover new things, we find that the long-term use of recommendation can contribute to the rise of extremely popular items and thus ultimately narrow the user choice. These results are supported by measurements of the time evolution of item popularity inequality in real systems. We show that this adverse effect of recommendation can be tamed by sacrificing part of short-term recommendation accuracy.
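Item popularity inequality of the kind measured above is commonly summarized with a Gini coefficient; the following is a small self-contained sketch (not the paper's own measurement code), using illustrative popularity counts.

```python
def gini(popularities):
    """Gini coefficient of item popularity (0 = equal, -> 1 = concentrated)."""
    xs = sorted(popularities)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula based on the sorted cumulative distribution
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0: every item equally popular
print(gini([1, 1, 1, 97]))     # near 1: one blockbuster dominates
```

Tracking this coefficient over time is one way to observe the narrowing of user choice that the model predicts.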
News Session-Based Recommendations using Deep Neural Networks
News recommender systems aim to personalize users' experiences and help them
discover relevant articles in a large and dynamic search space. The news
domain is a challenging scenario for recommendation due to sparse user
profiling, a fast-growing number of items, accelerated decay of item value,
and dynamic shifts in user preferences. Promising results have recently been
achieved by applying Deep Learning techniques to Recommender Systems,
especially for item feature extraction and for session-based recommendation
with Recurrent Neural Networks. In this paper, we propose an instantiation of
CHAMELEON, a Deep Learning Meta-Architecture for News Recommender Systems.
This architecture is composed of two modules: the first learns
representations of news articles from their text and metadata, and the second
provides session-based recommendations using Recurrent Neural Networks. The
recommendation task addressed in this work is next-item prediction for user
sessions: "what is the next most likely article a user might read in a
session?" The architecture leverages the session context to provide
additional information in this extreme cold-start scenario of news
recommendation. Users' behavior and item features are merged in a hybrid
recommendation approach. As a complementary contribution, we also propose a
temporal offline evaluation method for a more realistic evaluation of this
task, considering dynamic factors that affect global readership interests,
such as popularity, recency, and seasonality. Experiments with an extensive
number of session-based recommendation methods show that the proposed
instantiation of the CHAMELEON meta-architecture obtains a significant
relative improvement in top-n accuracy and ranking metrics (10% on Hit Rate
and 13% on MRR) over the best benchmark methods.
Comment: Accepted for the Third Workshop on Deep Learning for Recommender
Systems - DLRS 2018, October 02-07, 2018, Vancouver, Canada.
https://recsys.acm.org/recsys18/dlrs
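The Hit Rate and MRR metrics reported above can be computed as follows; this is a generic sketch of top-n next-item evaluation with illustrative item identifiers, not the paper's evaluation code.

```python
def hit_rate_and_mrr(ranked_lists, ground_truth, k=10):
    """Hit Rate@k and MRR@k for next-item prediction.

    ranked_lists: one recommended item list per session step
    ground_truth: the item actually read next at each step
    """
    hits, reciprocal_ranks = 0, 0.0
    for recs, truth in zip(ranked_lists, ground_truth):
        topk = recs[:k]
        if truth in topk:
            hits += 1
            reciprocal_ranks += 1.0 / (topk.index(truth) + 1)
    n = len(ground_truth)
    return hits / n, reciprocal_ranks / n

recs = [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]]
truth = ["b", "f", "x"]          # "x" was never recommended: a miss
hr, mrr = hit_rate_and_mrr(recs, truth, k=3)
print(hr, mrr)                   # HR = 2/3, MRR = (1/2 + 1/3) / 3
```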
Time-Sensitive Collaborative Filtering Algorithm with Feature Stability
In recommendation systems, the collaborative filtering algorithm is widely used. However, many problems remain to be solved in the recommendation field, such as low precision and the long tail of items. In this paper, we design an algorithm called FSTS to address low precision and the long tail. We adopt stability variables and time-sensitive factors to handle users' interest drift and improve prediction accuracy. Experiments show that, compared with Item-CF, FSTS significantly improves precision, recall, coverage, and popularity. At the same time, it can mine long-tail items and alleviate the long-tail phenomenon.
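A minimal sketch of the time-sensitivity idea (not the authors' exact FSTS algorithm): weight each interaction with an exponential decay so that recent behavior counts more, which is one common way to handle interest drift. The half-life parameter below is an assumption.

```python
import math

def time_decayed_score(events, now, half_life_days=30.0):
    """events: list of (rating, timestamp_in_days); recent events dominate."""
    lam = math.log(2.0) / half_life_days    # decay rate from the half-life
    num = sum(r * math.exp(-lam * (now - t)) for r, t in events)
    den = sum(math.exp(-lam * (now - t)) for _, t in events)
    return num / den if den else 0.0

# A 5-star rating from 180 days ago is heavily discounted relative to a
# 2-star rating from 10 days ago, reflecting the user's interest drift.
score = time_decayed_score([(5.0, 0.0), (2.0, 170.0)], now=180.0)
print(round(score, 3))
```

Such a decayed score can replace raw ratings inside an otherwise standard item-based CF pipeline.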
A probabilistic model to resolve diversity-accuracy challenge of recommendation systems
Recommendation systems have widespread applications in both academia and
industry. Traditionally, the performance of recommendation systems has been
measured by their precision. Recently, increasing attention has been focused
on novelty and diversity as key qualities of recommender systems. Precision
and novelty do not improve together, and practical systems must make a
trade-off between these two quantities. Thus, the ability to adjust the
diversity and accuracy of recommendations by tuning the model is an important
feature of a recommender system. In this paper, we introduce a probabilistic
structure to resolve the diversity-accuracy dilemma in recommender systems.
We propose a hybrid model with an adjustable level of diversity and
precision, such that the balance can be tuned with a single parameter. The
proposed recommendation model consists of two models: one that maximizes
accuracy and another that tailors the recommendation list to users' tastes.
Our experiments on two real datasets show that the model resolves the
accuracy-diversity dilemma and outperforms other classic models. The proposed
method could be widely applied to real commercial systems due to its low
computational complexity and strong performance.
Comment: 19 pages, 5 figures
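The single-parameter accuracy-diversity adjustment described above can be sketched as a simple score blend followed by re-ranking; this is an illustrative simplification, not the paper's probabilistic model, and the item names and scores are hypothetical.

```python
def rerank(acc_scores, nov_scores, diversity=0.0, k=3):
    """diversity in [0, 1]: 0 = pure accuracy, 1 = pure novelty."""
    blended = {
        item: (1 - diversity) * acc_scores[item] + diversity * nov_scores[item]
        for item in acc_scores
    }
    # Highest blended score first
    return sorted(blended, key=blended.get, reverse=True)[:k]

acc = {"hit_song": 0.9, "album_track": 0.6, "rare_gem": 0.3}
nov = {"hit_song": 0.1, "album_track": 0.5, "rare_gem": 0.9}

print(rerank(acc, nov, diversity=0.0))  # accuracy-first ordering
print(rerank(acc, nov, diversity=1.0))  # novelty-first ordering
```

Sweeping the single `diversity` parameter traces out the accuracy-diversity trade-off curve for a fixed pair of underlying scorers.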