Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations
To help their users discover important items at a particular time, major
websites like Twitter, Yelp, TripAdvisor, or NYTimes provide Top-K
recommendations (e.g., 10 Trending Topics, Top 5 Hotels in Paris, or 10 Most
Viewed News Stories), which rely on crowdsourced popularity signals to select
the items. However, different sections of a crowd may have different
the items. However, different sections of a crowd may have different
preferences, and there is a large silent majority who do not explicitly express
their opinion. Also, the crowd often consists of actors like bots, spammers, or
people running orchestrated campaigns. Recommendation algorithms today largely
do not consider such nuances, hence are vulnerable to strategic manipulation by
small but hyper-active user groups.
To fairly aggregate the preferences of all users while recommending top-K
items, we borrow ideas from prior research on social choice theory, and
identify a voting mechanism called Single Transferable Vote (STV) as having
many of the fairness properties we desire in top-K item (s)elections. We
develop a novel mechanism to attribute the preferences of the silent majority,
which also makes STV fully operational. We show the generalizability of our
approach by implementing it on two different real-world datasets. Through
extensive experimentation and comparison with state-of-the-art techniques, we
show that our proposed approach provides maximum user satisfaction, and
drastically cuts down on items disliked by most users but hyper-actively
promoted by a few.
Comment: In the proceedings of the Conference on Fairness, Accountability, and
Transparency (FAT* '19). Please cite the conference version.
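The voting mechanism the paper builds on can be sketched as follows: a minimal STV implementation for electing K items, using a Droop quota and fractional surplus transfer. This is an illustration of the standard mechanism only; the paper's operational variant, which additionally infers preferences for silent users, is more involved.

```python
from collections import Counter

def stv_topk(ballots, k):
    """Elect k items by Single Transferable Vote: a candidate reaching
    the Droop quota is elected and its surplus transfers onward at
    reduced ballot weight; otherwise the weakest candidate is
    eliminated and its ballots move to their next preference."""
    quota = len(ballots) // (k + 1) + 1
    weighted = [(list(b), 1.0) for b in ballots]   # (preferences, weight)
    elected, eliminated = [], set()

    def current_choice(prefs):
        return next((c for c in prefs
                     if c not in elected and c not in eliminated), None)

    while len(elected) < k:
        tally = Counter()
        for prefs, w in weighted:
            c = current_choice(prefs)
            if c is not None:
                tally[c] += w
        if not tally:
            break                                   # ran out of candidates
        top, votes = max(tally.items(), key=lambda kv: kv[1])
        if votes >= quota:
            keep = (votes - quota) / votes          # surplus fraction
            for i, (prefs, w) in enumerate(weighted):
                if current_choice(prefs) == top:
                    weighted[i] = (prefs, w * keep)
            elected.append(top)
        else:
            loser = min(tally.items(), key=lambda kv: kv[1])[0]
            eliminated.add(loser)
    return elected
```

Because each ballot's total influence is capped at one vote, a small hyper-active group promoting one item cannot also dominate the remaining seats, which is the fairness property the paper exploits.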
How to Retrain Recommender System? A Sequential Meta-Learning Method
Practical recommender systems need to be periodically retrained to refresh the
model with new interaction data. To pursue high model fidelity, it is usually
desirable to retrain the model on both historical and new data, since it can
account for both long-term and short-term user preference. However, a full
model retraining could be very time-consuming and memory-costly, especially
when the scale of historical data is large. In this work, we study the model
retraining mechanism for recommender systems, a topic of high practical value
that has been relatively little explored in the research community.
Our first belief is that retraining the model on historical data is
unnecessary, since the model has been trained on it before. Nevertheless,
normal training on new data only may easily cause overfitting and forgetting
issues, since the new data is of a smaller scale and contains less information
on long-term user preference. To address this dilemma, we propose a new
training method, aiming to abandon the historical data during retraining
through learning to transfer the past training experience. Specifically, we
design a neural network-based transfer component, which transforms the old
model to a new model that is tailored for future recommendations. To learn the
transfer component well, we optimize the "future performance" -- i.e., the
recommendation accuracy evaluated in the next time period. Our Sequential
Meta-Learning (SML) method offers a general training paradigm that is applicable
to any differentiable model. We demonstrate SML on matrix factorization and
conduct experiments on two real-world datasets. Empirical results show that SML
not only achieves significant speed-up, but also outperforms the full model
retraining in recommendation accuracy, validating the effectiveness of our
proposals. We release our code at: https://github.com/zyang1580/SML.
Comment: Appears in SIGIR 2020.
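The learning-to-transfer idea can be illustrated with a deliberately simplified setup: a linear model per period and a single learned interpolation weight standing in for the paper's neural transfer component. All names, shapes, and the synthetic data here are hypothetical; the point is only the meta-update against next-period ("future") loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)

def make_period(n=200):
    """Simulate one time period of interaction data."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

alpha = 0.5      # transfer weight: how much of the old model to keep
lr = 0.5
w_model = fit(*make_period())
for t in range(10):
    X_new, y_new = make_period(50)     # new data only, no historical replay
    w_new = fit(X_new, y_new)
    X_fut, y_fut = make_period()       # next period = "future performance"
    w_mix = alpha * w_model + (1 - alpha) * w_new
    # meta-gradient of the future loss w.r.t. the transfer weight
    grad = 2 * np.mean((X_fut @ w_mix - y_fut) * (X_fut @ (w_model - w_new)))
    alpha = min(max(alpha - lr * grad, 0.0), 1.0)
    w_model = alpha * w_model + (1 - alpha) * w_new
```

The transfer component (here a scalar alpha; in the paper, a neural network over the old model's parameters) is the only thing trained on future data, so retraining never touches the historical interactions themselves.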
Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation
News recommendation is critical for personalized news access. Most existing
news recommendation methods rely on centralized storage of users' historical
news click behavior data, which may lead to privacy concerns and hazards.
Federated Learning is a privacy-preserving framework for multiple clients to
collaboratively train models without sharing their private data. However, the
computation and communication costs of directly learning many existing news
recommendation models in a federated way are unacceptable for user clients. In
this paper, we propose an efficient federated learning framework for
privacy-preserving news recommendation. Instead of training and communicating
the whole model, we decompose the news recommendation model into a large news
model maintained in the server and a light-weight user model shared on both
server and clients, where news representations and user model are communicated
between server and clients. More specifically, the clients request the user
model and news representations from the server, and send their locally computed
gradients to the server for aggregation. The server updates its global user
model with the aggregated gradients, and further updates its news model to
infer updated news representations. Since the local gradients may contain
private information, we propose a secure aggregation method to aggregate
gradients in a privacy-preserving way. Experiments on two real-world datasets
show that our method can reduce the computation and communication cost on
clients while keeping promising model performance.
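The decomposed federated round described above can be sketched roughly as follows, with hypothetical shapes, a dot-product scorer, and plain gradient averaging in place of the paper's neural encoders and secure aggregation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_news, dim = 20, 8
news_emb = rng.normal(size=(n_news, dim))   # heavy news model stays on the server
user_w = rng.normal(size=(dim, dim))        # light-weight user model, shared

def client_update(clicks, pos, neg, W):
    """One client: build a user vector from its clicked news, score a
    positive against a negative candidate, and return the local
    gradient of a BPR-style loss w.r.t. the shared user model."""
    profile = news_emb[clicks].mean(axis=0)
    u = profile @ W                          # user representation
    diff = news_emb[pos] - news_emb[neg]
    margin = float(u @ diff)
    loss = np.log1p(np.exp(-margin))
    grad = -(1 / (1 + np.exp(margin))) * np.outer(profile, diff)
    return grad, loss

# each client holds (clicked news ids, positive candidate, negative candidate)
clients = [([0, 1, 2], 3, 4), ([5, 6], 7, 8), ([1, 5], 2, 9)]

def round_loss(W):
    return sum(client_update(*c, W)[1] for c in clients) / len(clients)

before = round_loss(user_w)
for _ in range(20):                          # federated rounds
    grads = [client_update(*c, user_w)[0] for c in clients]
    user_w -= 0.01 * sum(grads) / len(grads) # server aggregation + update
after = round_loss(user_w)
```

Only the small user model and the gradients over it cross the network; the large news model never leaves the server, which is where the computation and communication savings come from.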
Going Beyond Local: Global Graph-Enhanced Personalized News Recommendations
Precisely recommending candidate news articles to users has always been a
core challenge for personalized news recommendation systems. Most recent works
primarily focus on using advanced natural language processing techniques to
extract semantic information from rich textual data, employing content-based
methods derived from local historical news. However, this approach lacks a
global perspective, failing to account for users' hidden motivations and
behaviors beyond semantic information. To address this challenge, we propose a
novel model called GLORY (Global-LOcal news Recommendation sYstem), which
combines global representations learned from other users with local
representations to enhance personalized recommendation systems. We accomplish
this by constructing a Global-aware Historical News Encoder, which builds a
global news graph and employs gated graph neural networks to enrich news
representations, which are then fused by a historical news aggregator.
Similarly, we extend this approach to a Global Candidate News
Encoder, utilizing a global entity graph and a candidate news aggregator to
enhance candidate news representation. Evaluation results on two public news
datasets demonstrate that our method outperforms existing approaches.
Furthermore, our model offers more diverse recommendations.
Comment: 10 pages, RecSys 2023.
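The global-graph enrichment step can be sketched in miniature: local, content-based news vectors are blended with messages from their neighbours in a co-read news graph through a scalar gate. The gate here is a hypothetical stand-in for the trained gated graph neural network in the paper; the graph and vectors are toy data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_news, d = 6, 4
h_local = rng.normal(size=(n_news, d))   # local (content-based) news vectors
# global news graph: edge between news items read consecutively by some user
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]
A = np.zeros((n_news, n_news))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1, keepdims=True).clip(min=1)

def gated_step(h):
    """One message-passing step with a scalar update gate: each news
    vector is blended with the mean of its graph neighbours, so
    behavior of *other* users flows into the representation."""
    msg = (A @ h) / deg
    gate = 1 / (1 + np.exp(-(h * msg).sum(axis=1, keepdims=True)))
    return gate * msg + (1 - gate) * h

h_global = gated_step(h_local)
```

This is the sense in which the model goes "beyond local": even two articles with unrelated text end up with related representations if many users read them together.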