How to Retrain Recommender System? A Sequential Meta-Learning Method
Practical recommender systems need to be periodically retrained to refresh the
model with new interaction data. To pursue high model fidelity, it is usually
desirable to retrain the model on both historical and new data, since it can
account for both long-term and short-term user preference. However, a full
model retraining could be very time-consuming and memory-costly, especially
when the scale of historical data is large. In this work, we study the model
retraining mechanism for recommender systems, a topic of high practical value
that has been relatively little explored in the research community.
Our first belief is that retraining the model on historical data is
unnecessary, since the model has been trained on it before. Nevertheless,
normal training on new data only may easily cause overfitting and forgetting
issues, since the new data is of a smaller scale and contains less information
on long-term user preference. To address this dilemma, we propose a new
training method, aiming to abandon the historical data during retraining
through learning to transfer the past training experience. Specifically, we
design a neural network-based transfer component, which transforms the old
model to a new model that is tailored for future recommendations. To learn the
transfer component well, we optimize the "future performance" -- i.e., the
recommendation accuracy evaluated in the next time period. Our Sequential
Meta-Learning (SML) method offers a general training paradigm that is applicable
to any differentiable model. We demonstrate SML on matrix factorization and
conduct experiments on two real-world datasets. Empirical results show that SML
not only achieves significant speed-up, but also outperforms the full model
retraining in recommendation accuracy, validating the effectiveness of our
proposals. We release our code at: https://github.com/zyang1580/SML
Comment: Appears in SIGIR 202
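The retraining idea in this abstract can be caricatured in a few lines. Below, a toy linear model stands in for matrix factorization, and a single scalar gate stands in for SML's neural transfer component; the gate is meta-trained by grid search on the *next* period's loss, mirroring the "future performance" objective. All names, the linear model, and the grid-search meta-step are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # latent dimension of the toy model

def mse_loss(W, X, y):
    """Squared error of a linear model y ~ X @ W (stand-in for a recommender)."""
    return float(np.mean((X @ W - y) ** 2))

def grad(W, X, y):
    return 2 * X.T @ (X @ W - y) / len(y)

# Hypothetical data stream: three time periods of (features, ratings).
W_true = rng.normal(size=d)
periods = []
for _ in range(3):
    X = rng.normal(size=(200, d))
    periods.append((X, X @ W_true + 0.1 * rng.normal(size=200)))

# Period 0: the "old" model, trained on historical data only.
X0, y0 = periods[0]
W_old = np.zeros(d)
for _ in range(200):
    W_old -= 0.1 * grad(W_old, X0, y0)

# Period 1: fine-tune on new data only (the overfitting/forgetting risk).
X1, y1 = periods[1]
W_new = W_old.copy()
for _ in range(50):
    W_new -= 0.1 * grad(W_new, X1, y1)

# Transfer component: a scalar gate combining old and new parameters.
# (SML learns a CNN-based transfer network; this is a deliberately tiny
# stand-in.)  The gate is chosen by optimizing next-period accuracy.
X2, y2 = periods[2]
alphas = np.linspace(0.0, 1.0, 21)
future_losses = [mse_loss(a * W_new + (1 - a) * W_old, X2, y2) for a in alphas]
alpha = alphas[int(np.argmin(future_losses))]
W_transfer = alpha * W_new + (1 - alpha) * W_old
```

Because the grid includes alpha = 0 (keep the old model) and alpha = 1 (keep the fine-tuned model), the transferred model can never do worse on the next period than either endpoint, which is the intuition behind optimizing future performance.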
Critically Examining the Claimed Value of Convolutions over User-Item Embedding Maps for Recommender Systems
In recent years, algorithm research in the area of recommender systems has
shifted from matrix factorization techniques and their latent factor models to
neural approaches. However, given the proven power of latent factor models,
some newer neural approaches incorporate them within more complex network
architectures. One specific idea, recently put forward by several researchers,
is to consider potential correlations between the latent factors, i.e.,
embeddings, by applying convolutions over the user-item interaction map.
However, contrary to what is claimed in these articles, such interaction maps
do not share the properties of images where Convolutional Neural Networks
(CNNs) are particularly useful. In this work, we show through analytical
considerations and empirical evaluations that the claimed gains reported in the
literature cannot be attributed to the ability of CNNs to model embedding
correlations, as argued in the original papers. Moreover, additional
performance evaluations show that all of the examined recent CNN-based models
are outperformed by existing non-neural machine learning techniques or
traditional nearest-neighbor approaches. On a more general level, our work
points to major methodological issues in recommender systems research.
Comment: Source code available here:
https://github.com/MaurizioFD/RecSys2019_DeepLearning_Evaluatio
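The construction this paper critiques can be made concrete: the "interaction map" is the outer product of a user embedding and an item embedding, treated as a single-channel d x d image and fed to a CNN. The numpy-only sketch below (hypothetical names; a minimal valid cross-correlation in place of a CNN layer) builds that map and convolves it. Note that, unlike a natural image, the map is rank one and its rows/columns can be permuted by re-indexing latent dimensions, so the translation-invariance assumption behind CNNs does not obviously apply.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # embedding size

# User and item embeddings (e.g., from a latent factor model).
p_u = rng.normal(size=d)
q_i = rng.normal(size=d)

# The "interaction map" these models convolve over: the outer product
# of the two embeddings, i.e., M[a, b] = p_u[a] * q_i[b].
M = np.outer(p_u, q_i)

def conv2d_valid(image, kernel):
    """Minimal 2-D valid cross-correlation over a single-channel map."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = rng.normal(size=(2, 2))
feature_map = conv2d_valid(M, kernel)  # shape (d-1, d-1)
```

Every 2x2 patch of M factorizes as an outer product of two consecutive embedding entries, which is structurally very different from the local texture statistics CNN kernels are designed to exploit; this is the crux of the analytical argument in the paper.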
Modeling Embedding Dimension Correlations via Convolutional Neural Collaborative Filtering
DOI: 10.1145/3357154. ACM Transactions on Information Systems, 37.