Deep Learning based Recommender System: A Survey and New Perspectives
With the ever-growing volume of online information, recommender systems have
been an effective strategy to overcome information overload. The utility of
recommender systems cannot be overstated, given their widespread adoption in
many web applications and their potential to ameliorate many problems related
to over-choice. In recent years, deep learning has garnered considerable
interest in many research fields such as computer vision and natural language
processing, owing not only to its stellar performance but also to the
attractive property of learning feature representations from scratch. The
influence of deep learning is also pervasive, with recent demonstrations of
its effectiveness when applied to information retrieval and recommender
systems research. Evidently, the field of deep learning in recommender
systems is flourishing. This article aims to provide a comprehensive review
of recent research efforts on deep learning based recommender systems. More
concretely, we devise a taxonomy of deep learning based recommendation
models, along with a comprehensive summary of the state of the art. Finally,
we expand on current trends and provide new perspectives pertaining to this
exciting new development in the field.
Comment: The paper has been accepted by ACM Computing Surveys.
https://doi.acm.org/10.1145/328502
Hierarchical Attention Network for Visually-aware Food Recommendation
Food recommender systems play an important role in helping users identify the
food they want to eat. Deciding what food to eat is a complex and
multi-faceted process influenced by many factors, such as a recipe's
ingredients and appearance, the user's personal food preferences, and various
contexts like what was eaten in past meals. In this work, we formulate the
food recommendation problem as predicting user preference on recipes based on
three key factors that determine a user's choice of food, namely, 1) the
user's (and other users') history; 2) the ingredients of a recipe; and 3) the
descriptive image of a recipe. To address this challenging problem, we
develop a dedicated neural network based solution, Hierarchical Attention
based Food Recommendation (HAFR), which is capable of: 1) capturing the
collaborative filtering effect, i.e., what similar users tend to eat; 2)
inferring a user's preference at the ingredient level; and 3) learning user
preference from recipes' visual images. To evaluate our proposed method, we
construct a large-scale dataset consisting of millions of ratings from
AllRecipes.com. Extensive experiments show that our method outperforms
several competing recommender solutions such as Factorization Machine and
Visual Bayesian Personalized Ranking, with an average improvement of 12%,
offering promising results in predicting user preference for food. Code and
dataset will be released upon acceptance.
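As a minimal sketch (not the authors' implementation), the ingredient-level
preference modeling described above can be pictured as attention pooling:
score each ingredient embedding against a user vector, then combine the
ingredients by their softmax weights. All names and dimensions here are
illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_ingredients(user_vec, ingredient_vecs):
    """Ingredient-level attention: weight each ingredient embedding by its
    affinity with the user vector, then pool into one recipe representation."""
    scores = ingredient_vecs @ user_vec   # (n_ingredients,) affinity scores
    weights = softmax(scores)             # attention weights, sum to 1
    return weights @ ingredient_vecs      # weighted recipe representation

# toy example: 3 ingredients with 4-dim embeddings
rng = np.random.default_rng(0)
user = rng.normal(size=4)
ingredients = rng.normal(size=(3, 4))
recipe_repr = attend_ingredients(user, ingredients)
```

In the full hierarchical model, a second attention layer would similarly fuse
the ingredient-level representation with image and history signals.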
Adversarial Training Towards Robust Multimedia Recommender System
With the prevalence of multimedia content on the Web, recommender solutions
that can effectively leverage the rich signal in multimedia data are urgently
needed. Owing to the success of deep neural networks in representation
learning, recent advances in multimedia recommendation have largely focused
on exploring deep learning methods to improve recommendation accuracy. To
date, however, there has been little effort to investigate the robustness of
multimedia representations and their impact on the performance of multimedia
recommendation.
In this paper, we shed light on the robustness of multimedia recommender
systems. Using a state-of-the-art recommendation framework and deep image
features, we demonstrate that the overall system is not robust: a small (but
purposeful) perturbation of the input image severely decreases the
recommendation accuracy. This reveals a possible weakness of multimedia
recommender systems in predicting user preference and, more importantly, the
potential for improvement by enhancing their robustness. To this end, we
propose a novel solution named Adversarial Multimedia Recommendation (AMR),
which leads to a more robust multimedia recommender model through adversarial
learning. The idea is to train the model to defend against an adversary that
adds perturbations to the target image with the purpose of decreasing the
model's accuracy. We conduct experiments on two representative multimedia
recommendation tasks, namely, image recommendation and visually-aware product
recommendation. Extensive results verify the positive effect of adversarial
learning and demonstrate the effectiveness of our AMR method. Source code is
available at https://github.com/duxy-me/AMR.
Comment: TKD
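To make the adversary concrete, here is a hedged sketch of a gradient-sign
perturbation on an image feature vector under a toy linear scoring model
(`score = u·(W·x)`); this is an illustrative stand-in, not AMR's actual
training procedure, and the gradient is written in closed form because the
model is linear.

```python
import numpy as np

def score(u, W, x):
    """Preference score of user u for an item with image feature x."""
    return u @ (W @ x)

def fgsm_perturb(u, W, x, eps=0.05):
    """Adversary: nudge the image feature in the direction that most
    decreases the score. For the linear model, d(-score)/dx = -(W.T @ u),
    so the sign of that gradient gives the worst-case L-infinity step."""
    grad = -(W.T @ u)
    return x + eps * np.sign(grad)

rng = np.random.default_rng(1)
u = rng.normal(size=8)          # user embedding
W = rng.normal(size=(8, 16))    # feature-to-embedding map
x = rng.normal(size=16)         # clean deep image feature
x_adv = fgsm_perturb(u, W, x)   # perturbed feature scores strictly lower
```

Adversarial training in this spirit would then minimize the model's loss on
both `x` and `x_adv`, so small purposeful perturbations no longer collapse
the recommendation accuracy.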
One for All, All for One: Learning and Transferring User Embeddings for Cross-Domain Recommendation
Cross-domain recommendation is an important method for improving recommender
system performance, especially when observations in target domains are
sparse. However, most existing techniques focus on single-target or
dual-target cross-domain recommendation (CDR) and are hard to generalize to
CDR with multiple target domains. In addition, the negative transfer problem
is prevalent in CDR: recommendation performance in a target domain may not
always be enhanced by knowledge learned from a source domain, especially when
the source domain has sparse data. In this study, we propose CAT-ART, a
multi-target CDR method that learns to improve recommendations in all
participating domains through representation learning and embedding transfer.
Our method consists of two parts: a self-supervised Contrastive AuToencoder
(CAT) framework that generates global user embeddings based on information
from all participating domains, and an Attention-based Representation
Transfer (ART) framework that transfers domain-specific user embeddings from
other domains to assist with target-domain recommendation. CAT-ART boosts
recommendation performance in any target domain through the combined use of
the learned global user representation and knowledge transferred from other
domains, in addition to the original user embedding in the target domain. We
conducted extensive experiments on a collected real-world CDR dataset
spanning 5 domains and involving a million users. Experimental results
demonstrate the superiority of the proposed method over a range of prior
methods. We further conducted ablation studies to verify the effectiveness of
the proposed components. Our collected dataset will be open-sourced to
facilitate future research in the field of multi-domain recommender systems
and user modeling.
Comment: 9 pages, accepted by WSDM 202
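The global-embedding idea can be sketched with a simple self-supervised
objective: mask one domain's user embedding, encode the remaining domains
into a global user vector, and penalize reconstruction error on all domains.
Note this masked-reconstruction loss is only a stand-in for intuition; the
paper's actual CAT objective is contrastive, and every name and shape below
is an assumption.

```python
import numpy as np

def masked_reconstruction_loss(domain_embs, enc, dec, mask_idx):
    """Zero out one domain's user embedding, encode the rest into a global
    user code, and measure how well all domain embeddings (including the
    masked one) can be reconstructed from that code."""
    masked = domain_embs.copy()
    masked[mask_idx] = 0.0                          # hide one domain
    z = enc @ masked.reshape(-1)                    # global user embedding
    recon = (dec @ z).reshape(domain_embs.shape)    # reconstruct all domains
    return float(((recon - domain_embs) ** 2).mean())

# toy setup: 5 domains, 8-dim per-domain embeddings, 16-dim global code
rng = np.random.default_rng(0)
domain_embs = rng.normal(size=(5, 8))
enc = rng.normal(size=(16, 40))   # flattened 5*8 -> global code
dec = rng.normal(size=(40, 16))   # global code -> flattened 5*8
loss = masked_reconstruction_loss(domain_embs, enc, dec, mask_idx=2)
```

A global code trained this way carries cross-domain information even for a
user who is sparse in the target domain, which is the property ART then
exploits when transferring domain-specific embeddings.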
A Survey on Cross-domain Recommendation: Taxonomies, Methods, and Future Directions
Traditional recommendation systems face two long-standing obstacles, namely,
data sparsity and cold-start problems, which have promoted the emergence and
development of Cross-Domain Recommendation (CDR). The core idea of CDR is to
leverage information collected from other domains to alleviate these two
problems in one domain. Over the last decade, much effort has been devoted to
cross-domain recommendation. Recently, with the development of deep learning
and neural networks, a large number of methods have emerged. However, there
are only a limited number of systematic surveys on CDR, especially regarding
the latest proposed methods as well as the recommendation scenarios and
recommendation tasks they address. In this survey paper, we first propose a
two-level taxonomy of cross-domain recommendation that classifies different
recommendation scenarios and recommendation tasks. We then introduce and
summarize existing cross-domain recommendation approaches under different
recommendation scenarios in a structured manner. We also organize commonly
used datasets. We conclude this survey by providing several potential
research directions in this field.
TransRec: Learning Transferable Recommendation from Mixture-of-Modality Feedback
Learning large-scale pre-trained models on broad-ranging data and then
transferring them to a wide range of target tasks has become the de facto
paradigm in many machine learning (ML) communities. Such big models are not
only strong performers in practice but also offer a promising way to break
out of task-specific modeling restrictions, thereby enabling task-agnostic
and unified ML systems. However, this popular paradigm remains largely
unexplored in the recommender systems (RS) community. A critical issue is
that standard recommendation models are primarily built on categorical
identity features: users and the items they interact with are represented by
unique IDs, which are generally not shareable across different systems or
platforms. To pursue transferable recommendation, we propose studying
pre-trained RS models in a novel scenario where a user's interaction feedback
involves mixture-of-modality (MoM) items, e.g., text and images. We then
present TransRec, a very simple modification of the popular ID-based RS
framework. TransRec learns directly from the raw features of MoM items in an
end-to-end manner and thus enables effective transfer learning under various
scenarios without relying on overlapping users or items. We empirically study
the transfer ability of TransRec across four different real-world
recommendation settings. We also examine its effects when scaling the source
and target data sizes. Our results suggest that learning neural
recommendation models from MoM feedback provides a promising way to realize
universal RS
- …
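The core shift described in the TransRec abstract, representing items by raw
modality features instead of platform-specific IDs, can be sketched as
follows. The projection matrices stand in for real text/image encoders, and
all names, dimensions, and the mean-pooling choice are illustrative
assumptions rather than the paper's architecture.

```python
import numpy as np

D = 8  # shared embedding dimension across modalities

def encode_item(item, text_proj, img_proj):
    """Embed an item from its raw modality features (no ID lookup), so the
    encoder transfers across platforms with disjoint ID spaces."""
    proj = text_proj if item["modality"] == "text" else img_proj
    return proj @ item["features"]

def user_from_history(history, text_proj, img_proj):
    """User representation as the mean of mixture-of-modality item
    embeddings from the interaction history."""
    embs = [encode_item(it, text_proj, img_proj) for it in history]
    return np.mean(embs, axis=0)

rng = np.random.default_rng(0)
text_proj = rng.normal(size=(D, 32))   # stand-in text encoder (32-dim input)
img_proj = rng.normal(size=(D, 64))    # stand-in image encoder (64-dim input)
history = [
    {"modality": "text", "features": rng.normal(size=32)},
    {"modality": "image", "features": rng.normal(size=64)},
]
user_vec = user_from_history(history, text_proj, img_proj)
```

Because nothing here depends on an ID vocabulary, the same encoders can in
principle be pre-trained on one platform's feedback and reused on another,
which is the transfer setting the abstract motivates.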