Adversarial Training Towards Robust Multimedia Recommender System
With the prevalence of multimedia content on the Web, recommender solutions
that can effectively leverage the rich signals in multimedia data are urgently
needed. Owing to the success of deep neural networks in representation
learning, recent advances in multimedia recommendation have largely focused on
exploring deep learning methods to improve recommendation accuracy. To date,
however, there has been little effort to investigate the robustness of
multimedia representations and their impact on the performance of multimedia
recommendation.
In this paper, we shed light on the robustness of multimedia recommender
systems. Using a state-of-the-art recommendation framework and deep image
features, we demonstrate that the overall system is not robust: a small (but
purposeful) perturbation on the input image severely decreases the
recommendation accuracy. This reveals a possible weakness of multimedia
recommender systems in predicting user preference and, more importantly, the
potential for improvement by enhancing their robustness. To this end, we propose a
novel solution named Adversarial Multimedia Recommendation (AMR), which can
lead to a more robust multimedia recommender model by using adversarial
learning. The idea is to train the model to defend against an adversary, which adds
perturbations to the target image with the purpose of decreasing the model's
accuracy. We conduct experiments on two representative multimedia
recommendation tasks, namely, image recommendation and visually-aware product
recommendation. Extensive results verify the positive effect of adversarial
learning and demonstrate the effectiveness of our AMR method. Source codes are
available at https://github.com/duxy-me/AMR.
Comment: TKD
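The core idea above, perturbing the image feature in the worst-case direction and then training against that perturbation, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear scoring function, dimensions, and epsilon are all assumptions.

```python
import numpy as np

# Hedged sketch of AMR's core idea: an adversary adds a small perturbation
# to an item's deep image feature to decrease the preference score, and the
# recommender is then trained to stay accurate under that perturbation.

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (assumed)
user = rng.normal(size=d)               # user embedding
img_feat = rng.normal(size=d)           # deep image feature of the item
W = rng.normal(size=(d, d)) * 0.1       # assumed projection of the image feature

def score(u, f, delta):
    """Preference score from the user embedding and a (perturbed) image feature."""
    return u @ (W @ (f + delta))

# Adversary: step along the gradient direction that *decreases* the score.
# For this linear score, d(score)/d(delta) = W.T @ u.
grad = W.T @ user
eps = 0.05
delta_adv = -eps * grad / (np.linalg.norm(grad) + 1e-12)

clean = score(user, img_feat, np.zeros(d))
attacked = score(user, img_feat, delta_adv)
assert attacked < clean                 # the purposeful perturbation hurts

# Adversarial training then also minimizes the recommendation loss at the
# perturbed input, e.g. L = L_rec(f) + lambda * L_rec(f + delta_adv).
```

In a real system the gradient would come from backpropagation through the full recommender rather than a closed form, but the min-max structure is the same.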
MultiCBR: Multi-view Contrastive Learning for Bundle Recommendation
Bundle recommendation seeks to recommend a bundle of related items to users
to improve both user experience and platform profits. Existing bundle
recommendation models have progressed from capturing only user-bundle
interactions to the modeling of multiple relations among users, bundles and
items. CrossCBR, in particular, incorporates cross-view contrastive learning
into a two-view preference learning framework, significantly improving SOTA
performance. It does, however, have two limitations: 1) the two-view
formulation does not fully exploit all the heterogeneous relations among users,
bundles and items; and 2) the "early contrast and late fusion" framework is
less effective in capturing user preference and is difficult to generalize to
multiple views. In this paper, we present MultiCBR, a novel Multi-view
Contrastive learning framework for Bundle Recommendation. First, we devise a
multi-view representation learning framework capable of capturing all the
user-bundle, user-item and bundle-item relations, especially better utilizing
the bundle-item affiliations to enhance sparse bundles' representations.
Second, we innovatively adopt an "early fusion and late contrast" design that
first fuses the multi-view representations before performing self-supervised
contrastive learning. In comparison to existing approaches, our framework
reverses the order of fusion and contrast, introducing the following
advantages: 1) our framework is capable of modeling both cross-view and ego-view
preferences, allowing us to achieve enhanced user preference modeling; and 2)
instead of requiring a quadratic number of cross-view contrastive losses, we
only require two self-supervised contrastive losses, resulting in minimal extra
costs. Experimental results on three public datasets indicate that our method
outperforms SOTA methods.
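The "early fusion and late contrast" design can be illustrated with a small sketch: fuse each user's per-view embeddings into one vector first, then apply a single InfoNCE-style self-supervised loss on the fused vectors. The fusion rule (mean), shapes, and temperature are assumptions for illustration, not MultiCBR's exact design.

```python
import numpy as np

# Two "augmented" sets of per-view embeddings for each user (e.g. from
# user-bundle, user-item, and bundle-item views); the second is a lightly
# perturbed copy standing in for a graph-augmented pass.
rng = np.random.default_rng(1)
n_users, n_views, d = 4, 3, 8
views_a = rng.normal(size=(n_users, n_views, d))
views_b = views_a + 0.01 * rng.normal(size=(n_users, n_views, d))

def fuse(views):
    """Early fusion: average the view-specific embeddings, then L2-normalize."""
    z = views.mean(axis=1)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(za, zb, tau=0.2):
    """Late contrast: ONE loss on fused vectors, instead of a quadratic
    number of pairwise cross-view losses."""
    sim = za @ zb.T / tau                       # (n_users, n_users) similarities
    sim -= sim.max(axis=1, keepdims=True)       # numerical stability
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))             # matched pairs are positives

loss = info_nce(fuse(views_a), fuse(views_b))
```

Because fusion happens before contrast, the loss sees a representation that already mixes cross-view and ego-view information, which is the advantage the abstract claims.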
Group Identification via Transitional Hypergraph Convolution with Cross-view Self-supervised Learning
With the proliferation of social media, a growing number of users search for
and join group activities in their daily life. This motivates the study of the
group identification (GI) task, i.e., recommending groups to users. The major
challenge in this task is predicting users' preferences for groups based not
only on their previous group participation but also on their interests in
items. Although recent developments in Graph Neural
Networks (GNNs) enable the embedding of multiple types of objects in
graph-based recommender systems, they fail to address this GI problem
comprehensively. In this paper, we propose a novel framework named Group
Identification via Transitional Hypergraph Convolution with Graph
Self-supervised Learning (GTGS). We devise a novel transitional hypergraph
convolution layer to leverage users' preferences for items as prior knowledge
when seeking their group preferences. To construct comprehensive user/group
representations for the GI task, we design a cross-view self-supervised learning
objective to encourage intrinsic consistency between item and group preferences
for each user, and a group-based regularization to enhance the distinction among
group embeddings. Experimental results on three benchmark datasets verify the
superiority of GTGS. Additional detailed investigations are conducted to
demonstrate the effectiveness of the proposed framework.
Comment: 11 pages. Accepted by CIKM'2
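A hypergraph convolution of the kind GTGS builds on can be sketched in a few lines: users are nodes, groups are hyperedges, and each layer propagates embeddings through the user-group incidence matrix. Seeding the user embeddings from user-item interactions mimics the "transitional" use of item preferences as prior knowledge; all matrices, sizes, and the normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_groups, n_items, d = 5, 3, 6, 4

H = (rng.random((n_users, n_groups)) < 0.5).astype(float)  # user-group membership
R = (rng.random((n_users, n_items)) < 0.4).astype(float)   # user-item interactions
item_emb = rng.normal(size=(n_items, d))

# Item preferences serve as prior knowledge for the user embeddings.
X = R @ item_emb

def hypergraph_conv(X, H):
    """One normalized propagation step X' = D_v^{-1} H D_e^{-1} H^T X:
    aggregate member users into each group (hyperedge), then redistribute
    the group embeddings back to their member users."""
    d_e = np.maximum(H.sum(axis=0), 1.0)       # hyperedge (group) degrees
    d_v = np.maximum(H.sum(axis=1), 1.0)       # node (user) degrees
    group_emb = (H / d_e).T @ X                # users -> groups
    return (H / d_v[:, None]) @ group_emb      # groups -> users

X1 = hypergraph_conv(X, H)                     # item-informed user embeddings
```

The intermediate `group_emb` is what a GI model would score against users; stacking such layers lets item-level signal flow into group-level preferences.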
Enhancing Item-level Bundle Representation for Bundle Recommendation
Bundle recommendation approaches offer users a set of related items on a
particular topic. The current state-of-the-art (SOTA) method utilizes
contrastive learning to learn representations at both the bundle and item
levels. However, due to the inherent difference between the bundle-level and
item-level preferences, the item-level representations may not receive
sufficient information from the bundle affiliations to make accurate
predictions. In this paper, we propose a novel approach, EBRec, short for
Enhanced Bundle Recommendation, which incorporates two enhanced modules to
improve item-level bundle representations. First, we propose to incorporate
bundle-user-item (B-U-I) high-order correlations to capture more collaborative
information, thus enhancing the previous bundle representation, which relies
solely on bundle-item affiliation information.
Second, we further enhance the B-U-I correlations by augmenting the observed
user-item interactions with interactions generated from pre-trained models,
thus improving the item-level bundle representations. We conduct extensive
experiments on three public datasets, and the results justify the effectiveness
of our approach as well as the two core modules. Codes and datasets are
available at https://github.com/answermycode/EBRec
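The B-U-I idea can be sketched as follows: rather than building a bundle's item-level representation only from its own item affiliations, also aggregate the items of the users who interacted with that bundle. The matrices, sizes, and mean-style aggregation are illustrative assumptions rather than EBRec's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bundles, n_users, n_items, d = 3, 5, 8, 4

BI = (rng.random((n_bundles, n_items)) < 0.3).astype(float)  # bundle-item affiliations
UB = (rng.random((n_users, n_bundles)) < 0.4).astype(float)  # user-bundle interactions
UI = (rng.random((n_users, n_items)) < 0.3).astype(float)    # user-item interactions
item_emb = rng.normal(size=(n_items, d))

def row_norm(M):
    """Normalize each row to sum to 1 (rows with no entries are left as zeros)."""
    return M / np.maximum(M.sum(axis=1, keepdims=True), 1.0)

# Baseline: bundle representation from its own item affiliations only.
direct = row_norm(BI) @ item_emb

# B-U-I path: bundle -> its interacting users -> those users' items.
# (EBRec additionally augments UI with interactions generated by a
# pre-trained model before this aggregation.)
bui = row_norm(row_norm(UB).T @ UI) @ item_emb

enhanced = 0.5 * (direct + bui)   # assumed fusion weight for illustration
```

The point of the high-order path is that a sparsely affiliated bundle still receives signal from the broader interests of the users who engaged with it.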