Mixed Information Flow for Cross-domain Sequential Recommendations
Cross-domain sequential recommendation is the task of predicting the next item
that the user is most likely to interact with based on past sequential behavior
from multiple domains. One of the key challenges in cross-domain sequential
recommendation is to grasp and transfer the flow of information from multiple
domains so as to promote recommendations in all domains. Previous studies have
investigated the flow of behavioral information by exploring the connection
between items from different domains. The flow of knowledge (i.e., the
connection between knowledge from different domains) has so far been neglected.
In this paper, we propose a mixed information flow network for cross-domain
sequential recommendation to consider both the flow of behavioral information
and the flow of knowledge by incorporating a behavior transfer unit and a
knowledge transfer unit. The proposed mixed information flow network is able to
decide when cross-domain information should be used and, if so, which
cross-domain information should be used to enrich the sequence representation
according to users' current preferences. Extensive experiments conducted on
four e-commerce datasets demonstrate that mixed information flow network is
able to further improve recommendation performance in different domains by
modeling mixed information flow.
Comment: 26 pages, 6 figures, TKDD journal, 7 co-authors
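The behavior and knowledge transfer units described above hinge on a learned gate that decides when, and how much, cross-domain information should enrich the target-domain sequence representation. The abstract gives no equations, so the following is a minimal NumPy sketch of such a gating unit; the function name and parameter shapes are assumptions, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_cross_domain_fusion(h_target, h_source, W_gate, b_gate):
    """Gate deciding how much cross-domain information to mix in.

    h_target: (d,) sequence representation from the target domain
    h_source: (d,) transferred representation from the source domain
    W_gate:   (d, 2d) gate weights (hypothetical parameter shapes)
    """
    g = sigmoid(W_gate @ np.concatenate([h_target, h_source]) + b_gate)
    # Elementwise convex combination: g near 1 trusts the source domain,
    # g near 0 keeps the target-domain representation unchanged.
    return g * h_source + (1.0 - g) * h_target

rng = np.random.default_rng(0)
d = 4
h_t, h_s = rng.normal(size=d), rng.normal(size=d)
W, b = rng.normal(size=(d, 2 * d)), np.zeros(d)
fused = gated_cross_domain_fusion(h_t, h_s, W, b)
```

Because the gate output lies in (0, 1), each fused coordinate stays between the corresponding target and source values, which is the "decide when cross-domain information should be used" behavior in miniature.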
Review-Based Domain Disentanglement without Duplicate Users or Contexts for Cross-Domain Recommendation
Cross-domain recommendation has shown promising results in solving
data-sparsity and cold-start problems. Despite such progress, existing methods
focus on domain-shareable information (overlapped users or same contexts) for a
knowledge transfer, and they fail to generalize well without such requirements.
To deal with these problems, we suggest utilizing review texts that are general
to most e-commerce systems. Our model (named SER) uses three text analysis
modules, guided by a single domain discriminator for disentangled
representation learning. Here, we suggest a novel optimization strategy that
can enhance the quality of domain disentanglement and also suppress
detrimental information from the source domain. We also extend the encoding
network from a single to multiple domains, which has proven to be powerful for
review-based recommender systems. Extensive experiments and ablation studies
demonstrate that our method is efficient, robust, and scalable compared to the
state-of-the-art single- and cross-domain recommendation methods.
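Disentangling domain-shareable from domain-specific signals with a single domain discriminator is commonly implemented with a gradient-reversal layer; the sketch below shows that generic mechanism, not SER's exact optimization strategy, which the abstract does not specify. All shapes and names are illustrative assumptions.

```python
import numpy as np

def grad_reverse_backward(upstream_grad, lam=1.0):
    """Gradient-reversal layer: identity on the forward pass, but the
    gradient flowing back into the encoder is flipped and scaled, so the
    encoder learns features the domain discriminator cannot separate."""
    return -lam * np.asarray(upstream_grad)

def domain_discriminator(features, w, b):
    """Logistic probe predicting p(domain = source) from shared features."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

# Toy forward/backward step with hypothetical values.
feats = np.array([0.5, -1.0, 2.0])
w, b = np.array([0.1, 0.2, -0.3]), 0.0
p_src = domain_discriminator(feats, w, b)
# Gradient of the discriminator loss w.r.t. features, reversed before
# it reaches the (not shown) review-text encoder.
enc_grad = grad_reverse_backward((p_src - 1.0) * w, lam=0.5)
```

Training the discriminator to identify the domain while the reversed gradient pushes the encoder the other way is one standard route to the domain-invariant ("shareable") representation the abstract describes.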
One Model for All: Large Language Models are Domain-Agnostic Recommendation Systems
The purpose of sequential recommendation is to utilize the interaction
history of a user and predict the next item that the user is most likely to
interact with. While data sparsity and cold start are two challenges that most
recommender systems are still facing, many efforts are devoted to utilizing
data from other domains, called cross-domain methods. However, general
cross-domain methods explore the relationship between two domains by designing
complex model architecture, making it difficult to scale to multiple domains
and utilize more data. Moreover, existing recommendation systems use IDs to
represent items, which carry few transferable signals in cross-domain
scenarios, and users' cross-domain behaviors are also sparse, making it
challenging to learn item relationships across domains. These problems
hinder the application of multi-domain methods to sequential recommendation.
Recently, large language models (LLMs) have exhibited outstanding performance in world
knowledge learning from text corpora and general-purpose question answering.
Inspired by these successes, we propose a simple but effective framework for
domain-agnostic recommendation by exploiting the pre-trained LLMs (namely
LLM-Rec). We mix the user's behavior across different domains, and then
concatenate the title information of these items into a sentence and model the
user's behaviors with a pre-trained language model. We expect that by mixing
the user's behaviors across different domains, we can exploit the common
knowledge encoded in the pre-trained language model to alleviate the data
sparsity and cold-start problems. Furthermore, we are curious about
whether the latest technical advances in natural language processing (NLP) can
transfer to recommendation scenarios.
Comment: 10 pages, 7 figures, 6 tables
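The "mix behaviors across domains, then concatenate titles into a sentence" step can be sketched concretely as below; the prompt wording and the tuple layout are illustrative assumptions, not LLM-Rec's actual template.

```python
def build_llm_rec_input(mixed_history):
    """mixed_history: (timestamp, domain, item_title) tuples drawn from
    several domains. Behaviors are interleaved chronologically and the
    titles joined into one sentence to be fed to a pre-trained LM."""
    ordered = sorted(mixed_history, key=lambda t: t[0])
    titles = [title for _, _, title in ordered]
    return "The user has interacted with: " + "; ".join(titles) + "."

# Toy mixed-domain history (hypothetical items).
history = [
    (3, "books", "Dune"),
    (1, "movies", "Inception"),
    (2, "games", "Portal 2"),
]
sentence = build_llm_rec_input(history)
```

The resulting sentence would then be tokenized and encoded by a pre-trained language model, which is where the shared world knowledge across domains is expected to help with sparsity and cold start.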
Time Interval-enhanced Graph Neural Network for Shared-account Cross-domain Sequential Recommendation
Shared-account Cross-domain Sequential Recommendation (SCSR) task aims to
recommend the next item via leveraging the mixed user behaviors in multiple
domains. It is gaining immense research attention as more and more users tend
to sign up on different platforms and share accounts with others to access
domain-specific services. Existing works on SCSR mainly rely on mining
sequential patterns via Recurrent Neural Network (RNN)-based models, which
suffer from the following limitations: 1) RNN-based methods overwhelmingly
target discovering sequential dependencies in single-user behaviors. They are
not expressive enough to capture the relationships among multiple entities in
SCSR. 2) All existing methods bridge two domains via knowledge transfer in the
latent space, and ignore the explicit cross-domain graph structure. 3) No
existing studies consider the time interval information among items, which is
essential in sequential recommendation for characterizing different items
and learning discriminative representations for them. In this work, we propose
a new graph-based solution, namely TiDA-GCN, to address the above challenges.
Specifically, we first link users and items in each domain as a graph. Then, we
devise a domain-aware graph convolution network to learn user-specific node
representations. To fully account for users' domain-specific preferences on
items, two effective attention mechanisms are further developed to selectively
guide the message passing process. Moreover, to further enhance item- and
account-level representation learning, we incorporate the time interval into
the message passing, and design an account-aware self-attention module for
learning items' interactive characteristics. Experiments demonstrate the
superiority of our proposed method from various aspects.
Comment: 15 pages, 6 figures
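One simple way to "incorporate the time interval into the message passing" is to weight each neighbour's message by a decay over the elapsed time. The sketch below is an assumed illustration of such a term, not TiDA-GCN's actual propagation rule; the decay rate and shapes are made up.

```python
import numpy as np

def time_aware_aggregate(item_embs, neighbor_ids, intervals, decay=0.1):
    """Aggregate neighbour item embeddings; weights decay exponentially
    with the time interval (arbitrary units) between the interactions,
    then are normalized so recent neighbours dominate the message."""
    w = np.exp(-decay * np.asarray(intervals, dtype=float))
    w = w / w.sum()
    return w @ item_embs[np.asarray(neighbor_ids)]

embs = np.eye(3)  # three toy one-hot item embeddings
# Neighbour 0 was interacted with recently, neighbour 2 long ago.
msg = time_aware_aggregate(embs, [0, 2], intervals=[1.0, 10.0])
```

With one-hot embeddings the aggregated message directly exposes the attention-like weights: the recent neighbour contributes the larger share, which is the discriminative effect of interval information the abstract argues for.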
UFIN: Universal Feature Interaction Network for Multi-Domain Click-Through Rate Prediction
Click-Through Rate (CTR) prediction, which aims to estimate the probability
of a user clicking on an item, is a key task in online advertising. Numerous
existing CTR models concentrate on modeling feature interactions within a
single domain, which makes them inadequate for the multi-domain
recommendations required in real industrial scenarios. Some
recent approaches propose intricate architectures to enhance knowledge sharing
and augment model training across multiple domains. However, these approaches
encounter difficulties when being transferred to new recommendation domains,
owing to their reliance on the modeling of ID features (e.g., item id). To
address the above issue, we propose the Universal Feature Interaction Network
(UFIN) approach for CTR prediction. UFIN exploits textual data to learn
universal feature interactions that can be effectively transferred across
diverse domains. For learning universal feature representations, we regard the
text and feature as two different modalities and propose an encoder-decoder
network founded on a Large Language Model (LLM) to enforce the transfer of data
from the text modality to the feature modality. Building upon the above
foundation, we further develop a mixture-of-experts (MoE) enhanced adaptive
feature interaction model to learn transferable collaborative patterns across
multiple domains. Furthermore, we propose a multi-domain knowledge distillation
framework to enhance feature interaction learning. Based on the above methods,
UFIN can effectively bridge the semantic gap to learn common knowledge across
various domains, surpassing the constraints of ID-based models. Extensive
experiments conducted on eight datasets show the effectiveness of UFIN, in both
multi-domain and cross-platform settings. Our code is available at
https://github.com/RUCAIBox/UFIN
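The mixture-of-experts feature-interaction step can be pictured as a gate mixing several expert transforms of the text-derived feature vector. The toy sketch below assumes linear experts purely for illustration; UFIN's actual experts and gate are certainly richer (see the linked repository for the real implementation).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_interaction(x, expert_mats, gate_mat):
    """x: (d,) universal feature representation learned from text.
    Each expert applies its own transform; a softmax gate mixes their
    outputs so different domains can favour different experts."""
    gates = softmax(gate_mat @ x)                      # (k,) mixing weights
    outputs = np.stack([W @ x for W in expert_mats])   # (k, d) expert outputs
    return gates @ outputs                             # (d,) mixed output

rng = np.random.default_rng(1)
d, k = 4, 3
x = rng.normal(size=d)
experts = [rng.normal(size=(d, d)) for _ in range(k)]
gate = rng.normal(size=(k, d))
y = moe_interaction(x, experts, gate)
```

Because the gate is input-conditioned, each domain's inputs can route to the experts whose interaction patterns transfer best, which is the adaptive, transferable behavior the abstract claims.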
Transfer Meets Hybrid: A Synthetic Approach for Cross-Domain Collaborative Filtering with Text
Collaborative filtering (CF) is the key technique for recommender systems
(RSs). CF exploits user-item behavior interactions (e.g., clicks) only and
hence suffers from the data sparsity issue. One research thread is to integrate
auxiliary information such as product reviews and news titles, leading to
hybrid filtering methods. Another thread is to transfer knowledge from other
source domains such as improving the movie recommendation with the knowledge
from the book domain, leading to transfer learning methods. In real life,
no single service can satisfy all of a user's information needs, which motivates
us to exploit both auxiliary and source information for RSs in this paper. We
propose a novel neural model to smoothly enable Transfer Meeting Hybrid (TMH)
methods for cross-domain recommendation with unstructured text in an end-to-end
manner. TMH attentively extracts useful content from unstructured text via a
memory module and selectively transfers knowledge from a source domain via a
transfer network. On two real-world datasets, TMH shows better performance in
terms of three ranking metrics compared with various baselines. We conduct
thorough analyses to understand how the text content and transferred knowledge
help the proposed model.
Comment: 11 pages, 7 figures, a full version of the WWW 2019 short paper
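The step where TMH "attentively extracts useful content from unstructured text via a memory module" resembles a standard attention read over memory slots. The sketch below is that generic read, with all shapes assumed, rather than TMH's exact module.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, memory_keys, memory_vals):
    """Attention read: score each text-memory slot against the query,
    normalize the scores, and return the weighted sum of slot values."""
    attn = softmax(memory_keys @ query)   # (slots,) attention weights
    return attn, attn @ memory_vals       # weights and (d,) read vector

rng = np.random.default_rng(2)
slots, d = 5, 3
q = rng.normal(size=d)                    # e.g. user-item query state
keys = rng.normal(size=(slots, d))        # encoded review-text slots
vals = rng.normal(size=(slots, d))
weights, read = memory_read(q, keys, vals)
```

The read vector summarizing the most relevant text would then be combined with whatever the transfer network brings over from the source domain, matching the two-branch design in the abstract.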