Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation
The rapid proliferation of new users and items on the social web has
aggravated the gray-sheep user/long-tail item challenge in recommender systems.
Historically, cross-domain co-clustering methods have successfully leveraged
shared users and items across dense and sparse domains to improve inference
quality. However, they rely on shared rating data and cannot scale to multiple
sparse target domains (i.e., the one-to-many transfer setting). This, combined
with the increasing adoption of neural recommender architectures, motivates us
to develop scalable neural layer-transfer approaches for cross-domain learning.
Our key intuition is to guide neural collaborative filtering with
domain-invariant components shared across the dense and sparse domains,
improving the user and item representations learned in the sparse domains. We
leverage contextual invariances across domains to develop these shared modules,
and demonstrate that with user-item interaction context, we can learn-to-learn
informative representation spaces even with sparse interaction data. We show
the effectiveness and scalability of our approach on two public datasets and a
massive transaction dataset from Visa, a global payments technology company
(19% Item Recall, 3x faster vs. training separate models for each domain). Our
approach is applicable to both implicit and explicit feedback settings.
Comment: SIGIR 202
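To make the layer-transfer idea above concrete, here is a minimal sketch of one-to-many transfer, assuming a shared context module pretrained on the dense source domain and frozen, with only lightweight per-domain embeddings kept in each sparse target. This is not the paper's code; all names, matrices, and values are invented for illustration.

```python
# Hedged sketch (not the paper's implementation): a domain-invariant
# context transform, pretrained on the dense domain, is frozen and
# reused across every sparse target domain. Each target learns only
# its own user/item embeddings.

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Shared context transform, assumed pretrained on the dense domain.
SHARED_W = [[1.0, 0.2], [0.2, 1.0]]  # frozen across all targets

def score(user_emb, item_emb, shared_W=SHARED_W):
    """Score = u . (W i): the shared W guides every sparse domain."""
    return dot(user_emb, matvec(shared_W, item_emb))

# Each sparse target domain keeps only its own small embedding tables.
domains = {
    "books":  {"u": [1.0, 0.0], "i": [0.5, 0.5]},
    "movies": {"u": [0.0, 1.0], "i": [0.5, 0.5]},
}
scores = {d: score(p["u"], p["i"]) for d, p in domains.items()}
```

Because the shared transform is reused rather than retrained, adding a new sparse domain costs only its own embeddings, which is the scalability argument the abstract makes against training separate models per domain.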
Review-Based Domain Disentanglement without Duplicate Users or Contexts for Cross-Domain Recommendation
Cross-domain recommendation has shown promising results in solving
data-sparsity and cold-start problems. Despite such progress, existing methods
rely on domain-shareable information (overlapping users or shared contexts) for
knowledge transfer, and they fail to generalize well when such information is
unavailable.
To deal with these problems, we suggest utilizing review texts that are general
to most e-commerce systems. Our model (named SER) uses three text analysis
modules, guided by a single domain discriminator for disentangled
representation learning. We also propose a novel optimization strategy that
enhances the quality of domain disentanglement and suppresses detrimental
information from a source domain. In addition, we extend the encoding network
from a single domain to multiple domains, which has proven powerful for
review-based recommender systems. Extensive experiments and ablation studies
demonstrate that our method is efficient, robust, and scalable compared to
state-of-the-art single- and cross-domain recommendation methods.
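The disentanglement-via-discriminator idea above can be sketched with a gradient-reversal toy: a one-parameter "encoder" is updated with the reversed gradient of a single domain discriminator, so the shared feature stops carrying domain identity. This is not SER's actual code; the data, learning rate, and one-dimensional setup are all invented for illustration.

```python
# Hedged sketch (illustrative only): domain disentanglement with a
# single domain discriminator trained adversarially. The encoder
# weight w receives the REVERSED discriminator gradient.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "review" features from two domains: (scalar feature, domain label).
data = [(1.0, 0), (2.0, 0), (-1.0, 1), (-2.0, 1)]

w, v = 1.0, 1.0      # encoder weight, discriminator weight
lr, lam = 0.1, 1.0   # learning rate, gradient-reversal strength

for _ in range(100):
    for x, d in data:
        f = w * x                     # shared feature
        p = sigmoid(v * f)            # discriminator: P(domain = 1)
        g = p - d                     # dLoss/dlogit for cross-entropy
        v -= lr * g * f               # discriminator descends its loss
        w -= lr * (-lam) * g * v * x  # encoder: sign-flipped gradient

preds = [sigmoid(v * w * x) for x, _ in data]
```

The sign flip in the last update line is the whole trick: the discriminator tries to classify the domain while the encoder is pushed in the opposite direction, degrading the domain signal in the shared feature.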
One for All, All for One: Learning and Transferring User Embeddings for Cross-Domain Recommendation
Cross-domain recommendation is an important method to improve recommender
system performance, especially when observations in target domains are sparse.
However, most existing techniques focus on single-target or dual-target
cross-domain recommendation (CDR) and are hard to generalize to CDR with
multiple target domains. In addition, the negative transfer problem is
prevalent in CDR, where the recommendation performance in a target domain may
not always be enhanced by knowledge learned from a source domain, especially
when the source domain has sparse data. In this study, we propose CAT-ART, a
multi-target CDR method that learns to improve recommendations in all
participating domains through representation learning and embedding transfer.
Our method consists of two parts: a self-supervised Contrastive AuToencoder
(CAT) framework to generate global user embeddings based on information from
all participating domains, and an Attention-based Representation Transfer (ART)
framework which transfers domain-specific user embeddings from other domains to
assist with target domain recommendation. CAT-ART boosts the recommendation
performance in any target domain through the combined use of the learned global
user representation and knowledge transferred from other domains, in addition
to the original user embedding in the target domain. We conducted extensive
experiments on a collected real-world CDR dataset spanning 5 domains and
involving a million users. Experimental results demonstrate the superiority of
the proposed method over a range of prior methods. We further conducted ablation
studies to verify the effectiveness of the proposed components. Our collected
dataset will be open-sourced to facilitate future research in the field of
multi-domain recommender systems and user modeling.
Comment: 9 pages, accepted by WSDM 202
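The ART component described above (attention-based transfer of other-domain user embeddings) can be sketched as attention with the target-domain embedding as the query over the user's embeddings from other domains. This is a minimal sketch, not CAT-ART's released code; the function names and vectors are assumptions.

```python
# Hedged sketch (illustrative only): attention-based representation
# transfer. The target-domain user embedding attends over the same
# user's embeddings from other domains, and the attended mixture is
# added to the original target embedding.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transfer(target_emb, source_embs):
    """Attention weights from target (query) over source embeddings (keys)."""
    weights = softmax([dot(target_emb, s) for s in source_embs])
    dim = len(target_emb)
    mixed = [sum(w * s[i] for w, s in zip(weights, source_embs))
             for i in range(dim)]
    # Final representation: original embedding plus transferred knowledge.
    return [t + x for t, x in zip(target_emb, mixed)], weights

target = [1.0, 0.0]                 # user embedding in the target domain
sources = [[1.0, 0.0], [0.0, 1.0]]  # the user's embeddings in other domains
rep, attn = transfer(target, sources)
```

The attention weights let each target domain decide how much to trust each other domain per user, which is one way to limit the negative transfer the abstract warns about.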
PEACE: Prototype lEarning Augmented transferable framework for Cross-domain rEcommendation
To help merchants and customers provide and access a variety of services
through miniapps, online service platforms occupy a critical position in
effective content delivery, making it increasingly urgent to recommend items
to customers in new domains launched by service providers. However, the
non-negligible gap between the source domain and diversified target domains
poses a considerable challenge to cross-domain recommendation systems and
often leads to performance bottlenecks in industrial settings. While entity
graphs have the potential to serve as a bridge between domains, rudimentary
utilization still fails to distill useful knowledge and can even induce
negative transfer. To this end, we propose PEACE, a Prototype lEarning Augmented
transferable framework for Cross-domain rEcommendation. For domain gap
bridging, PEACE is built upon a multi-interest and entity-oriented pre-training
architecture, which not only benefits the learning of generalized knowledge
in a multi-granularity manner but also helps leverage more structural
information in the entity graph. We then bring prototype learning into the
pre-training over source domains, so that representations of users and items
are greatly improved by the contrastive prototype learning module and the
prototype enhanced attention mechanism for adaptive knowledge utilization. To
ease the pressure of online serving, PEACE is carefully deployed in a
lightweight manner, and significant performance improvements are observed in
both online and offline environments.
Comment: Accepted by WSDM 202
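The prototype-learning step above can be illustrated with a minimal sketch: each user representation is assigned to its nearest prototype (a transferable interest cluster) and pulled toward it. This is not PEACE's code; the prototypes, distances, and step size are invented for illustration.

```python
# Hedged sketch (illustrative only): contrastive prototype learning.
# User embeddings are assigned to the nearest prototype and pulled
# toward it, so prototypes summarize transferable interest clusters.

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(emb, prototypes):
    """Index of the closest prototype."""
    return min(range(len(prototypes)), key=lambda k: dist2(emb, prototypes[k]))

def pull(emb, proto, step=0.5):
    """One update: move the embedding toward its assigned prototype."""
    return [e + step * (p - e) for e, p in zip(emb, proto)]

prototypes = [[0.0, 0.0], [10.0, 10.0]]   # learned interest prototypes
user = [1.0, 2.0]

k = nearest(user, prototypes)             # prototype assignment
user = pull(user, prototypes[k])          # prototype-enhanced update
```

Because prototypes are learned over the source domains and reused, they give the target domain a coarse-grained structure to attend over, which is the "adaptive knowledge utilization" role the abstract assigns to them.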
Automated Prompting for Non-overlapping Cross-domain Sequential Recommendation
Cross-domain Recommendation (CR) has been extensively studied in recent years
to alleviate the data sparsity issue in recommender systems by utilizing
different domain information. In this work, we focus on the more general
Non-overlapping Cross-domain Sequential Recommendation (NCSR) scenario. NCSR is
challenging because there are no overlapped entities (e.g., users and items)
between domains, and there is only users' implicit feedback and no content
information. Previous CR methods cannot solve NCSR well, since (1) they either
need extra content to align domains or need explicit domain alignment
constraints to reduce the domain discrepancy from domain-invariant features,
(2) they pay more attention to users' explicit feedback (i.e., users' rating
data) and cannot capture their sequential interaction patterns well, and (3)
they usually perform a single-target cross-domain recommendation task and
seldom investigate dual-target ones. Considering the above challenges, we propose
Prompt Learning-based Cross-domain Recommender (PLCR), an automated
prompting-based recommendation framework for the NCSR task. Specifically, to
address challenge (1), PLCR resorts to learning domain-invariant and
domain-specific representations via its prompt learning component, where the
domain alignment constraint is discarded. For challenges (2) and (3), PLCR
introduces a pre-trained sequence encoder to learn users' sequential
interaction patterns, and conducts a dual-learning target with a separation
constraint to enhance recommendations in both domains. Our empirical study on
two sub-collections of Amazon demonstrates the advantage of PLCR over
related state-of-the-art methods.
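The prompt-learning component above can be sketched as prepending a shared (domain-invariant) prompt and a per-domain prompt to the interaction sequence before encoding. This is a minimal sketch, not PLCR's code: the prompt vectors are invented, and the pre-trained sequence encoder is stood in for by mean pooling.

```python
# Hedged sketch (illustrative only): prompt learning for non-overlapping
# cross-domain sequential recommendation. A domain-invariant prompt and
# a domain-specific prompt are prepended to the item sequence; a real
# system would use a pre-trained sequence encoder instead of mean pooling.

def mean_pool(vectors):
    """Average a list of equal-length vectors (toy stand-in encoder)."""
    dim, n = len(vectors[0]), len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

shared_prompt = [0.5, 0.5]                             # domain-invariant
domain_prompts = {"books": [1.0, 0.0], "movies": [0.0, 1.0]}  # specific

def encode(domain, item_seq):
    """Encode prompts + interaction sequence into one representation."""
    seq = [shared_prompt, domain_prompts[domain]] + item_seq
    return mean_pool(seq)

rep = encode("books", [[0.2, 0.2], [0.4, 0.4]])
```

Splitting the prompt into a shared part and a per-domain part is what lets the model keep domain-invariant and domain-specific representations separate without an explicit domain-alignment constraint.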
CMML: Contextual Modulation Meta Learning for Cold-Start Recommendation
Practical recommender systems experience a cold-start problem when observed user-item interactions in the history are insufficient. Meta learning, especially gradient-based meta learning, can be adopted to tackle this problem by learning initial parameters of the model, thus allowing fast adaptation to a specific task from limited data examples. Though it yields significant performance improvements, it commonly suffers from two critical issues: incompatibility with mainstream industrial deployment and heavy computational burdens, both due to the inner-loop gradient operation. These two issues make such methods hard to apply in practical recommender systems. To enjoy the benefits of the meta learning framework and mitigate these problems, we propose a recommendation framework called Contextual Modulation Meta Learning (CMML). CMML is composed of fully feed-forward operations, so it is computationally efficient and completely compatible with mainstream industrial deployment. CMML consists of three components: a context encoder that generates a context embedding to represent a specific task, a hybrid context generator that aggregates specific user-item features with task-level context, and a contextual modulation network that modulates the recommendation model to adapt effectively. We validate our approach in both scenario-specific and user-specific cold-start settings on various real-world datasets, showing that CMML can achieve comparable or even better performance than gradient-based methods, yet with higher computational efficiency and better interpretability.
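The feed-forward modulation above can be sketched in the style of feature-wise modulation (FiLM-like scale and shift): a task context produces per-feature scale and shift parameters that adapt the base model in a single forward pass, with no inner-loop gradients. This is not CMML's implementation; the context-to-parameters mapping below is a hypothetical stand-in.

```python
# Hedged sketch (illustrative only): feature-wise contextual modulation.
# A task-level context is mapped to scale/shift parameters that modulate
# the base model's hidden features purely feed-forward, so adaptation
# needs no inner-loop gradient steps.

def context_to_film(context):
    """Hypothetical context encoder: map a task context to (scale, shift)."""
    scale = [1.0 + c for c in context]   # invented mapping for illustration
    shift = [0.1 * c for c in context]
    return scale, shift

def modulated_forward(features, context):
    """Apply per-feature scale and shift to the base model's features."""
    scale, shift = context_to_film(context)
    return [g * f + b for f, g, b in zip(features, scale, shift)]

features = [0.5, -0.5]             # hidden features of the base model
cold_start_context = [0.2, 0.4]    # task embedding for a new scenario
out = modulated_forward(features, cold_start_context)
```

Since adaptation is just one extra forward computation, this style of modulation avoids both drawbacks the abstract attributes to gradient-based meta learning: deployment incompatibility and inner-loop compute cost.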