Adaptive Graph Contrastive Learning for Recommendation
Graph neural networks (GNNs) have recently emerged as effective
collaborative filtering (CF) approaches for recommender systems. The key idea
of GNN-based recommender systems is to recursively perform message passing
along user-item interaction edges to refine encoded embeddings, relying on
sufficient and high-quality training data. However, user behavior data in
practical recommendation scenarios is often noisy and exhibits skewed
distribution. To address these issues, some recommendation approaches, such as
SGL, leverage self-supervised learning to improve user representations. These
approaches conduct self-supervised learning by creating contrastive views,
but they depend on tedious trial-and-error selection of augmentation
methods. In this paper, we propose a novel Adaptive Graph Contrastive Learning
(AdaGCL) framework that conducts data augmentation with two adaptive
contrastive view generators to better empower the CF paradigm. Specifically, we
use two trainable view generators - a graph generative model and a graph
denoising model - to create adaptive contrastive views. With two adaptive
contrastive views, AdaGCL introduces additional high-quality training signals
into the CF paradigm, helping to alleviate data sparsity and noise issues.
Extensive experiments on three real-world datasets demonstrate the superiority
of our model over various state-of-the-art recommendation methods. Our model
implementation is available at https://github.com/HKUDS/AdaGCL.
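The self-supervised signal that view-based methods of this kind rely on is typically an InfoNCE-style contrastive loss between two views' embeddings. Below is a minimal NumPy sketch of that general objective, assuming random placeholder embeddings and omitting the view generators themselves; it is an illustration, not AdaGCL's exact loss:

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """InfoNCE contrastive loss between two views' node embeddings."""
    # L2-normalize rows so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    # row-wise log-softmax; the diagonal holds each node's positive pair
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
view = rng.normal(size=(8, 16))
low = info_nce(view, view)                       # identical views agree
high = info_nce(view, rng.normal(size=(8, 16)))  # unrelated views do not
print(low < high)
```

Matching rows act as positives and all other rows as negatives; when the two views agree, the loss approaches zero.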
Knowledge Graph semantic enhancement of input data for improving AI
Intelligent systems designed using machine learning algorithms require
large amounts of labeled data. Background knowledge provides complementary,
real-world factual information that can augment the limited labeled data to train a
machine learning algorithm. The term Knowledge Graph (KG) is in vogue because,
for many practical applications, it is convenient and useful to organize this
background knowledge in the form of a graph. Recent academic research and
implemented industrial intelligent systems have shown promising performance for
machine learning algorithms that combine training data with a knowledge graph.
In this article, we discuss the use of relevant KGs to enhance input data for
two applications that use machine learning -- recommendation and community
detection. The KG improves both accuracy and explainability.
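As a concrete illustration, one simple way to enhance input data with a KG is to append pretrained entity embeddings (for instance, from a TransE-style model) to each item's feature vector. All shapes and names below are illustrative placeholders:

```python
import numpy as np

n_items, d_content, d_kg = 5, 8, 4
rng = np.random.default_rng(1)
content_feats = rng.normal(size=(n_items, d_content))  # raw input features
kg_emb = rng.normal(size=(n_items, d_kg))              # placeholder KG entity embeddings

# Semantic enhancement: append each item's KG embedding to its input vector
enhanced = np.concatenate([content_feats, kg_emb], axis=1)
print(enhanced.shape)  # (5, 12)
```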
SSLRec: A Self-Supervised Learning Framework for Recommendation
Self-supervised learning (SSL) has gained significant interest in recent
years as a solution to address the challenges posed by sparse and noisy data in
recommender systems. Despite the growing number of SSL algorithms designed to
provide state-of-the-art performance in various recommendation scenarios (e.g.,
graph collaborative filtering, sequential recommendation, social
recommendation, KG-enhanced recommendation), there is still a lack of unified
frameworks that integrate recommendation algorithms across different domains.
Such a framework could serve as the cornerstone for self-supervised
recommendation algorithms, unifying the validation of existing methods and
driving the design of new ones. To address this gap, we introduce SSLRec, a
novel benchmark platform that provides a standardized, flexible, and
comprehensive framework for evaluating various SSL-enhanced recommenders. The
SSLRec framework features a modular architecture that allows users to easily
evaluate state-of-the-art models and a complete set of data augmentation and
self-supervised toolkits to help create SSL recommendation models with specific
needs. Furthermore, SSLRec simplifies the process of training and evaluating
different recommendation models with consistent and fair settings. Our SSLRec
platform covers a comprehensive set of state-of-the-art SSL-enhanced
recommendation models across different scenarios, enabling researchers to
evaluate these cutting-edge models and drive further innovation in the field.
Our implemented SSLRec framework is available at the source code repository
https://github.com/HKUDS/SSLRec.
Comment: Published as a WSDM'24 full paper (oral presentation).
Multi-Modal Self-Supervised Learning for Recommendation
The online emergence of multi-modal sharing platforms (e.g., TikTok, YouTube)
is powering personalized recommender systems to incorporate various modalities
(e.g., visual, textual, and acoustic) into the latent user representations. While
existing works on multi-modal recommendation exploit multimedia content
features in enhancing item embeddings, their model representation capability is
limited by heavy label reliance and weak robustness on sparse user behavior
data. Inspired by recent progress in self-supervised learning for
alleviating the label scarcity issue, we explore deriving self-supervision
signals by effectively learning modality-aware user preferences and cross-modal
dependencies. To this end, we propose a new Multi-Modal Self-Supervised
Learning (MMSSL) method which tackles two key challenges. Specifically, to
characterize the inter-dependency between the user-item collaborative view and
item multi-modal semantic view, we design a modality-aware interactive
structure learning paradigm via adversarial perturbations for data
augmentation. In addition, to capture how users' modality-aware interaction
patterns interweave with each other, a cross-modal contrastive
learning approach is introduced to jointly preserve the inter-modal semantic
commonality and user preference diversity. Experiments on real-world datasets
verify the superiority of our method in offering great potential for multimedia
recommendation over various state-of-the-art baselines. The implementation is
released at: https://github.com/HKUDS/MMSSL.
Comment: This paper has been published as a full paper at WWW 202
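A minimal sketch of the adversarial-perturbation idea used for data augmentation, here as an FGSM-style signed-gradient step on a simple squared-error objective; the loss and all names are illustrative, not MMSSL's actual design:

```python
import numpy as np

def adversarial_view(emb, target, eps=0.05):
    """Perturb embeddings along the gradient sign of ||emb - target||^2."""
    grad = 2.0 * (emb - target)       # exact gradient of the squared error
    return emb + eps * np.sign(grad)  # bounded worst-case (FGSM-style) step

rng = np.random.default_rng(2)
emb = rng.normal(size=(4, 8))
target = rng.normal(size=(4, 8))
aug = adversarial_view(emb, target)
# the perturbed view moves away from the target, so the error grows
print(((aug - target) ** 2).sum() > ((emb - target) ** 2).sum())  # True
```

Training against such perturbed views encourages representations that remain stable under small worst-case changes.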
Representation Learning with Large Language Models for Recommendation
Recommender systems have seen significant advancements with the influence of
deep learning and graph neural networks, particularly in capturing complex
user-item relationships. However, these graph-based recommenders heavily depend
on ID-based data, potentially disregarding valuable textual information
associated with users and items, resulting in less informative learned
representations. Moreover, the utilization of implicit feedback data introduces
potential noise and bias, posing challenges for the effectiveness of user
preference learning. While the integration of large language models (LLMs) into
traditional ID-based recommenders has gained attention, challenges such as
scalability issues, reliance on text-only inputs, and prompt input
constraints need to be addressed for effective implementation in practical
recommender systems. To address these challenges, we propose a model-agnostic
framework RLMRec that aims to enhance existing recommenders with LLM-empowered
representation learning. It proposes a recommendation paradigm that integrates
representation learning with LLMs to capture intricate semantic aspects of user
behaviors and preferences. RLMRec incorporates auxiliary textual signals,
develops a user/item profiling paradigm empowered by LLMs, and aligns the
semantic space of LLMs with the representation space of collaborative
relational signals through a cross-view alignment framework. This work further
establishes a theoretical foundation demonstrating that incorporating textual
signals through mutual information maximization enhances the quality of
representations. In our evaluation, we integrate RLMRec with state-of-the-art
recommender models, while also analyzing its efficiency and robustness to noisy
data. Our implementation codes are available at
https://github.com/HKUDS/RLMRec.
Comment: Published as a WWW'24 full paper.
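A hedged sketch of the cross-view alignment idea: project collaborative-filtering embeddings into the semantic space and reward cosine agreement with the LLM-derived representations. The linear projection and all dimensions are illustrative; RLMRec's actual alignment objective is richer:

```python
import numpy as np

def alignment_loss(cf_emb, llm_emb, W):
    """1 minus the mean cosine between projected CF and LLM embeddings."""
    proj = cf_emb @ W                                  # map CF -> semantic space
    proj = proj / np.linalg.norm(proj, axis=1, keepdims=True)
    sem = llm_emb / np.linalg.norm(llm_emb, axis=1, keepdims=True)
    return 1.0 - np.mean(np.sum(proj * sem, axis=1))

rng = np.random.default_rng(3)
cf = rng.normal(size=(6, 16))    # collaborative-filtering embeddings
llm = rng.normal(size=(6, 32))   # LLM-derived semantic embeddings
W = rng.normal(size=(16, 32))    # trainable projection (random here)
loss = alignment_loss(cf, llm, W)
print(0.0 <= loss <= 2.0)        # cosine in [-1, 1] bounds the loss
```

Minimizing this loss pulls the two representation spaces together while leaving the recommender backbone unchanged.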
Self-Supervised Learning for Recommender Systems: A Survey
In recent years, neural architecture-based recommender systems have achieved
tremendous success, but they still fall short of expectations when dealing with
highly sparse data. Self-supervised learning (SSL), as an emerging technique
for learning from unlabeled data, has attracted considerable attention as a
potential solution to this issue. This survey paper presents a systematic and
timely review of research efforts on self-supervised recommendation (SSR).
Specifically, we propose an exclusive definition of SSR, on top of which we
develop a comprehensive taxonomy to divide existing SSR methods into four
categories: contrastive, generative, predictive, and hybrid. For each category,
we elucidate its concept and formulation, the involved methods, as well as its
pros and cons. Furthermore, to facilitate empirical comparison, we release an
open-source library SELFRec (https://github.com/Coder-Yu/SELFRec), which
incorporates a wide range of SSR models and benchmark datasets. Through
rigorous experiments using this library, we derive and report some significant
findings regarding the selection of self-supervised signals for enhancing
recommendation. Finally, we shed light on the limitations in the current
research and outline the future research directions.
Comment: 20 pages. Accepted by TKD
Deep Learning based Recommender System: A Survey and New Perspectives
With the ever-growing volume of online information, recommender systems have
been an effective strategy to overcome such information overload. The utility
of recommender systems cannot be overstated, given its widespread adoption in
many web applications, along with its potential impact to ameliorate many
problems related to over-choice. In recent years, deep learning has garnered
considerable interest in many research fields such as computer vision and
natural language processing, owing not only to stellar performance but also the
attractive property of learning feature representations from scratch. The
influence of deep learning is also pervasive, recently demonstrating its
effectiveness when applied to information retrieval and recommender systems
research. Evidently, the field of deep learning in recommender systems is
flourishing. This article aims to provide a comprehensive review of recent
research efforts on deep learning based recommender systems. More concretely,
we provide and devise a taxonomy of deep learning based recommendation models,
along with providing a comprehensive summary of the state-of-the-art. Finally,
we expand on current trends and provide new perspectives pertaining to this new
exciting development of the field.
Comment: The paper has been accepted by ACM Computing Surveys.
https://doi.acm.org/10.1145/328502
Enhancing Rock Image Segmentation in Digital Rock Physics: A Fusion of Generative AI and State-of-the-Art Neural Networks
In digital rock physics, analysing microstructures from CT and SEM scans is
crucial for estimating properties like porosity and pore connectivity.
Traditional segmentation methods like thresholding and CNNs often fall short in
accurately detailing rock microstructures and are prone to noise. U-Net
improved segmentation accuracy but required many expert-annotated samples, a
laborious and error-prone process due to complex pore shapes. Our study
employed an advanced generative AI model, the diffusion model, to overcome
these limitations. This model generated a vast dataset of CT/SEM and binary
segmentation pairs from a small initial dataset. We assessed the efficacy of
three neural networks: U-Net, Attention U-Net, and TransUNet, for segmenting
these enhanced images. The diffusion model proved to be an effective data
augmentation technique, improving the generalization and robustness of deep
learning models. TransUNet, incorporating Transformer structures, demonstrated
superior segmentation accuracy and IoU metrics, outperforming both U-Net and
Attention U-Net. Our research advances rock image segmentation by combining the
diffusion model with cutting-edge neural networks, reducing dependency on
extensive expert data and boosting segmentation accuracy and robustness.
TransUNet sets a new standard in digital rock physics, paving the way for
future geoscience and engineering breakthroughs.
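For reference, the IoU (intersection-over-union) metric used above to compare the segmentation networks can be computed for binary masks as follows; the example masks are illustrative:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union for two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # two empty masks agree perfectly
    return np.logical_and(pred, truth).sum() / union

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(pred, truth))  # 2 pixels overlap, 4 in the union -> 0.5
```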