How Expressive are Graph Neural Networks in Recommendation?
Graph Neural Networks (GNNs) have demonstrated superior performance on
various graph learning tasks, including recommendation, where they leverage
user-item collaborative filtering signals in graphs. However, theoretical
formulations of their capability are scarce, despite their empirical
effectiveness in state-of-the-art recommender models. Recently, research has
explored the expressiveness of GNNs in general, demonstrating that message
passing GNNs are at most as powerful as the Weisfeiler-Lehman test, and that
GNNs combined with random node initialization are universal. Nevertheless, the
concept of "expressiveness" for GNNs remains vaguely defined. Most existing
works adopt the graph isomorphism test as the metric of expressiveness, but
this graph-level task may not effectively assess a model's ability in
recommendation, where the objective is to distinguish nodes of different
closeness. In this paper, we provide a comprehensive theoretical analysis of
the expressiveness of GNNs in recommendation, considering three levels of
expressiveness metrics: graph isomorphism (graph-level), node automorphism
(node-level), and topological closeness (link-level). We propose the
topological closeness metric to evaluate GNNs' ability to capture the
structural distance between nodes, which aligns closely with the objective of
recommendation. To validate the effectiveness of this new metric in evaluating
recommendation performance, we introduce a learning-less GNN algorithm that is
optimal on the new metric and can be optimal on the node-level metric with
suitable modification. We conduct extensive experiments comparing the proposed
algorithm against various types of state-of-the-art GNN models to examine how
well the new metric explains performance in the recommendation task. For
reproducibility, implementation codes are available at
https://github.com/HKUDS/GTE.
Comment: 32nd ACM International Conference on Information and Knowledge
Management (CIKM) 2023
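The learning-less algorithm described above can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's exact GTE procedure: it propagates one-hot (identity) node features through the interaction graph without any trained parameters, so accumulated scores reflect weighted path counts, a simple proxy for the topological closeness between a user and an item.

```python
import numpy as np

def topological_closeness_scores(adj, n_layers=3):
    """Score node pairs by structural distance, with no learned parameters.

    Hypothetical sketch: starting from identity (one-hot) embeddings,
    repeated multiplication by the adjacency matrix accumulates multi-hop
    path counts, so topologically closer node pairs get higher scores.
    """
    emb = np.eye(adj.shape[0])      # one-hot initialization, no training
    scores = np.zeros_like(emb)
    for _ in range(n_layers):
        emb = adj @ emb             # each hop aggregates neighbor signals
        scores += emb               # accumulate 1-hop, 2-hop, ... path counts
    return scores

# Tiny bipartite interaction graph: users 0-1, items 2-3.
adj = np.array([
    [0, 0, 1, 1],   # user 0 interacts with items 2 and 3
    [0, 0, 1, 0],   # user 1 interacts with item 2 only
    [1, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

s = topological_closeness_scores(adj)
print(s[0, 3] > s[1, 3])  # user 0 scores closer to item 3 than user 1 does
```

Because no parameters are fitted, a metric-aligned baseline like this isolates what graph structure alone can explain about recommendation performance.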
SSLRec: A Self-Supervised Learning Framework for Recommendation
Self-supervised learning (SSL) has gained significant interest in recent
years as a solution to address the challenges posed by sparse and noisy data in
recommender systems. Despite the growing number of SSL algorithms designed to
provide state-of-the-art performance in various recommendation scenarios (e.g.,
graph collaborative filtering, sequential recommendation, social
recommendation, KG-enhanced recommendation), there is still a lack of unified
frameworks that integrate recommendation algorithms across different domains.
Such a framework could serve as the cornerstone for self-supervised
recommendation algorithms, unifying the validation of existing methods and
driving the design of new ones. To address this gap, we introduce SSLRec, a
novel benchmark platform that provides a standardized, flexible, and
comprehensive framework for evaluating various SSL-enhanced recommenders. The
SSLRec framework features a modular architecture that allows users to easily
evaluate state-of-the-art models, along with a complete set of data augmentation
and self-supervised learning toolkits that help users build SSL recommendation
models tailored to their specific needs. Furthermore, SSLRec simplifies the process of training and evaluating
different recommendation models with consistent and fair settings. Our SSLRec
platform covers a comprehensive set of state-of-the-art SSL-enhanced
recommendation models across different scenarios, enabling researchers to
evaluate these cutting-edge models and drive further innovation in the field.
Our implemented SSLRec framework is available at the source code repository
https://github.com/HKUDS/SSLRec.
Comment: Published as a WSDM'24 full paper (oral presentation)
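One of the data augmentations such toolkits typically provide is edge dropout, which creates stochastic views of the interaction graph for contrastive self-supervision. The sketch below is illustrative only; the function name and defaults are assumptions, not SSLRec's actual API.

```python
import numpy as np

def edge_dropout(adj, drop_rate=0.2, seed=None):
    """Randomly drop edges to produce an augmented view of a graph.

    Hypothetical sketch of a common SSL augmentation for graph
    collaborative filtering: each edge survives with probability
    1 - drop_rate, yielding a perturbed copy of the adjacency matrix.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(adj.shape) >= drop_rate  # 1 = keep edge, 0 = drop
    return adj * mask

# Two independently perturbed views of the same interaction graph
# form a positive pair for a contrastive learning objective.
adj = (np.random.default_rng(0).random((6, 6)) < 0.5).astype(float)
view_a = edge_dropout(adj, drop_rate=0.2, seed=1)
view_b = edge_dropout(adj, drop_rate=0.2, seed=2)
```

A unified framework standardizes augmentations like this so that different SSL-enhanced recommenders can be compared under identical perturbation settings.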