Learning over Knowledge-Base Embeddings for Recommendation
State-of-the-art recommendation algorithms -- especially the collaborative
filtering (CF) based approaches with shallow or deep models -- usually work
with various unstructured information sources for recommendation, such as
textual reviews, visual images, and various forms of implicit or explicit feedback.
Though structured knowledge bases were considered in content-based approaches,
they have been largely neglected recently due to the availability of vast
amounts of data and the learning power of many complex models.
However, structured knowledge bases exhibit unique advantages in personalized
recommendation systems. When the explicit knowledge about users and items is
considered for recommendation, the system could provide highly customized
recommendations based on users' historical behaviors. A great challenge in
using knowledge bases for recommendation is how to integrate large-scale
structured and unstructured data while taking advantage of collaborative
filtering for highly accurate performance. Recent achievements in knowledge
base embedding shed light on this problem, making it possible to learn
user and item representations while preserving the structure of their
relationship with external knowledge. In this work, we propose to reason over
knowledge base embeddings for personalized recommendation. Specifically, we
propose a knowledge base representation learning approach to embed
heterogeneous entities for recommendation. Experimental results on a real-world
dataset verify the superior performance of our approach compared with
state-of-the-art baselines.
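The abstract does not specify which embedding model is used; as a hedged sketch only, a TransE-style scorer (a common knowledge-base embedding choice, not necessarily this paper's) that places users, items, and their relations in one vector space could look like the following, with every name, dimension, and the random initialization hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Hypothetical embeddings: users, items, and relations share one space,
# TransE-style, so head + relation is expected to land near the tail.
user_emb = rng.normal(size=dim)
rel_emb = rng.normal(size=dim)           # e.g., an "interacts_with" relation
item_embs = rng.normal(size=(100, dim))  # candidate item embeddings

def score(head, rel, tail):
    """Higher score (smaller L2 distance) => more plausible triple."""
    return -np.linalg.norm(head + rel - tail)

# Rank all candidate items for this user under the relation.
scores = np.array([score(user_emb, rel_emb, t) for t in item_embs])
top5 = np.argsort(-scores)[:5]
```

In a trained system these vectors would be learned jointly from interaction data and knowledge-graph triples rather than drawn at random; the sketch only illustrates how reasoning over triples turns into a ranking over items.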
EEG-SVRec: An EEG Dataset with User Multidimensional Affective Engagement Labels in Short Video Recommendation
In recent years, short video platforms have gained widespread popularity,
making the quality of video recommendations crucial for retaining users.
Existing recommendation systems primarily rely on behavioral data, which faces
limitations when inferring user preferences due to issues such as data sparsity
and noise from accidental interactions or personal habits. To address these
challenges and provide a more comprehensive understanding of user affective
experience and cognitive activity, we propose EEG-SVRec, the first EEG dataset
with User Multidimensional Affective Engagement Labels in Short Video
Recommendation. The study involves 30 participants and collects 3,657
interactions, offering a rich dataset that can be used for a deeper exploration
of user preference and cognitive activity. By incorporating self-assessment
techniques and real-time, low-cost EEG signals, we offer a more detailed
understanding of user affective experiences (valence, arousal, immersion,
interest, visual and auditory) and the cognitive mechanisms behind their
behavior. We establish benchmarks for rating prediction by the recommendation
algorithm, showing significant improvement with the inclusion of EEG signals.
Furthermore, we demonstrate the potential of this dataset in gaining insights
into the affective experience and cognitive activity behind user behaviors in
recommender systems. This work presents a novel perspective for enhancing short
video recommendation by leveraging the rich information contained in EEG
signals and multidimensional affective engagement scores, paving the way for
future research in short video recommendation systems.
A Situation-aware Enhancer for Personalized Recommendation
When users interact with Recommender Systems (RecSys), current situations,
such as time, location, and environment, significantly influence their
preferences. Situations serve as the background for interactions, where
relationships between users and items evolve with situation changes. However,
existing RecSys treat situations, users, and items on the same level. They can
only model the relations between situations and users/items respectively,
rather than the dynamic impact of situations on user-item associations (i.e.,
user preferences). In this paper, we provide a new perspective that takes
situations as the preconditions for users' interactions. This perspective
allows us to separate situations from user/item representations, and capture
situations' influences over the user-item relationship, offering a more
comprehensive understanding of situations. Based on it, we propose a novel
Situation-Aware Recommender Enhancer (SARE), a pluggable module to integrate
situations into various existing RecSys. Since users' perception of situations
and situations' impact on preferences are both personalized, SARE includes a
Personalized Situation Fusion (PSF) and a User-Conditioned Preference Encoder
(UCPE) to model the perception and impact of situations, respectively. We
conduct experiments of applying SARE on seven backbones in various settings on
two real-world datasets. Experimental results indicate that SARE improves the
recommendation performances significantly compared with backbones and SOTA
situation-aware baselines. Comment: Accepted at the International Conference on Database Systems for
Advanced Applications (DASFAA 2024).
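The abstract names two learned components, PSF and UCPE, without giving their form; a minimal sketch of the idea (situation representations personalized per user, then gating the user's preference) might look like the code below. All weights, dimensions, and the sigmoid-gate formulation are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

# Hypothetical dense representations; in SARE these would be learned.
user = rng.normal(size=d)
item = rng.normal(size=d)
situation = rng.normal(size=d)   # e.g., time/location features projected to d dims

W_fuse = rng.normal(size=(d, 2 * d)) * 0.1  # illustrative fusion weights

def psf(user, situation):
    """Personalized Situation Fusion: a user-dependent situation representation."""
    return np.tanh(W_fuse @ np.concatenate([user, situation]))

def ucpe(user, fused_situation):
    """User-Conditioned Preference Encoder: the situation gates the preference."""
    gate = 1.0 / (1.0 + np.exp(-fused_situation))  # elementwise sigmoid in (0, 1)
    return user * gate

# Situation-aware matching score between the gated user preference and an item.
match_score = ucpe(user, psf(user, situation)) @ item
```

The pluggable aspect would correspond to computing `match_score` with the backbone recommender's own user/item vectors, so the enhancer wraps any existing model rather than replacing it.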
WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit Joint Learning
Watermarking serves as a widely adopted approach to safeguard media
copyright. In parallel, the research focus has extended to watermark removal
techniques, offering an adversarial means to enhance watermark robustness and
foster advancements in the watermarking field. Existing watermark removal
methods mainly rely on UNet with task-specific decoder branches--one for
watermark localization and the other for background image restoration. However,
watermark localization and background restoration are not isolated tasks;
precise watermark localization inherently implies regions necessitating
restoration, and the background restoration process contributes to more
accurate watermark localization. To holistically integrate information from
both branches, we introduce an implicit joint learning paradigm. This empowers
the network to autonomously navigate the flow of information between implicit
branches through a gate mechanism. Furthermore, we employ cross-channel
attention to facilitate local detail restoration and holistic structural
comprehension, while harnessing nested structures to integrate multi-scale
information. Extensive experiments are conducted on various challenging
benchmarks to validate the effectiveness of our proposed method. The results
demonstrate our approach's remarkable superiority, surpassing existing
state-of-the-art methods by a large margin.
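The gate mechanism that routes information between the two implicit branches is not detailed in the abstract; one hedged reading, with purely illustrative feature maps and a simple sigmoid gate standing in for the learned one, is:

```python
import numpy as np

rng = np.random.default_rng(2)
c, h, w = 4, 8, 8

# Hypothetical feature maps from the two implicit branches.
loc_feat = rng.normal(size=(c, h, w))   # watermark-localization features
res_feat = rng.normal(size=(c, h, w))   # background-restoration features

def gated_exchange(a, b):
    """A gate (learned in the real model; fixed-form here) decides how much
    of branch b's information flows into branch a at each position."""
    gate = 1.0 / (1.0 + np.exp(-(a + b)))  # sigmoid over summed features
    return a + gate * b

# Each branch is enriched with gated information from the other,
# reflecting that localization and restoration inform each other.
fused_loc = gated_exchange(loc_feat, res_feat)
fused_res = gated_exchange(res_feat, loc_feat)
```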
Common Sense Enhanced Knowledge-based Recommendation with Large Language Model
Knowledge-based recommendation models effectively alleviate the data sparsity
issue by leveraging the side information in the knowledge graph, and have achieved
considerable performance. Nevertheless, the knowledge graphs used in previous
work, namely metadata-based knowledge graphs, are usually constructed based on
the attributes of items and co-occurring relations (e.g., also buy), in which
the former provides limited information and the latter relies on sufficient
interaction data and still suffers from the cold-start issue. Common sense, as a
form of knowledge with generality and universality, can be used as a supplement
to the metadata-based knowledge graph and provides a new perspective for
modeling users' preferences. Recently, benefiting from the emergent world
knowledge of large language models, efficient acquisition of common sense
has become possible. In this paper, we propose a novel knowledge-based
recommendation framework incorporating common sense, CSRec, which can be
flexibly coupled to existing knowledge-based methods. Considering the challenge
of the knowledge gap between the common sense-based knowledge graph and
metadata-based knowledge graph, we propose a knowledge fusion approach based on
mutual information maximization theory. Experimental results on public datasets
demonstrate that our approach significantly improves the performance of
existing knowledge-based recommendation models. Comment: Accepted by DASFAA 202
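The abstract says the two knowledge graphs are fused via mutual information maximization but does not give the estimator; a common choice for such objectives is an InfoNCE-style contrastive bound, sketched below under the assumption (ours, not the paper's) that the two graphs yield row-aligned embeddings for the same items:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 32, 16
tau = 0.1  # temperature (hypothetical value)

# Hypothetical entity embeddings, row-aligned by item.
meta = rng.normal(size=(n, d))                  # metadata-based KG embeddings
common = meta + 0.1 * rng.normal(size=(n, d))   # common-sense KG embeddings

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def infonce(a, b, tau):
    """InfoNCE lower bound on mutual information: the matched row is the
    positive pair, all other rows in the batch act as negatives."""
    logits = l2norm(a) @ l2norm(b).T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p)).mean()

loss = infonce(meta, common, tau)  # minimized => MI between views maximized
```

Minimizing this loss pulls the two graphs' embeddings of the same entity together while pushing apart embeddings of different entities, which is one standard way to bridge the knowledge gap the abstract describes.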
Sequential Recommendation with Latent Relations based on Large Language Model
Sequential recommender systems predict items that may interest users by
modeling their preferences based on historical interactions. Traditional
sequential recommendation methods rely on capturing implicit collaborative
filtering signals among items. Recent relation-aware sequential recommendation
models have achieved promising performance by explicitly incorporating item
relations into the modeling of user historical sequences, where most relations
are extracted from knowledge graphs. However, existing methods rely on manually
predefined relations and suffer from a sparsity issue, limiting their generalization
ability in diverse scenarios with varied item relations. In this paper, we
propose a novel relation-aware sequential recommendation framework with Latent
Relation Discovery (LRD). Different from previous relation-aware models that
rely on predefined rules, we propose to leverage the Large Language Model (LLM)
to provide new types of relations and connections between items. The motivation
is that LLMs contain abundant world knowledge, which can be adopted to mine
latent relations of items for recommendation. Specifically, inspired by the fact
that humans can describe relations between items using natural language, LRD
harnesses the LLM, which has demonstrated human-like knowledge, to obtain language
knowledge representations of items. These representations are fed into a latent
relation discovery module based on the discrete state variational autoencoder
(DVAE). Then the self-supervised relation discovery tasks and recommendation
tasks are jointly optimized. Experimental results on multiple public datasets
demonstrate that our proposed latent relation discovery method can be incorporated
into existing relation-aware sequential recommendation models and significantly
improve their performance. Further analysis experiments indicate the
effectiveness and reliability of the discovered latent relations. Comment: Accepted by SIGIR 202
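The discrete state variational autoencoder at the core of LRD typically requires a differentiable sample from a categorical latent (the relation type); the Gumbel-softmax trick is the standard device for this, sketched below with hypothetical item representations and dimensions, not the paper's actual encoder:

```python
import numpy as np

rng = np.random.default_rng(5)
d, k = 16, 6   # representation dim, number of discrete latent relations
tau = 0.5      # temperature (hypothetical value)

# Hypothetical LLM-derived language representations of an item pair.
head = rng.normal(size=d)
tail = rng.normal(size=d)
W = rng.normal(size=(k, 2 * d)) * 0.1  # encoder: pair features -> relation logits

def gumbel_softmax(logits, tau, rng):
    """Differentiable approximate sample from a categorical distribution,
    as commonly used in discrete-latent VAEs."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y -= y.max()  # numerical stability
    e = np.exp(y)
    return e / e.sum()

logits = W @ np.concatenate([head, tail])
relation = gumbel_softmax(logits, tau, rng)  # soft one-hot over k latent relations
```

As the temperature `tau` is annealed toward zero the soft sample approaches a hard one-hot relation assignment, which is what lets the relation discovery task be optimized jointly with the recommendation objective by gradient descent.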