Counterfactual Collaborative Reasoning
Causal reasoning and logical reasoning are two important types of reasoning
abilities for human intelligence. However, their relationship has not been
extensively explored in the context of machine intelligence. In this paper, we
explore how the two reasoning abilities can be jointly modeled to enhance both
accuracy and explainability of machine learning models. More specifically, by
integrating two important types of reasoning ability -- counterfactual
reasoning and (neural) logical reasoning -- we propose Counterfactual
Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to
improve performance. In particular, we use recommender systems as an example
to show how CCR alleviates data scarcity, improves accuracy, and enhances
transparency. Technically, we leverage counterfactual reasoning to generate
"difficult" counterfactual training examples for data augmentation, which --
together with the original training examples -- can enhance the model
performance. Since the augmented data is model-agnostic, it can be used to
enhance any model, giving the technique wide applicability. Moreover,
most of the existing data augmentation methods focus on "implicit data
augmentation" over users' implicit feedback, while our framework conducts
"explicit data augmentation" over users' explicit feedback based on
counterfactual logic reasoning. Experiments on three real-world datasets show
that CCR achieves better performance than non-augmented models and implicitly
augmented models, and also improves model transparency by generating
counterfactual explanations.
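The augmentation idea can be illustrated with a minimal sketch. All names here are hypothetical, and the label-flipping rule is a deliberate simplification: the actual CCR framework derives counterfactual labels through counterfactual logic reasoning, not naive negation.

```python
from typing import List, Tuple

# A training example: a user's history of (item_id, liked) explicit
# feedback, plus a target item and its observed label.
History = List[Tuple[int, bool]]

def counterfactual_examples(history: History, target: int,
                            label: bool) -> List[Tuple[History, int, bool]]:
    """Generate counterfactual variants of one training example by
    flipping the explicit feedback on a single history item.  In CCR
    the resulting label would come from a reasoning model; here we
    simply assume the outcome flips (an illustrative simplification)."""
    variants = []
    for i in range(len(history)):
        cf = list(history)
        item, liked = cf[i]
        cf[i] = (item, not liked)                 # intervene on one past feedback
        variants.append((cf, target, not label))  # assumed flipped outcome
    return variants

# Usage: one original example yields len(history) augmented examples,
# which are trained on alongside the original data.
orig = [(101, True), (205, False), (342, True)]
augmented = counterfactual_examples(orig, target=999, label=True)
```

Because the augmented examples are plain (history, target, label) triples, they can be fed to any downstream recommender, which is what makes the augmentation model-agnostic.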
Neuro-Symbolic Recommendation Model based on Logic Query
A recommendation system assists users in finding items that are relevant to
them. Existing recommendation models are primarily based on predicting
relationships between users and items and use complex matching models or
incorporate extensive external information to capture association patterns in
data. However, recommendation is not only a problem of inductive statistics
using data; it is also a cognitive task of reasoning decisions based on
knowledge extracted from information. Hence, a logic system could naturally be
incorporated for reasoning in a recommendation task. However, although
hard-rule approaches based on logic systems can provide powerful reasoning
ability, they struggle to cope with inconsistent and incomplete knowledge in
real-world tasks, especially for complex tasks such as recommendation.
Therefore, in this paper, we propose a neuro-symbolic recommendation model,
which transforms the user history interactions into a logic expression and then
transforms the recommendation prediction into a query task based on this logic
expression. The logic expressions are then computed based on the modular logic
operations of the neural network. We also construct an implicit logic encoder
to reduce the complexity of the logic computation to a reasonable level.
Finally, items of interest to a user can be queried in the vector space based
on the computation
results. Experiments on three well-known datasets verified that our method
outperforms state-of-the-art shallow, deep, session, and reasoning models.
Comment: 17 pages, 6 figures
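A minimal sketch of the neural logic-operation idea, using untrained random weights and hypothetical module names (the paper's modules are trained end-to-end and include its implicit logic encoder, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Each logical operation is a small learnable transform over item
# embeddings; the weights below are random stand-ins for trained ones.
W_not = rng.normal(size=(DIM, DIM))
W_and = rng.normal(size=(2 * DIM, DIM))

def NOT(v):        # neural negation module
    return np.tanh(v @ W_not)

def AND(a, b):     # neural conjunction module
    return np.tanh(np.concatenate([a, b]) @ W_and)

def OR(a, b):      # derived via De Morgan: a OR b = NOT(NOT a AND NOT b)
    return NOT(AND(NOT(a), NOT(b)))

# A history {a, b} with candidate t becomes a logic query such as
# (a AND b) -> t, i.e. NOT(a AND b) OR t under material implication.
a, b, t = (rng.normal(size=DIM) for _ in range(3))
expr = OR(NOT(AND(a, b)), t)
# The candidate is scored by similarity between the expression vector
# and the candidate item's embedding.
score = float(expr @ t / (np.linalg.norm(expr) * np.linalg.norm(t)))
```

Ranking candidates then reduces to evaluating this query vector against each item embedding in the vector space.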
Neural-Symbolic Recommendation with Graph-Enhanced Information
Recommendation is not only a problem of inductive statistics from
data but also a cognitive task that requires reasoning ability. The most
advanced graph neural networks have been widely used in recommendation systems
because they can capture implicit structured information from graph-structured
data. However, like most neural network algorithms, they only learn matching
patterns from a perception perspective. Some researchers use user behavior for
logic reasoning to achieve recommendation prediction from the perspective of
cognitive reasoning, but this reasoning is local and ignores implicit
information at the global scale. In this work, we combine the advantages
of graph neural networks and propositional logic operations to construct a
neuro-symbolic recommendation model with both global implicit reasoning ability
and local explicit logic reasoning ability. We first build an item-item graph
based on the principle of adjacent interaction and use graph neural networks to
capture implicit information in global data. Then we transform user behavior
into propositional logic expressions to achieve recommendations from the
perspective of cognitive reasoning. Extensive experiments on five public
datasets show that our proposed model outperforms several state-of-the-art
methods; source code is available at [https://github.com/hanzo2020/GNNLR].
Comment: 12 pages, 2 figures, conference
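The graph side of the model can be sketched as follows, assuming a toy adjacent-interaction graph and a single untrained mean-aggregation layer as a stand-in for the paper's actual GNN:

```python
import numpy as np

# Build an item-item graph from adjacent interactions in user
# sequences: consecutive items in a sequence share an edge.
sequences = [[0, 1, 2], [1, 2, 3], [0, 2]]
n_items, dim = 4, 4
adj = np.zeros((n_items, n_items))
for seq in sequences:
    for u, v in zip(seq, seq[1:]):
        adj[u, v] = adj[v, u] = 1.0

# One round of mean neighbour aggregation (a single untrained GNN
# layer) mixes global implicit information into the item embeddings.
emb = np.eye(n_items, dim)                     # toy item embeddings
deg = adj.sum(axis=1, keepdims=True)
emb_global = adj @ emb / np.maximum(deg, 1)
```

In the full model these graph-enhanced embeddings would then feed the propositional-logic modules built from user behavior, combining global implicit reasoning with local explicit reasoning.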
Causal Collaborative Filtering
Recommender systems are important and valuable tools for many personalized
services. Collaborative Filtering (CF) algorithms -- among others -- are
fundamental algorithms driving the underlying mechanism of personalized
recommendation. Many of the traditional CF algorithms are designed based on the
fundamental idea of mining or learning correlative patterns from data for
matching, including memory-based methods such as user/item-based CF as well as
learning-based methods such as matrix factorization and deep learning models.
However, advancing from correlative learning to causal learning is an important
problem, because causal/counterfactual modeling can help us to think outside of
the observational data for user modeling and personalization. In this paper, we
propose Causal Collaborative Filtering (CCF) -- a general framework for
modeling causality in collaborative filtering and recommendation. We first
provide a unified causal view of CF and mathematically show that many of the
traditional CF algorithms are actually special cases of CCF under simplified
causal graphs. We then propose a conditional intervention approach for
do-calculus so that we can estimate the causal relations based on
observational data. Finally, we further propose a general counterfactual
constrained learning framework for estimating the user-item preferences.
Experiments are conducted on two types of real-world datasets -- traditional
and randomized trial data -- and results show that our framework can improve
the recommendation performance of many CF algorithms.
Comment: 14 pages, 5 figures, 3 tables
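The core distinction behind the intervention approach can be written out. The backdoor-adjustment identity below is standard causal-inference notation, not a formula taken from the paper:

```latex
% Conditioning observes a correlation; do() intervenes on the system:
P(y \mid x) \neq P(y \mid do(x))
% Given a backdoor adjustment set Z, the interventional distribution
% can be estimated from purely observational data:
P(y \mid do(x)) = \sum_{z} P(y \mid x, z)\, P(z)
```

Intuitively, $P(y \mid do(x))$ asks what a user would do if the system recommended item $x$, rather than what users who happened to see $x$ did, which is why causal modeling lets a recommender reason beyond the observational data.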
Fairness in Recommendation: Foundations, Methods and Applications
As one of the most pervasive applications of machine learning, recommender
systems play an important role in assisting human decision making. The
satisfaction of users and the interests of platforms are closely related to the
quality of the generated recommendation results. However, as a highly
data-driven system, a recommender system can be affected by data or algorithmic
bias and thus generate unfair results, which can undermine users' trust in the
system. As a result, it is crucial to address potential unfairness
problems in recommendation settings. Recently, there has been growing attention
on fairness considerations in recommender systems with more and more literature
on approaches to promote fairness in recommendation. However, the studies are
rather fragmented and lack systematic organization, making it difficult for
new researchers to enter the domain. This motivates us to provide a
systematic survey of existing works on fairness in recommendation. This survey
focuses on the foundations for fairness in recommendation literature. It first
presents a brief introduction about fairness in basic machine learning tasks
such as classification and ranking in order to provide a general overview of
fairness research, as well as introduce the more complex situations and
challenges that need to be considered when studying fairness in recommender
systems. After that, the survey introduces fairness in recommendation with
a focus on the taxonomies of current fairness definitions, the typical
techniques for improving fairness, as well as the datasets for fairness studies
in recommendation. The survey also discusses the challenges and opportunities
in fairness research with the hope of promoting the fair recommendation
research area and beyond.
Comment: Accepted by ACM Transactions on Intelligent Systems and Technology (TIST)
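As a toy illustration of one family of fairness definitions covered by such surveys, the sketch below checks item-group exposure parity in top-k recommendation lists; the metric, item names, and groups are illustrative and not taken from the survey:

```python
# Top-k recommendation lists per user, and a mapping of each item
# to its group (e.g. popular vs. long-tail providers).
recs = [["a1", "b1", "a2"], ["a3", "a1", "b2"]]
group = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B"}

def exposure(recs, g):
    """Fraction of all recommended slots occupied by items of group g."""
    hits = sum(1 for lst in recs for item in lst if group[item] == g)
    total = sum(len(lst) for lst in recs)
    return hits / total

# A gap of 0 would mean both groups receive equal exposure.
gap = abs(exposure(recs, "A") - exposure(recs, "B"))
```

Many of the techniques surveyed can be read as ways of shrinking such a gap (via re-ranking, constrained optimization, or adversarial training) while preserving recommendation accuracy.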