Rating and aspect-based opinion graph embeddings for explainable recommendations
The success of neural network embeddings has entailed a renewed interest in
using knowledge graphs for a wide variety of machine learning and information
retrieval tasks. In particular, recent recommendation methods based on graph
embeddings have shown state-of-the-art performance. In general, these methods
encode latent rating patterns and content features. Unlike previous
work, in this paper we propose to exploit embeddings extracted from graphs
that combine information from ratings and aspect-based opinions expressed in
textual reviews. We then adapt and evaluate state-of-the-art graph embedding
techniques over graphs generated from Amazon and Yelp reviews on six domains,
outperforming baseline recommenders. Additionally, our method has the advantage
of providing explanations that involve the coverage of aspect-based opinions
given by users about recommended items.
Comment: arXiv admin note: substantial text overlap with arXiv:2107.0322
Dual Node and Edge Fairness-Aware Graph Partition
Fair graph partition of social networks is a crucial step toward ensuring
fair and non-discriminatory treatments in unsupervised user analysis. Current
fair partition methods typically consider node balance, a notion pursuing a
proportionally balanced number of nodes from all demographic groups, but ignore
the bias induced by imbalanced edges in each cluster. To address this gap, we
propose the notion of edge balance to measure the proportion of edges connecting
different demographic groups in clusters. We analyze the relations between node
balance and edge balance, then with line graph transformations, we propose a
co-embedding framework to learn dual node and edge fairness-aware
representations for graph partition. We validate our framework through several
social network datasets and observe balanced partition in terms of both nodes
and edges along with good utility. Moreover, we demonstrate that our fair
partition can be used as pseudo labels to help graph neural networks behave
fairly in node classification and link prediction tasks.
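The node-balance and edge-balance notions described above can be sketched as follows; the function names and the simple ratio-based metrics are illustrative assumptions of this sketch, not the paper's exact formulation:

```python
from collections import Counter

def node_balance(cluster_nodes, group):
    """Proportion of a cluster's nodes belonging to each demographic group."""
    counts = Counter(group[n] for n in cluster_nodes)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def edge_balance(cluster_nodes, edges, group):
    """Fraction of intra-cluster edges that connect different demographic
    groups; a perfectly node-balanced cluster can still score low here."""
    members = set(cluster_nodes)
    intra = [(u, v) for u, v in edges if u in members and v in members]
    if not intra:
        return 0.0
    cross = sum(1 for u, v in intra if group[u] != group[v])
    return cross / len(intra)

# toy graph: four nodes, two demographic groups, one cluster
group = {0: "A", 1: "A", 2: "B", 3: "B"}
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
cluster = [0, 1, 2, 3]
print(node_balance(cluster, group))          # {'A': 0.5, 'B': 0.5}
print(edge_balance(cluster, edges, group))   # 0.5
```

The toy cluster is node-balanced (half from each group), yet only half of its internal edges cross groups, which is exactly the kind of gap the dual notion is meant to expose.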
Graphing else matters: exploiting aspect opinions and ratings in explainable graph-based recommendations
The success of neural network embeddings has entailed a renewed interest in
using knowledge graphs for a wide variety of machine learning and information
retrieval tasks. In particular, current recommendation methods based on graph
embeddings have shown state-of-the-art performance. These methods commonly
encode latent rating patterns and content features. Unlike previous
work, in this paper, we propose to exploit embeddings extracted from graphs
that combine information from ratings and aspect-based opinions expressed in
textual reviews. We then adapt and evaluate state-of-the-art graph embedding
techniques over graphs generated from Amazon and Yelp reviews on six domains,
outperforming baseline recommenders. Our approach has the advantage of
providing explanations which leverage aspect-based opinions given by users
about recommended items. Furthermore, we provide examples of recommendations
that use aspect opinions as explanations in a visualization dashboard, which
shows the most and least liked aspects of similar users derived from the
embeddings of an input graph.
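Building a graph that combines rating edges with aspect-based opinion edges, as both abstracts above describe, might look like the following sketch; the node-name prefixes and the review schema are assumptions for illustration, not the papers' exact construction:

```python
def build_opinion_graph(reviews):
    """Build an undirected adjacency list mixing two edge types:
    rating edges (user-item) and aspect-opinion edges (user-aspect,
    item-aspect). Prefixes keep the three node types distinct."""
    graph = {}

    def add_edge(a, b, label):
        graph.setdefault(a, []).append((b, label))
        graph.setdefault(b, []).append((a, label))

    for r in reviews:
        u, i = f"user:{r['user']}", f"item:{r['item']}"
        add_edge(u, i, ("rating", r["rating"]))
        for aspect, sentiment in r["aspects"]:
            a = f"aspect:{aspect}"
            add_edge(u, a, ("opinion", sentiment))
            add_edge(i, a, ("opinion", sentiment))
    return graph

reviews = [
    {"user": "u1", "item": "hotel1", "rating": 4,
     "aspects": [("breakfast", "positive"), ("wifi", "negative")]},
]
g = build_opinion_graph(reviews)
print(sorted(g))  # ['aspect:breakfast', 'aspect:wifi', 'item:hotel1', 'user:u1']
```

A graph in this shape can then be handed to any off-the-shelf node-embedding method, and the aspect nodes adjacent to a recommended item supply the vocabulary for the explanations.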
FMMRec: Fairness-aware Multimodal Recommendation
Recently, multimodal recommendations have gained increasing attention for
effectively addressing the data sparsity problem by incorporating
modality-based representations. Although multimodal recommendations excel in
accuracy, the introduction of different modalities (e.g., images, text, and
audio) may expose more users' sensitive information (e.g., gender and age) to
recommender systems, resulting in potentially more serious unfairness issues.
Despite many efforts on fairness, existing fairness-aware methods are either
incompatible with multimodal scenarios, or lead to suboptimal fairness
performance due to neglecting sensitive information of multimodal content. To
achieve counterfactual fairness in multimodal recommendations, we propose a
novel fairness-aware multimodal recommendation approach (dubbed FMMRec) to
disentangle the sensitive and non-sensitive information from modal
representations and leverage the disentangled modal representations to guide
fairer representation learning. Specifically, we first disentangle biased and
filtered modal representations by maximizing and minimizing their sensitive
attribute prediction ability respectively. With the disentangled modal
representations, we mine modality-based unfair and fair (corresponding to
biased and filtered) user-user structures to enhance the explicit user
representation with the biased and filtered neighbors from the corresponding
structures, and then adversarially filter out sensitive information.
Experiments on two real-world public datasets demonstrate the superiority of
our FMMRec relative to the state-of-the-art baselines. Our source code is
available at https://anonymous.4open.science/r/FMMRec
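The disentanglement objective stated in the abstract, maximizing the sensitive-attribute prediction ability of the biased part while minimizing it for the filtered part, can be illustrated with a deliberately crude proxy; the dimension split, the correlation-based predictability measure, and all names here are assumptions of this sketch, not FMMRec's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy modal embeddings (8 users x 4 dims) and a binary sensitive attribute
emb = rng.normal(size=(8, 4))
sensitive = rng.integers(0, 2, size=8)

def predictability(z, s):
    """Crude proxy for how well representation z predicts sensitive
    attribute s: the maximum absolute correlation between any dimension
    of z and s (a real model would train an attacker classifier)."""
    s_c = s - s.mean()
    best = 0.0
    for d in range(z.shape[1]):
        z_c = z[:, d] - z[:, d].mean()
        denom = np.linalg.norm(z_c) * np.linalg.norm(s_c)
        if denom > 0:
            best = max(best, abs(z_c @ s_c) / denom)
    return best

# stand-in for the learned disentanglement: split the dimensions in half
biased, filtered = emb[:, :2], emb[:, 2:]

# per the abstract: maximize predictability for the biased part,
# minimize it for the filtered part
objective = predictability(biased, sensitive) - predictability(filtered, sensitive)
print(round(float(objective), 3))
```

In the paper this trade-off is learned end to end rather than fixed by a dimension split, but the sign structure of the objective is the same: sensitive signal is concentrated in one branch so it can be adversarially filtered from the other.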
TO EXPLAIN OR NOT TO EXPLAIN: AN EMPIRICAL INVESTIGATION OF AI-BASED RECOMMENDATIONS ON SOCIAL MEDIA PLATFORMS
AI-based social media recommendations have great potential to improve the user experience. However, these recommendations often do not match user interests and create an unpleasant experience for users. Moreover, the black-box nature of recommendation systems raises comprehensibility and transparency issues. This paper investigates social media recommendations from an end-user perspective. For the investigation, we used the popular social media platform Facebook and recruited regular users to conduct a qualitative analysis. We asked participants about the social media content suggestions, their comprehensibility, and their explainability. Our analysis shows that users mostly require explanations when they encounter unfamiliar content and when they want assurance about their online data security. Furthermore, users require concise, non-technical explanations, along with the facility of controlled information flow. In addition, we observed that explanations affect users' perception of transparency, trust, and understandability. Finally, we outline some design implications and present a synthesized framework based on our data analysis.
Counterfactual Collaborative Reasoning
Causal reasoning and logical reasoning are two important types of reasoning
abilities for human intelligence. However, their relationship has not been
extensively explored under machine intelligence context. In this paper, we
explore how the two reasoning abilities can be jointly modeled to enhance both
accuracy and explainability of machine learning models. More specifically, by
integrating two important types of reasoning ability -- counterfactual
reasoning and (neural) logical reasoning -- we propose Counterfactual
Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to
improve performance. In particular, we use recommender systems as an example
to show how CCR alleviates data scarcity, improves accuracy, and enhances
transparency. Technically, we leverage counterfactual reasoning to generate
"difficult" counterfactual training examples for data augmentation, which --
together with the original training examples -- can enhance the model
performance. Since the augmented data is model-agnostic, it can be used to
enhance any model, making the technique widely applicable. Moreover,
most of the existing data augmentation methods focus on "implicit data
augmentation" over users' implicit feedback, while our framework conducts
"explicit data augmentation" over users explicit feedback based on
counterfactual logic reasoning. Experiments on three real-world datasets show
that CCR achieves better performance than non-augmented models and implicitly
augmented models, and also improves model transparency by generating
counterfactual explanations.
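The idea of generating "difficult" counterfactual training examples over explicit feedback can be sketched as a minimal-edit search: change one item in a user's history and keep the edits that flip the model's prediction. The scoring interface, the toy genre-overlap model, and all names below are assumptions of this sketch, not CCR's actual neural-logic machinery:

```python
def counterfactual_examples(history, target, score, candidates):
    """Generate counterfactual training examples: histories with one item
    swapped whose edit flips the model's prediction on the target item.
    `score(history, target)` is any recommender scoring function returning
    a preference score in [0, 1] (hypothetical interface)."""
    original = score(history, target) >= 0.5
    examples = []
    for i in range(len(history)):
        for new in candidates:
            if new in history:
                continue
            edited = history[:i] + [new] + history[i + 1:]
            if (score(edited, target) >= 0.5) != original:
                # label is the flipped preference
                examples.append((edited, target, not original))
    return examples

# toy model: score = fraction of history items sharing the target's genre
genre = {"a": "sf", "b": "sf", "c": "rom", "d": "rom", "t": "sf"}
def score(history, target):
    return sum(genre[h] == genre[target] for h in history) / len(history)

print(counterfactual_examples(["a", "b", "c"], "t", score, ["d"]))
# → [(['d', 'b', 'c'], 't', False), (['a', 'd', 'c'], 't', False)]
```

Each returned triple is a new labeled example close to a real one but on the other side of the decision boundary, which is what makes such examples "difficult" and useful for augmentation.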
Interactive Contrastive Learning for Self-supervised Entity Alignment
Self-supervised entity alignment (EA) aims to link equivalent entities across
different knowledge graphs (KGs) without seed alignments. The current SOTA
self-supervised EA method draws inspiration from contrastive learning,
originally designed in computer vision based on instance discrimination and
contrastive loss, and suffers from two shortcomings. Firstly, it puts
unidirectional emphasis on pushing sampled negative entities far away rather
than pulling positively aligned pairs close, as is done in the well-established
supervised EA. Secondly, KGs contain rich side information (e.g., entity
description), and how to effectively leverage such information has not been
adequately investigated in self-supervised EA. In this paper, we propose an
interactive contrastive learning model for self-supervised EA. The model not
only encodes the structures and semantics of entities (including entity names,
descriptions, and neighborhoods), but also conducts cross-KG
contrastive learning by building pseudo-aligned entity pairs. Experimental
results show that our approach outperforms previous best self-supervised
results by a large margin (over 9% average improvement) and performs on par
with previous SOTA supervised counterparts, demonstrating the effectiveness of
interactive contrastive learning for self-supervised EA.
Comment: Accepted by CIKM 202
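The bidirectional pull/push behavior the abstract contrasts with prior work, pulling pseudo-aligned pairs close while pushing other cross-KG entities apart, in both KG directions, can be sketched as a symmetric InfoNCE loss. The function name, the temperature value, and the toy embeddings are assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

def bidirectional_info_nce(src, tgt, tau=0.1):
    """Symmetric InfoNCE over pseudo-aligned entity pairs: row i of `src`
    (one KG) is treated as aligned with row i of `tgt` (the other KG).
    The loss pulls aligned pairs together and pushes all other cross-KG
    entities apart, averaged over both directions."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    logits = src @ tgt.T / tau  # cross-KG cosine similarities

    def nll(mat):
        mat = mat - mat.max(axis=1, keepdims=True)  # numerical stability
        log_probs = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))  # diagonal = aligned pairs

    return (nll(logits) + nll(logits.T)) / 2

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
well_aligned = bidirectional_info_nce(a, a + 0.01 * rng.normal(size=(4, 8)))
misaligned = bidirectional_info_nce(a, rng.normal(size=(4, 8)))
print(well_aligned, misaligned)
```

Nearly identical cross-KG embeddings should yield a much lower loss than unrelated ones, which is the training signal that both pulls positives and pushes negatives, unlike a push-only objective.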