Beautiful and damned. Combined effect of content quality and social ties on user engagement
User participation in online communities is driven by the intertwinement of
the social network structure with the crowd-generated content that flows along
its links. These aspects are rarely explored jointly and at scale. By looking
at how users generate and access pictures of varying beauty on Flickr, we
investigate how the production of quality impacts the dynamics of online social
systems. We develop a deep learning computer vision model to score images
according to their aesthetic value and we validate its output through
crowdsourcing. By applying it to over 15B Flickr photos, we study for the first
time how image beauty is distributed over a large-scale social system.
Beautiful images are evenly distributed in the network, although only a small
core of people get social recognition for them. To study the impact of exposure
to quality on user engagement, we set up matching experiments aimed at
detecting causality from observational data. Exposure to beauty is
double-edged: following people who produce high-quality content increases one's
probability of uploading better photos; however, an excessive imbalance between
the quality generated by a user and the user's neighbors leads to a decline in
engagement. Our analysis has practical implications for improving link
recommender systems.
Comment: 13 pages, 12 figures, final version published in IEEE Transactions on Knowledge and Data Engineering (Volume: PP, Issue: 99)
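The matching analysis mentioned above can be illustrated with a minimal, hedged sketch of nearest-neighbour covariate matching for estimating a treatment effect from observational data. The paper's actual matching protocol, covariates, and variable names are not given in the abstract, so everything below is illustrative.

```python
# Hypothetical sketch: estimate the effect of "exposure to quality" on user
# engagement via 1-nearest-neighbour matching on user covariates.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def matched_att(X, treated, outcome):
    """Average treatment effect on the treated (ATT): pair each treated
    user with the closest untreated user in covariate space and average
    the outcome differences."""
    matcher = NearestNeighbors(n_neighbors=1).fit(X[~treated])
    _, idx = matcher.kneighbors(X[treated])
    return float(np.mean(outcome[treated] - outcome[~treated][idx.ravel()]))

# Toy data: covariates (e.g. activity, tenure, follower count), a binary
# "follows high-quality producers" flag, and an engagement outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
treated = rng.random(1000) < 0.3
outcome = X @ np.array([0.5, 0.2, 0.1]) + 0.4 * treated + rng.normal(size=1000)
print(matched_att(X, treated, outcome))  # should recover roughly 0.4
```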
Visualization for Recommendation Explainability: A Survey and New Perspectives
Providing system-generated explanations for recommendations represents an
important step towards transparent and trustworthy recommender systems.
Explainable recommender systems provide a human-understandable rationale for
their outputs. Over the last two decades, explainable recommendation has
attracted much attention in the recommender systems research community. This
paper aims to provide a comprehensive review of research efforts on visual
explanation in recommender systems. More concretely, we systematically review
the literature on explanations in recommender systems based on four dimensions,
namely explanation goal, explanation scope, explanation style, and explanation
format. Recognizing the importance of visualization, we approach the
recommender system literature from the angle of explanatory visualizations,
that is, using visualizations as a display style of explanation. As a result, we
derive a set of guidelines that might be constructive for designing explanatory
visualizations in recommender systems and identify perspectives for future work
in this field. The aim of this review is to help recommendation researchers and
practitioners better understand the potential of visually explainable
recommendation research and to support them in the systematic design of visual
explanations in current and future recommender systems.
Comment: Updated version Nov. 2023, 36 pages
Personalised Visual Art Recommendation by Learning Latent Semantic Representations
In recommender systems, data representation techniques play a crucial role, as
they can entangle, hide, or reveal explanatory factors embedded within
datasets; they therefore influence the quality of recommendations. In Visual
Art (VA) recommendation specifically, the complexity of the concepts embodied
within paintings makes the task of capturing semantics by machines far from
trivial. Prominent works in VA recommendation commonly use manually
curated metadata to drive recommendations. Recent works in this domain aim at
leveraging visual features extracted using Deep Neural Networks (DNN). However,
such data representation approaches are resource-demanding and lack a direct
interpretation, hindering user acceptance. To address these limitations,
we introduce an approach for personalised recommendation of visual art based
on learning latent semantic representations of paintings. Specifically, we
trained a Latent Dirichlet Allocation (LDA) model on textual descriptions of
paintings. Our LDA model successfully uncovers non-obvious semantic
relationships between paintings while offering explainable
recommendations. Experimental evaluations demonstrate that our method tends to
perform better than exploiting visual features extracted using pre-trained Deep
Neural Networks.
Comment: Accepted at SMAP202
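For readers unfamiliar with the LDA step described above, a minimal sketch follows. The corpus, topic count, and preprocessing are illustrative assumptions (the abstract does not specify them); it only shows how topic distributions inferred from textual descriptions can serve as interpretable painting representations.

```python
# Illustrative sketch: learn latent topic representations of paintings
# from their textual descriptions with LDA (corpus and num_topics are
# made up for demonstration).
from gensim import corpora, models

descriptions = [
    "stormy sea ships dramatic sky romanticism",
    "portrait noblewoman pearl earring chiaroscuro",
    "water lilies pond impressionism light reflections",
]
tokenized = [d.split() for d in descriptions]
dictionary = corpora.Dictionary(tokenized)
bow = [dictionary.doc2bow(doc) for doc in tokenized]

lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)

# Each painting becomes a distribution over latent topics; distances
# between these vectors give interpretable painting-painting similarity
# that a recommender can rank on and explain.
for doc in bow:
    print(lda.get_document_topics(doc, minimum_probability=0.0))
```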
An Image Dataset for Benchmarking Recommender Systems with Raw Pixels
Recommender systems (RS) have achieved significant success by leveraging
explicit identification (ID) features. However, the full potential of content
features, especially the pure image pixel features, remains relatively
unexplored. The limited availability of large, diverse, and content-driven
image recommendation datasets has hindered the use of raw images as item
representations. In this regard, we present PixelRec, a massive image-centric
recommendation dataset that includes approximately 200 million user-image
interactions, 30 million users, and 400,000 high-quality cover images. By
providing direct access to raw image pixels, PixelRec enables recommendation
models to learn item representation directly from them. To demonstrate its
utility, we begin by presenting the results of several classical pure ID-based
baseline models, termed IDNet, trained on PixelRec. Then, to show the
effectiveness of the dataset's image features, we substitute the itemID
embeddings (from IDNet) with a powerful vision encoder that represents items
using their raw image pixels. This new model is dubbed PixelNet. Our findings
indicate that even in standard, non-cold-start recommendation settings where
IDNet is recognized as highly effective, PixelNet already performs on par with
or better than IDNet. Moreover, PixelNet has several other notable
advantages over IDNet, such as being more effective in cold-start and
cross-domain recommendation scenarios. These results underscore the importance
of visual features in PixelRec. We believe that PixelRec can serve as a
critical resource and testing ground for research on recommendation models that
emphasize image pixel content. The dataset, code, and leaderboard will be
available at https://github.com/westlake-repl/PixelRec
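The IDNet-to-PixelNet substitution described above can be sketched as follows. This is not the paper's exact architecture: the backbone, dimensions, and the dot-product scorer are assumptions; the sketch only illustrates replacing an item-ID embedding table with a vision encoder over raw pixels.

```python
# Illustrative sketch of the IDNet -> PixelNet substitution: the item-ID
# embedding table is replaced by a vision encoder over raw cover images.
# Backbone, dimensions, and the dot-product scorer are assumptions.
import torch
import torch.nn as nn
import torchvision.models as tvm

class PixelNetSketch(nn.Module):
    def __init__(self, n_users: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # IDs kept for users
        backbone = tvm.resnet18(weights=None)        # any vision encoder works
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.item_enc = backbone                     # replaces nn.Embedding(n_items, dim)

    def forward(self, user_ids, item_images):
        u = self.user_emb(user_ids)                  # (B, dim)
        v = self.item_enc(item_images)               # (B, dim), from raw pixels
        return (u * v).sum(-1)                       # dot-product preference score

model = PixelNetSketch(n_users=1000)
scores = model(torch.tensor([3, 7]), torch.randn(2, 3, 224, 224))
print(scores.shape)  # torch.Size([2])
```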
NFTs to MARS: Multi-Attention Recommender System for NFTs
Recommender systems have become essential tools for enhancing user
experiences across various domains. While extensive research has been conducted
on recommender systems for movies, music, and e-commerce, the rapidly growing
and economically significant Non-Fungible Token (NFT) market remains
underexplored. The unique characteristics and increasing prominence of the NFT
market highlight the importance of developing tailored recommender systems to
cater to its specific needs and unlock its full potential. In this paper, we
examine the distinctive characteristics of NFTs and propose the first
recommender system specifically designed to address NFT market challenges. In
particular, we develop a Multi-Attention Recommender System for NFTs (NFT-MARS)
with three key characteristics: (1) graph attention to handle sparse user-item
interactions, (2) multi-modal attention to incorporate feature preference of
users, and (3) multi-task learning to consider the dual nature of NFTs as both
artwork and financial assets. We demonstrate the effectiveness of NFT-MARS
compared to various baseline models using actual NFT transaction data
collected directly from the blockchain for four of the most popular NFT
collections. The source code and data are available at
https://anonymous.4open.science/r/RecSys2023-93ED
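The multi-task component of NFT-MARS, reflecting the dual nature of NFTs as artwork and financial assets, can be sketched with two prediction heads over a shared representation. The head definitions, targets, and loss weighting below are assumptions for illustration; the abstract does not give the model's exact formulation.

```python
# Illustrative sketch of the multi-task idea: a shared user/item
# representation feeds two heads, one for collecting preference (artwork)
# and one for a price-related signal (financial asset). Targets and the
# task weight alpha are made up for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadNFT(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.pref_head = nn.Linear(2 * dim, 1)   # will the user collect it?
        self.price_head = nn.Linear(2 * dim, 1)  # e.g. future-return proxy

    def forward(self, user_vec, item_vec):
        z = torch.cat([user_vec, item_vec], dim=-1)
        return self.pref_head(z).squeeze(-1), self.price_head(z).squeeze(-1)

model = DualHeadNFT()
u, v = torch.randn(4, 32), torch.randn(4, 32)    # shared embeddings
pref_logit, price_pred = model(u, v)
y_pref = torch.randint(0, 2, (4,)).float()       # collected or not
y_price = torch.randn(4)                         # price signal
alpha = 0.5                                      # task-balancing weight
loss = F.binary_cross_entropy_with_logits(pref_logit, y_pref) \
       + alpha * F.mse_loss(price_pred, y_price)
print(loss.item())
```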