How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility
Recommendation systems are ubiquitous and impact many domains; they have the
potential to influence product consumption, individuals' perceptions of the
world, and life-altering decisions. These systems are often evaluated or
trained with data from users already exposed to algorithmic recommendations;
this creates a pernicious feedback loop. Using simulations, we demonstrate how
using data confounded in this way homogenizes user behavior without increasing
utility.
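This feedback loop can be sketched in a few lines: a recommender that always serves the most-clicked item, trained only on its own confounded click log, collapses all users onto the same content without any gain in (hidden) utility. A toy simulation with hypothetical numbers, not the paper's actual model:

```python
import random

random.seed(42)
N_USERS, N_ITEMS, ROUNDS = 30, 15, 40

# True (hidden) utility of each item for each user; the recommender never sees it.
true_utility = [[random.random() for _ in range(N_ITEMS)] for _ in range(N_USERS)]

def simulate(confounded):
    """Run a toy interaction loop; return (#distinct items consumed, mean utility)."""
    clicks = [0] * N_ITEMS
    seen = [set() for _ in range(N_USERS)]
    for _ in range(ROUNDS):
        for u in range(N_USERS):
            if confounded:
                # Trained on its own click log: always serve the most-clicked item.
                item = max(range(N_ITEMS), key=lambda i: (clicks[i], -i))
            else:
                # Unconfounded baseline: uniform exploration.
                item = random.randrange(N_ITEMS)
            clicks[item] += 1
            seen[u].add(item)
    distinct = len(set().union(*seen))
    total = sum(len(s) for s in seen)
    mean_utility = sum(true_utility[u][i] for u in range(N_USERS) for i in seen[u]) / total
    return distinct, mean_utility

d_conf, u_conf = simulate(True)   # feedback on confounded clicks
d_rand, u_rand = simulate(False)  # exploration baseline
# The confounded loop homogenizes behavior (everyone consumes the same item)
# while the hidden utility it delivers is no better than random exposure.
```

The confounded condition collapses onto a single item after the first click, which is the homogenization effect in its most extreme form.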
Beautiful and damned. Combined effect of content quality and social ties on user engagement
User participation in online communities is driven by the intertwinement of
the social network structure with the crowd-generated content that flows along
its links. These aspects are rarely explored jointly and at scale. By looking
at how users generate and access pictures of varying beauty on Flickr, we
investigate how the production of quality impacts the dynamics of online social
systems. We develop a deep learning computer vision model to score images
according to their aesthetic value and we validate its output through
crowdsourcing. By applying it to over 15B Flickr photos, we study for the first
time how image beauty is distributed over a large-scale social system.
Beautiful images are evenly distributed in the network, although only a small
core of people get social recognition for them. To study the impact of exposure
to quality on user engagement, we set up matching experiments aimed at
detecting causality from observational data. Exposure to beauty is
double-edged: following people who produce high-quality content increases one's
probability of uploading better photos; however, an excessive imbalance between
the quality generated by a user and the user's neighbors leads to a decline in
engagement. Our analysis has practical implications for improving link
recommender systems.

Comment: 13 pages, 12 figures; final version published in IEEE Transactions on Knowledge and Data Engineering (Volume: PP, Issue: 99).
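The matching idea used to detect causality from observational data can be sketched on synthetic data (all numbers hypothetical; this is not the paper's actual procedure): exposed and unexposed users are compared only within strata of a shared confounder, which removes the spurious part of the naive difference.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic observational data. `activity` is a confounder: more active users
# are both more likely to be "exposed" (follow high-quality producers) and
# more engaged to begin with.
TRUE_EFFECT = 1.0
users = []
for _ in range(2000):
    activity = random.randint(0, 9)
    exposed = random.random() < 0.1 + 0.08 * activity
    engagement = activity + (TRUE_EFFECT if exposed else 0.0) + random.gauss(0, 1)
    users.append((activity, exposed, engagement))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: ignores the confounder, so it overstates the effect.
naive_diff = (mean([e for a, x, e in users if x]) -
              mean([e for a, x, e in users if not x]))

# Exact matching: compare exposed vs. unexposed within each activity stratum,
# then average the per-stratum differences weighted by stratum size.
strata = defaultdict(lambda: ([], []))
for a, x, e in users:
    strata[a][0 if x else 1].append(e)
diffs, weights = [], []
for treated, control in strata.values():
    if treated and control:
        diffs.append(mean(treated) - mean(control))
        weights.append(len(treated) + len(control))
matched_diff = sum(d * w for d, w in zip(diffs, weights)) / sum(weights)
```

The naive estimate absorbs the activity gap between exposed and unexposed users, while the matched estimate recovers something close to the true effect of 1.0.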
Evolution of Ego-networks in Social Media with Link Recommendations
Ego-networks are fundamental structures in social graphs, yet the process of
their evolution is still widely unexplored. In an online context, a key
question is how link recommender systems may skew the growth of these networks,
possibly restraining diversity. To shed light on this matter, we analyze the
complete temporal evolution of 170M ego-networks extracted from Flickr and
Tumblr, comparing links that are created spontaneously with those that have
been algorithmically recommended. We find that the evolution of ego-networks is
bursty, community-driven, and characterized by subsequent phases of explosive
diameter increase, slight shrinking, and stabilization. Recommendations favor
popular and well-connected nodes, limiting the diameter expansion. With a
matching experiment aimed at detecting causal relationships from observational
data, we find that the bias introduced by the recommendations fosters global
diversity in the process of neighbor selection. Last, with two link prediction
experiments, we show how insights from our analysis can be used to improve the
effectiveness of social recommender systems.

Comment: Proceedings of the 10th ACM International Conference on Web Search and Data Mining (WSDM 2017), Cambridge, UK. 10 pages, 16 figures, 1 table.
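The common-neighbours heuristic is a standard baseline in link-prediction experiments of this kind; a minimal sketch on a hypothetical graph (not the paper's data or method) scores each non-edge by the number of shared neighbours:

```python
from itertools import combinations

# Tiny undirected friendship graph as adjacency sets (hypothetical data).
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "e"},
    "d": {"a"},
    "e": {"c"},
}

def common_neighbor_scores(g):
    """Score every currently absent edge by the number of shared neighbours."""
    scores = {}
    for u, v in combinations(sorted(g), 2):
        if v not in g[u]:
            scores[(u, v)] = len(g[u] & g[v])
    return scores

scores = common_neighbor_scores(graph)
# ("a", "e") share neighbour "c", so they are a candidate recommendation;
# ("d", "e") share nobody and rank last.
```

Real link recommenders typically layer popularity and community signals on top of a structural score like this, which is exactly where the biases the paper analyzes enter.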
Hierarchical Attention Network for Visually-aware Food Recommendation
Food recommender systems play an important role in assisting users to
identify the desired food to eat. Deciding what food to eat is a complex and
multi-faceted process influenced by many factors, such as a recipe's
ingredients and appearance, the user's personal food preferences,
and contextual factors like what was eaten in past meals. In this work,
we formulate the food recommendation problem as predicting user preference on
recipes based on three key factors that determine a user's choice on food,
namely, 1) the user's (and other users') history; 2) the ingredients of a
recipe; and 3) the descriptive image of a recipe. To address this challenging
problem, we develop a dedicated neural network based solution Hierarchical
Attention based Food Recommendation (HAFR) which is capable of: 1) capturing
the collaborative filtering effect like what similar users tend to eat; 2)
inferring a user's preference at the ingredient level; and 3) learning user
preference from the recipe's visual images. To evaluate our proposed method, we
construct a large-scale dataset consisting of millions of ratings from
AllRecipes.com. Extensive experiments show that our method outperforms several
competing recommender solutions like Factorization Machine and Visual Bayesian
Personalized Ranking with an average improvement of 12%, offering promising
results in predicting user preference for food. Code and dataset will be
released upon acceptance.
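A hierarchical attention model of this kind typically weights a recipe's ingredient embeddings by their match to the user before pooling them into a recipe representation. A minimal, framework-free sketch of that attention step (toy vectors; not the paper's HAFR architecture):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention: pool `values` weighted by query-key match."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    pooled = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return pooled, weights

# Toy example: a user embedding attends over two ingredient embeddings; the
# recipe score is the dot product of the user vector and the pooled vector.
user = [1.0, 0.0]
ingredients = [[1.0, 0.0],   # ingredient aligned with the user's taste
               [0.0, 1.0]]   # orthogonal ingredient
pooled, weights = attend(user, ingredients, ingredients)
score = sum(u * p for u, p in zip(user, pooled))
```

The aligned ingredient receives the larger attention weight, so the pooled representation, and hence the recipe score, leans toward what the user already likes; a full model would stack a second attention layer over history and image features.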
A Survey on Fairness-aware Recommender Systems
As information filtering services, recommender systems have extremely
enriched our daily life by providing personalized suggestions and facilitating
people in decision-making, which makes them vital and indispensable to human
society in the information era. However, as people become more dependent on
them, recent studies show that recommender systems can have unintended
impacts on society and individuals because of their unfairness
(e.g., gender discrimination in job recommendations). To develop trustworthy
services, it is crucial to devise fairness-aware recommender systems that can
mitigate these bias issues. In this survey, we summarise existing methodologies
and practices of fairness in recommender systems. Firstly, we present concepts
of fairness in different recommendation scenarios, comprehensively categorize
current advances, and introduce typical methods to promote fairness in
different stages of recommender systems. Next, after introducing datasets and
evaluation metrics used to assess the fairness of recommender systems, we
delve into the influence that fairness-aware recommender
systems exert on real-world industrial applications. Subsequently, we highlight
the connection between fairness and other principles of trustworthy recommender
systems, aiming to consider trustworthiness principles holistically while
advocating for fairness. Finally, we summarize this review, spotlighting
promising opportunities in comprehending concepts, frameworks, the balance
between accuracy and fairness, and the ties with trustworthiness, with the
ultimate goal of fostering the development of fairness-aware recommender
systems.

Comment: 27 pages, 9 figures.
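As a concrete example of the kind of metric such surveys cover, exposure parity compares how recommendation slots are distributed across provider groups. A minimal sketch with hypothetical data (item-to-group mapping and recommendation lists invented for illustration):

```python
from collections import Counter

# Hypothetical data: each item belongs to a provider group, and each user
# receives a ranked recommendation list.
item_group = {"i1": "A", "i2": "A", "i3": "B", "i4": "B"}
rec_lists = [["i1", "i2", "i3"],
             ["i1", "i2", "i4"],
             ["i1", "i3", "i2"]]

def exposure_share(recs, groups):
    """Fraction of all recommendation slots occupied by each provider group."""
    counts = Counter(groups[i] for lst in recs for i in lst)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

share = exposure_share(rec_lists, item_group)
# A simple group-fairness gap: 0 means both groups get equal exposure.
parity_gap = abs(share.get("A", 0.0) - share.get("B", 0.0))
```

Here group A occupies 6 of 9 slots, so the parity gap is 1/3; fairness-aware re-ranking methods aim to shrink such gaps while preserving accuracy.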
An analysis of popularity biases in recommender system evaluation and algorithms
Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defense date: 03-10-2019. Recommendation technologies have progressively extended their presence in everyday applications and services. Recommender systems aim to provide individualized suggestions of products or options that users may find interesting or useful. Implicit in the concept of recommendation is the idea that the most satisfying suggestions for each user are those that take their particular tastes into account, so one would expect the most effective recommendation algorithms to be the most personalized. However, it has recently been observed that simply recommending the most popular products is not a much worse strategy than the best and most sophisticated personalized algorithms, and moreover, that the latter tend to bias their recommendations toward majority options. It is therefore relevant to understand to what extent and under what circumstances popularity is a truly effective signal for recommendation, and whether its apparent effectiveness is due to certain biases in current offline evaluation methodologies, as everything seems to indicate, or not.
In this thesis we address this question from a fully formal standpoint, identifying the factors that can determine the answer and modeling them in terms of probabilistic dependencies between random variables such as rating, discovery, and relevance. In this way, we characterize specific situations that guarantee that popularity is effective, or that it is not, and we establish the conditions under which contradictions can arise between observed and true accuracy. The main conclusions refer to prototypical simplified scenarios, beyond which the formal analysis concludes that any outcome is possible. To go deeper into the general scenario without such simplifying assumptions, we study a particular case in which item discovery results from interaction between users in a social network.
In addition, in this thesis we provide a formal explanation of the popularity bias exhibited by collaborative filtering algorithms. To this end, we develop a probabilistic version of the nearest-neighbors algorithm kNN. This version also reveals the fundamental condition that makes kNN produce personalized recommendations and distinguishes it from pure popularity.
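The popularity bias of non-normalized user-based kNN can be illustrated directly: because the score sums similarity contributions over neighbours, an item liked by many moderately similar neighbours outscores a niche item backed by a single neighbour. A toy sketch with hypothetical rating profiles (not the thesis's probabilistic formulation):

```python
def jaccard(s, t):
    """Jaccard similarity between two sets of liked items."""
    return len(s & t) / len(s | t) if s | t else 0.0

# Hypothetical binary rating profiles: each user is a set of liked items.
ratings = {
    "me": {"x"},
    "u1": {"x", "pop"},
    "u2": {"x", "pop"},
    "u3": {"x", "pop"},
    "u4": {"x", "y", "niche"},
}

def knn_score(user, item, k=4):
    """Non-normalized user-based kNN: sum similarities of the k most
    similar neighbours who liked `item`."""
    neighbours = sorted((u for u in ratings if u != user),
                        key=lambda u: jaccard(ratings[user], ratings[u]),
                        reverse=True)[:k]
    return sum(jaccard(ratings[user], ratings[u])
               for u in neighbours if item in ratings[u])

score_pop = knn_score("me", "pop")      # three neighbours contribute 0.5 each
score_niche = knn_score("me", "niche")  # one neighbour contributes 1/3
```

Because contributions are summed rather than averaged, the score of an item grows with the number of neighbours who rated it; this is the structural link between kNN scoring and raw popularity that the thesis makes precise.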
Towards Responsible Media Recommendation
Reading or viewing recommendations are a common feature on modern media sites. What is shown to consumers as recommendations is nowadays often automatically determined by AI algorithms, typically with the goal of helping consumers discover relevant content more easily. However, the highlighting or filtering of information that comes with such recommendations may lead to undesired effects on consumers or even society, for example, when an algorithm leads to the creation of filter bubbles or amplifies the spread of misinformation. These well-documented phenomena create a need for improved mechanisms for responsible media recommendation, which avoid such negative effects of recommender systems. In this research note, we review the threats and challenges that may result from the use of automated media recommendation technology, and we outline possible steps to mitigate such undesired societal effects in the future.