1,388 research outputs found

    Towards Responsible Media Recommendation

    Reading or viewing recommendations are a common feature on modern media sites. What is shown to consumers as recommendations is nowadays often automatically determined by AI algorithms, typically with the goal of helping consumers discover relevant content more easily. However, the highlighting or filtering of information that comes with such recommendations may lead to undesired effects on consumers or even society, for example, when an algorithm leads to the creation of filter bubbles or amplifies the spread of misinformation. These well-documented phenomena create a need for improved mechanisms for responsible media recommendation, which avoid such negative effects of recommender systems. In this research note, we review the threats and challenges that may result from the use of automated media recommendation technology, and we outline possible steps to mitigate such undesired societal effects in the future.

    The Limits of Popularity-Based Recommendations, and the Role of Social Ties

    In this paper we introduce a mathematical model that captures some of the salient features of recommender systems that are based on popularity and that try to exploit social ties among the users. We show that, under very general conditions, the market always converges to a steady state, for which we are able to give an explicit form. Thanks to this, we can tell rather precisely how much a market is altered by a recommendation system, and determine the power of users to influence others. Our theoretical results are complemented by experiments with real-world social networks showing that social graphs prevent large market distortions in spite of the presence of highly influential users.
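    To make the steady-state claim concrete, here is a minimal simulation sketch of a popularity-feedback market (a toy model with assumed parameters such as the following probability alpha, not the paper's actual model): users either follow a popularity-proportional recommendation or pick by their own uniform taste, and the market shares are iterated until they settle.

    import numpy as np

    def popularity_market(n_items=5, alpha=0.3, steps=20_000, seed=0):
        # Toy popularity-feedback market: with probability alpha a user
        # follows a popularity-proportional recommendation; otherwise
        # they pick uniformly at random (their intrinsic taste here).
        rng = np.random.default_rng(seed)
        counts = np.ones(n_items)  # smoothed consumption counts
        for _ in range(steps):
            if rng.random() < alpha:
                item = int(rng.choice(n_items, p=counts / counts.sum()))
            else:
                item = int(rng.integers(n_items))
            counts[item] += 1
        return counts / counts.sum()  # empirical market shares

    print(popularity_market())  # shares settle near a steady state

    In this toy setup the uniform intrinsic choice damps the rich-get-richer reinforcement, so the shares settle close to uniform; raising alpha strengthens the distortion the recommender introduces.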

    A personality-aware group recommendation system based on pairwise preferences

    Human personality plays a crucial role in decision-making, and it is of paramount importance when individuals negotiate with each other to reach a common group decision. Such situations arise, for instance, when a group of individuals wants to watch a movie together. It is well known that people influence each other's decisions: the more assertive a person is, the more influence they have on the final decision. In order to obtain a more realistic group recommendation system (GRS), we need to accommodate the assertiveness of the different group members' personalities. Although pairwise preferences are long-established in group decision-making (GDM), they have received very little attention in the recommendation systems community. Driven by the advantages of pairwise preferences over ratings in the recommendation systems domain, we pursue this approach further in this paper, but for GRS. We have devised a three-stage approach to GRS in which we 1) resort to three binary matrix factorization methods, 2) develop an influence graph that includes assertiveness and cooperativeness as personality traits, and 3) apply an opinion dynamics model in order to reach consensus. We show that the final opinion is related to the stationary distribution of a Markov chain associated with the influence graph. Our experimental results demonstrate that our approach achieves high precision and fairness.
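    The consensus step can be illustrated with a short DeGroot-style opinion-dynamics sketch (an illustrative toy with made-up influence weights, not the paper's exact model): members repeatedly average their scores for an item through a row-stochastic influence matrix, and the group opinion converges to a weighted mean whose weights form the stationary distribution of the associated Markov chain.

    import numpy as np

    # Row-stochastic influence matrix: entry (i, j) is how much member i
    # weighs member j's opinion. More assertive members keep more
    # self-weight; the values here are made up for illustration.
    W = np.array([[0.6, 0.2, 0.2],
                  [0.3, 0.5, 0.2],
                  [0.4, 0.1, 0.5]])

    x = np.array([4.0, 2.0, 5.0])  # members' initial scores for one item

    # DeGroot dynamics: repeated averaging drives the scores to consensus.
    for _ in range(100):
        x = W @ x
    print("consensus score:", x)  # all three entries are (nearly) equal

    # The same consensus is the initial scores weighted by the stationary
    # distribution of W viewed as a Markov chain (left eigenvector for 1).
    vals, vecs = np.linalg.eig(W.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()
    print("stationary weights:", pi, "->", pi @ np.array([4.0, 2.0, 5.0]))

    A member with a large stationary weight is exactly an influential (assertive) member: the consensus leans toward their initial score.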

    To whom to explain and what? Systematic literature review on empirical studies on Explainable Artificial Intelligence (XAI)

    Expectations towards artificial intelligence (AI) have risen continuously as machine learning models have evolved. However, the models' decisions are often not intuitively understandable. For this reason, the field of Explainable AI (XAI) has emerged, which develops techniques to help users understand AI better. As the use of AI spreads more broadly in society, AI becomes like a co-worker that people need to understand; for this reason, human-AI interaction is of broad and current research interest. This thesis outlines the themes of the current empirical XAI research literature from the human-computer interaction (HCI) perspective. The method is an explorative, systematic literature review carried out following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. In total, 29 articles that conducted an empirical study of XAI from the HCI perspective were included in the review. The material was collected through database searches and snowball sampling. The articles were analyzed based on their descriptive statistics, stakeholder groups, research questions, and theoretical approaches. This study aims to determine which factors made users consider XAI transparent, explainable, or trustworthy, and for whom the XAI research was intended. Based on the analysis, three stakeholder groups at whom the current XAI literature is aimed emerged: end-users, domain experts, and developers. The findings show that domain experts' needs towards XAI vary greatly between domains, whereas developers need better tools to create XAI systems. End-users, for their part, considered case-based explanations unfair and wanted explanations that "speak their language". The results also indicate that the effect of current XAI solutions on users' trust in AI systems is small or even non-existent. Both the studies' direct theoretical contributions and the number of theoretical lenses they used were found to be relatively low. This thesis's main contribution is a synthesis of the extant empirical XAI literature from the HCI perspective, which previous studies have rarely brought together. Building on this thesis, researchers can further investigate research avenues such as explanation quality methodologies, algorithm auditing methods, users' mental models, and prior conceptions about AI.

    Group Modeling : selecting a sequence of television items to suit a group of viewers

    Peer-reviewed postprint.

    How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility

    Recommendation systems are ubiquitous and impact many domains; they have the potential to influence product consumption, individuals' perceptions of the world, and life-altering decisions. These systems are often evaluated or trained with data from users already exposed to algorithmic recommendations, which creates a pernicious feedback loop. Using simulations, we demonstrate how training on data confounded in this way homogenizes user behavior without increasing utility.
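    A toy version of such a simulation (hypothetical parameters, not the authors' code) makes the feedback loop visible: the platform re-estimates item scores only from clicks it induced itself, so consumption collapses onto a few items even though users' underlying tastes are heterogeneous.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, n_items, rounds = 200, 20, 30
    # Heterogeneous "true" tastes, one preference vector per user.
    true_pref = rng.dirichlet(np.ones(n_items), size=n_users)

    scores = np.ones(n_items)  # platform's popularity estimate
    for _ in range(rounds):
        top = np.argsort(scores)[-3:]          # recommend the 3 "best" items
        clicks = np.zeros(n_items)
        for u in range(n_users):
            p = true_pref[u, top] / true_pref[u, top].sum()
            clicks[rng.choice(top, p=p)] += 1  # users choose only among recs
        scores = clicks + 1                    # re-train on confounded clicks

    # Homogeneity: how concentrated consumption has become.
    print("top-item share of clicks:", scores.max() / scores.sum())

    Because items outside the recommended slate can never be clicked, their estimated scores never recover, which is exactly the confounding the abstract describes.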

    Filter Bubbles in Recommender Systems: Fact or Fallacy -- A Systematic Review

    A filter bubble refers to the phenomenon where Internet customization effectively isolates individuals from diverse opinions or materials, resulting in their exposure to only a select set of content. This can lead to the reinforcement of existing attitudes, beliefs, or conditions. In this study, our primary focus is to investigate the impact of filter bubbles in recommender systems. This pioneering research aims to uncover the reasons behind this problem, explore potential solutions, and propose an integrated tool to help users avoid filter bubbles in recommender systems. To achieve this objective, we conduct a systematic literature review on the topic of filter bubbles in recommender systems. The reviewed articles are carefully analyzed and classified, providing valuable insights that inform the development of an integrated approach. Notably, our review reveals evidence of filter bubbles in recommendation systems, highlighting several biases that contribute to their existence. Moreover, we propose mechanisms to mitigate the impact of filter bubbles and demonstrate that incorporating diversity into recommendations can potentially help alleviate this issue. The findings of this timely review will serve as a benchmark for researchers working in interdisciplinary fields such as privacy, artificial intelligence ethics, and recommendation systems. Furthermore, it will open new avenues for future research in related domains, prompting further exploration and advancement in this critical area.
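    As a concrete instance of "incorporating diversity into recommendations", the sketch below applies maximal marginal relevance (MMR) re-ranking, one standard diversification technique (an illustration chosen here, not the integrated tool the paper proposes): each pick trades an item's relevance against its similarity to items already selected.

    import numpy as np

    def mmr_rerank(relevance, item_vecs, k=5, lam=0.7):
        # Greedy MMR: repeatedly pick the item maximizing
        # lam * relevance - (1 - lam) * max-similarity-to-selected.
        v = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
        sim = v @ v.T  # cosine similarity between item embeddings
        selected, candidates = [], list(range(len(relevance)))
        while candidates and len(selected) < k:
            def score(i):
                redundancy = max((sim[i, j] for j in selected), default=0.0)
                return lam * relevance[i] - (1 - lam) * redundancy
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return selected

    rng = np.random.default_rng(0)
    print(mmr_rerank(rng.random(20), rng.normal(size=(20, 8))))

    Lowering lam pushes the list toward diversity at the cost of raw relevance; that trade-off is the lever such mitigation mechanisms tune.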