    A temporal-focused trustworthiness to enhance trust-based recommender systems

    Get PDF
    Collaborative Filtering (CF) is the most successful technology for recommender systems. The technology does not rely on the actual content of the items, but instead requires users to indicate preferences, most commonly in the form of ratings. While CF is known for traditional problems such as cold-start, sparsity, and modest accuracy, trust-based CF has previously been proposed to address these issues by focusing on trust values among users. Nonetheless, all existing trust-based approaches, whether explicit or implicit, use trust as a factor independent of scope. We argue that trustworthiness should not be the same across all conditions; hence, trust values should change to suit a certain scope or focus area. To validate the temporal-focused trustworthiness proposed in this paper, we propose a novel pheromone-based approach that calculates trustworthiness with a focus on the time factor. Implementation of the proposed approach is expected to reduce cold-start and sparsity as well as improve the accuracy of recommendation results.
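
    As a rough illustration of the abstract's central idea, the sketch below models trust as a pheromone level that evaporates with elapsed time and is reinforced by successful interactions. The decay form, the EVAPORATION_RATE and DEPOSIT parameters, and the function name are assumptions for illustration only, not the paper's actual formulation.

```python
import math

# Assumed parameters; the paper's real values and formulas are not given here.
EVAPORATION_RATE = 0.1   # fraction of trust that decays per unit of time
DEPOSIT = 1.0            # reinforcement for a successful interaction

def updated_trust(trust: float, elapsed: float, successful: bool) -> float:
    """Evaporate existing trust over the elapsed time, then deposit new
    'pheromone' if the latest interaction between two users succeeded."""
    trust *= math.exp(-EVAPORATION_RATE * elapsed)  # temporal decay
    if successful:
        trust += DEPOSIT
    return trust

# Example: trust of 2.0, five idle time units, then a good recommendation.
print(updated_trust(trust=2.0, elapsed=5.0, successful=True))  # ~2.21
```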

    Recommendations based on social links

    Get PDF
    The goal of this chapter is to give an overview of recent works on the development of social link-based recommender systems and to offer insights on related issues, as well as future directions for research. Among several kinds of social recommendations, this chapter focuses on recommendations that are based on users’ self-defined (i.e., explicit) social links and that suggest items, rather than people, of interest. The chapter starts by reviewing the need for social link-based recommendations and the studies that demonstrate the viability of social networks as useful information sources. Following that, the core part of the chapter dissects and examines modern research on social link-based recommendations along several dimensions. It concludes with a discussion of several important issues and future directions for social link-based recommendation research.
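
    A minimal baseline capturing what recommending items from explicit social links means in practice: predict a user's interest in an item from the ratings of their self-declared friends. The toy data and the predict function below are illustrative assumptions, not a method from the chapter.

```python
# user -> {item: rating}; toy data for illustration only
ratings = {
    "alice": {"book1": 5, "book2": 2},
    "bob":   {"book1": 4, "book3": 5},
}
# explicit, self-defined social links
friends = {"carol": ["alice", "bob"]}

def predict(user: str, item: str) -> float | None:
    """Average the item's ratings over the user's explicit friends."""
    scores = [ratings[f][item] for f in friends.get(user, [])
              if item in ratings.get(f, {})]
    return sum(scores) / len(scores) if scores else None

print(predict("carol", "book1"))  # -> 4.5
```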

    A Framework for Exploiting Internet of Things for Context-Aware Trust-based Personalized Services

    Get PDF
    In recent years, we have witnessed the introduction of the Internet of Things (IoT) as an integral part of the Internet, with billions of interconnected and addressable everyday objects. On the one hand, these objects generate massive volumes of data that can be exploited to gain useful insights into our day-to-day needs. On the other hand, context-aware recommender systems (CARSs) are intelligent systems that assist users in making service consumption choices that satisfy their preferences based on their contextual situations. However, one of the major challenges in developing CARSs is the lack of functionality providing the dynamic and reliable context information required by the recommendation decision process, based on the objects that users interact with in their environments. Thus, contextual information obtained from IoT objects and other sources can be exploited to build CARSs that satisfy users’ preferences and improve quality of experience and recommendation accuracy. This article describes the components of a conceptual IoT-based framework for context-aware personalized recommendations. The framework addresses the weakness whereby CARSs rely on static and limited contextual information from the user’s mobile phone by providing additional components for reliable and dynamic contextual information using IoT context sources. The core of the framework consists of context recognition and reasoning management and a dynamic user profile model incorporating trust, to improve the accuracy of context-aware personalized recommendations. Experimental evaluations show that incorporating context and trust in personalized recommendations can improve their accuracy.
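
    The sketch below illustrates, under stated assumptions, how the framework's two signals might combine: neighbor ratings weighted by both trust and the similarity between a neighbor's (IoT-derived) context and the target context. The field names, similarity measure, and weighting scheme are all invented for illustration.

```python
def context_similarity(ctx_a: dict, ctx_b: dict) -> float:
    """Fraction of shared context attributes (e.g. location, time) that match."""
    keys = ctx_a.keys() & ctx_b.keys()
    return sum(ctx_a[k] == ctx_b[k] for k in keys) / len(keys) if keys else 0.0

def trust_weighted_score(neighbors: list[dict], target_ctx: dict) -> float:
    """Combine neighbor ratings, each weighted by trust * context match."""
    num = den = 0.0
    for n in neighbors:
        w = n["trust"] * context_similarity(n["context"], target_ctx)
        num += w * n["rating"]
        den += w
    return num / den if den else 0.0

neighbors = [
    {"rating": 4.0, "trust": 0.9, "context": {"location": "gym", "time": "morning"}},
    {"rating": 2.0, "trust": 0.4, "context": {"location": "home", "time": "morning"}},
]
print(trust_weighted_score(neighbors, {"location": "gym", "time": "morning"}))  # ~3.64
```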

    Information Systems Research Themes: A Seventeen-year Data-driven Temporal Analysis

    Get PDF
    Extending the research on our discipline’s identity, we examine how the major research themes have evolved in four top IS journals: Management Information Systems Quarterly (MISQ), Information Systems Research (ISR), Journal of the Association for Information Systems (JAIS), and Journal of Management Information Systems (JMIS). By doing so, we answer Palvia, Daneshvar Kakhki, Ghoshal, Uppala, and Wang’s (2015) call to provide continuous updates on research trends in IS, given the discipline’s dynamism. Second, building on Sidorova, Evangelopoulos, Valacich, and Ramakrishnan (2008), we examine temporal trends in prominent research streams over the last 17 years. We show that, as IS research evolves over time, certain themes endure the test of time, while others peak and trough. More importantly, our analysis identifies new emergent themes that have begun to gain prominence in the IS research community. Further, we break down our findings by journal and show the type of content each journal may desire most. Our findings also allow the IS research community to discern the specific contributions and roles of our premier journals in the evolution of research themes over time.
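
    The kind of temporal breakdown described here can be sketched as follows: given papers labeled with a theme (e.g., via topic modeling), compute each theme's yearly share to see which themes endure and which peak and trough. The data and theme labels below are placeholders, not the paper's corpus or taxonomy.

```python
from collections import Counter, defaultdict

# (year, theme) pairs; placeholder labels standing in for topic-model output
papers = [
    (2005, "IT adoption"), (2005, "e-commerce"),
    (2015, "social media"), (2015, "social media"), (2015, "IT adoption"),
]

per_year: dict[int, Counter] = defaultdict(Counter)
for year, theme in papers:
    per_year[year][theme] += 1

# Yearly share of each theme, i.e., the trend lines such an analysis traces
for year in sorted(per_year):
    total = sum(per_year[year].values())
    print(year, {t: round(c / total, 2) for t, c in per_year[year].items()})
```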

    Trust aware recommender system with distrust in different views of trusted users

    Get PDF
    No abstract. Keywords: recommender system; collaborative filtering; trust aware; distrust

    Visualization for Recommendation Explainability: A Survey and New Perspectives

    Full text link
    Providing system-generated explanations for recommendations represents an important step towards transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the last two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper aims to provide a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems along four dimensions: explanation goal, explanation scope, explanation style, and explanation format. Recognizing the importance of visualization, we approach the recommender system literature from the angle of explanatory visualizations, that is, using visualizations as a display style for explanations. As a result, we derive a set of guidelines that might be constructive for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems. Comment: Updated version Nov. 2023, 36 pages.
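
    A small data model can make the survey's four dimensions concrete. The enum members below are illustrative values drawn from this literature, not the paper's exhaustive taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class Goal(Enum):          # why the explanation is shown
    TRANSPARENCY = "transparency"
    TRUST = "trust"
    PERSUASIVENESS = "persuasiveness"

class Scope(Enum):         # which part of the pipeline it covers
    INPUT = "input"
    OUTPUT = "output"
    MODEL = "model"

class Style(Enum):         # the information source of the rationale
    CONTENT_BASED = "content-based"
    COLLABORATIVE = "collaborative"

class Format(Enum):        # how the explanation is displayed
    TEXT = "text"
    VISUALIZATION = "visualization"

@dataclass
class Explanation:
    goal: Goal
    scope: Scope
    style: Style
    fmt: Format
    payload: str           # the rendered explanation shown to the user

ex = Explanation(Goal.TRUST, Scope.OUTPUT, Style.COLLABORATIVE,
                 Format.VISUALIZATION, "similar-users chart")
print(ex.goal.value, ex.fmt.value)
```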

    To whom to explain and what?: Systematic literature review on empirical studies on Explainable Artificial Intelligence (XAI)

    Get PDF
    Expectations towards artificial intelligence (AI) have risen continuously because of the evolution of machine learning models. However, the models’ decisions are often not intuitively understandable. For this reason, the field of Explainable AI (XAI) has emerged, which develops techniques to help users understand AI better. As AI’s use spreads more broadly in society, it becomes like a co-worker that people need to understand; for this reason, AI-human interaction is a topic of broad and current research interest. This thesis outlines the themes of the current empirical XAI research literature from the human-computer interaction (HCI) perspective. The study’s method is an explorative, systematic literature review carried out following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. In total, 29 articles that reported an empirical study of XAI from the HCI perspective were included in the review. The material was collected through database searches and snowball sampling. The articles were analyzed based on their descriptive statistics, stakeholder groups, research questions, and theoretical approaches. This study aims to determine what factors made users consider XAI transparent, explainable, or trustworthy, and at whom XAI research is aimed. Based on the analysis, three stakeholder groups at whom the current XAI literature is aimed emerged: end-users, domain experts, and developers. This study’s findings show that domain experts’ needs towards XAI vary greatly between domains, whereas developers need better tools to create XAI systems. The end-users, for their part, considered case-based explanations unfair and wanted explanations that “speak their language”. The results also indicate that the effect of current XAI solutions on users’ trust towards AI systems is relatively small or even non-existent. The studies’ direct theoretical contributions and the number of theoretical lenses used were both found to be relatively low. This thesis’s main contribution is a synthesis of the extant empirical XAI literature from the HCI perspective, which previous studies have rarely brought together. Building on this thesis, researchers can further investigate research avenues such as explanation quality methodologies, algorithm auditing methods, users’ mental models, and prior conceptions about AI.

    Explainability in Music Recommender Systems

    Full text link
    The most common way to listen to recorded music nowadays is via streaming platforms, which provide access to tens of millions of tracks. To assist users in effectively browsing these large catalogs, the integration of Music Recommender Systems (MRSs) has become essential. Current real-world MRSs are often quite complex and optimized for recommendation accuracy. They combine several building blocks based on collaborative filtering and content-based recommendation. This complexity can hinder the ability to explain recommendations to end users, which is particularly important for recommendations perceived as unexpected or inappropriate. While pure recommendation performance often correlates with user satisfaction, explainability has a positive impact on other factors such as trust and forgiveness, which are ultimately essential to maintaining user loyalty. In this article, we discuss how explainability can be addressed in the context of MRSs. We provide perspectives on how explainability could improve music recommendation algorithms and enhance user experience. First, we review common dimensions and goals of recommender explainability, and of eXplainable Artificial Intelligence (XAI) in general, and elaborate on the extent to which these apply -- or need to be adapted -- to the specific characteristics of music consumption and recommendation. Then, we show how explainability components can be integrated within an MRS and in what form explanations can be provided. Since the evaluation of explanation quality is decoupled from pure accuracy-based evaluation criteria, we also discuss requirements and strategies for evaluating explanations of music recommendations. Finally, we describe the current challenges for introducing explainability within a large-scale industrial music recommender system and provide research perspectives. Comment: To appear in AI Magazine, Special Topic on Recommender Systems 2022.
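
    One concrete form such an explanation component can take is the familiar collaborative "because you listened to X" rationale, sketched below. The similarity table and function name are placeholder assumptions, not components of any production MRS.

```python
# track -> similar tracks, e.g. from co-listening data; toy values only
similar_tracks = {
    "track_a": ["track_b", "track_c"],
    "track_d": ["track_b"],
}

def explain(recommended: str, history: list[str]) -> str:
    """Attach a human-readable rationale to a recommended track."""
    seeds = [t for t in history if recommended in similar_tracks.get(t, [])]
    if seeds:
        return f"Recommended because you listened to {', '.join(seeds)}."
    return "Recommended based on your overall listening profile."

print(explain("track_b", ["track_a", "track_d"]))
# -> Recommended because you listened to track_a, track_d.
```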

    Understanding User Perceptions of Trustworthiness in E-recruitment Systems

    Get PDF
    Algorithmic systems are increasingly deployed to make decisions that people used to make. Perceptions of these systems can significantly influence their adoption, yet, broadly speaking, users’ understanding of the internal workings of these systems is limited. To explore users’ perceptions of algorithmic systems, we developed a prototype e-recruitment system called Algorithm Playground, where we offer users a look behind the scenes of such systems and provide “how” and “why” explanations of how job applicants are ranked by their algorithms. Using an online study with 110 participants, we measured the perceived fairness, transparency, and trustworthiness of e-recruitment systems. Our results show that users’ understanding of the data and reasoning behind candidates’ rankings and selection evoked positive attitudes: participants rated our platform as fairer, more reliable, and more transparent and trustworthy than the e-recruitment systems they had used in the past.
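
    As a hedged sketch of what a "how"/"why" explanation for applicant ranking can look like: a transparent linear score whose per-feature terms double as the "why" behind each candidate's position. The weights and feature names are invented; the study's actual ranking logic is not described in this abstract.

```python
# Illustrative feature weights; normalized features are assumed in [0, 1]
WEIGHTS = {"years_experience": 0.5, "skill_match": 0.4, "education": 0.1}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the ranking score and its per-feature contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"years_experience": 0.6, "skill_match": 0.9, "education": 0.8})
print(f"score={total:.2f}")   # the "how": a weighted sum of features
print(why)                    # the "why": each feature's contribution
```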