    QUARE: 1st Workshop on Measuring the Quality of Explanations in Recommender Systems

    QUARE - measuring the QUality of explAnations in REcommender systems - is the first workshop that aims to promote discussion of future research and practice directions around evaluation methodologies for explanations in recommender systems. To that end, we bring together researchers and practitioners from academia and industry to facilitate discussions about the main issues and best practices in the respective areas, identify possible synergies, and outline priorities regarding future research directions. Additionally, we want to stimulate reflection on methods to systematically and holistically assess explanation approaches, impact, and goals, at the interplay between organisational and human values. The homepage of the workshop is available at: https://sites.google.com/view/quare-2022/

    Someone really wanted that song but it was not me!: Evaluating Which Information to Disclose in Explanations for Group Recommendations

    Explanations can be used to provide transparency in recommender systems (RSs). However, when presenting a shared explanation to a group, we need to balance users' need for privacy with their need for transparency. This is particularly challenging when group members have highly diverging tastes and individuals are confronted with items they do not like, for the benefit of the group. This paper investigates which information people would like to disclose in explanations for group recommendations in the music domain.

    Capturing the ineffable: Collecting, analysing, and automating web document quality assessments

    Automatic estimation of the quality of Web documents is a challenging task, especially because the definition of quality heavily depends on the individuals who define it, on the context where it applies, and on the nature of the tasks at hand. Our long-term goal is to allow automatic assessment of Web document quality tailored to specific user requirements and context. This process relies on the possibility to identify document characteristics that indicate their quality. In this paper, we investigate these characteristics as follows: (1) we define features of Web documents that may be indicators of quality; (2) we design a procedure for automatically extracting those features; (3) we develop a Web application to present these results to niche users, to check the relevance of these features as quality indicators and collect quality assessments; (4) we analyse users' qualitative assessments of Web documents to refine our definition of the features that determine quality, and establish their relative weight in the overall quality, i.e., in the summarizing score users attribute to a document, determining whether it meets their standards or not. Hence, our contribution is threefold: a Web application for nichesourcing quality assessments; a curated dataset of Web document assessments; and a thorough analysis of the quality assessments collected by means of two case studies involving experts (journalists and media scholars). The dataset obtained is limited in size but highly valuable because of the quality of the experts who provided it. Our analyses show that: (1) it is possible to automate the process of Web document quality estimation to a high level of accuracy; (2) document features shown in isolation are poorly informative to users; and (3) for the task we propose (i.e., choosing Web documents to use as a source for writing an article on the vaccination debate), the most important quality dimensions are accuracy, trustworthiness, and precision.
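
    To make step (2) concrete, here is a minimal sketch of what automatic feature extraction could look like. The specific features (word and sentence counts, average sentence length, link and image counts) are illustrative assumptions, not the feature set defined in the paper.

    ```python
    # Hedged sketch of step (2): extracting simple surface-level features
    # from a Web document that could serve as quality indicators. The
    # concrete feature set below is illustrative only.
    import re

    def extract_features(html_text):
        """Return a dict of simple quality-indicator features."""
        text = re.sub(r"<[^>]+>", " ", html_text)  # strip HTML tags
        words = re.findall(r"[A-Za-z']+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        return {
            "num_words": len(words),
            "num_sentences": len(sentences),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "num_links": len(re.findall(r"<a\s", html_text, re.IGNORECASE)),
            "num_images": len(re.findall(r"<img\s", html_text, re.IGNORECASE)),
        }
    ```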

    Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces

    Diversity in personalized news recommender systems is often defined as dissimilarity, and operationalized based on topic diversity (e.g., corona versus farmers strike). Diversity in news media, however, is understood as multiperspectivity (e.g., different opinions on corona measures), and is arguably a key responsibility of the press in a democratic society. While viewpoint diversity is often considered synonymous with source diversity in the communication science domain, in this paper we take a computational view. We operationalize the notion of framing, adopted from communication science, and apply it to the re-ranking of topic-relevant recommended lists, to form the basis of a novel viewpoint diversification method. Our offline evaluation indicates that the proposed method is capable of enhancing the viewpoint diversity of recommendation lists according to a diversity metric from the literature. In an online study on the Blendle platform, a Dutch news aggregator, with more than 2000 users, we found that users are willing to consume viewpoint-diverse news recommendations. We also found that presentation characteristics significantly influence the reading behaviour of diverse recommendations. These results suggest that future research on presentation aspects of recommendations can be just as important as novel viewpoint diversification methods to truly achieve multiperspectivity in online news environments.
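
    As a rough illustration of re-ranking for viewpoint diversity, the sketch below greedily trades off relevance against distance in a framing space. The frame_distance function and the linear trade-off are hypothetical placeholders, not the diversification method or the diversity metric used in the paper.

    ```python
    # Greedy diversity-aware re-ranking sketch: at each step, pick the
    # candidate with the best mix of relevance and distance (in some
    # framing space) to the items already selected.
    def rerank(items, relevance, frame_distance, k, lam=0.5):
        """Greedily select k items, balancing relevance and frame diversity.

        items: candidate item ids, pre-ranked by topical relevance.
        relevance: dict mapping item id -> relevance score.
        frame_distance: function (item, item) -> distance in framing space.
        lam: trade-off weight; 1.0 is pure relevance, 0.0 is pure diversity.
        """
        selected = []
        candidates = list(items)
        while candidates and len(selected) < k:
            def score(c):
                # Diversity term: distance to the closest already-selected item.
                div = min((frame_distance(c, s) for s in selected), default=1.0)
                return lam * relevance[c] + (1 - lam) * div
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return selected
    ```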

    Workshop on Explainable User Models and Personalized Systems (ExUM 2021)

    Adaptive and personalized systems have become pervasive technologies that are gradually playing an increasingly important role in our daily lives. Indeed, we are now used to interacting every day with algorithms that help us in several scenarios, ranging from services that suggest music to listen to or movies to watch, to personal assistants able to proactively support us in complex decision-making tasks. As the importance of such technologies in our everyday lives grows, it is fundamental that the internal mechanisms that guide these algorithms are as clear as possible. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the personalization strategy (e.g., recommendation accuracy) at the expense of the explainability and transparency of the model. The main research question which arises from this scenario is simple and straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? The workshop aims to provide a forum for discussing such problems, challenges, and innovative research approaches in the area, by investigating the role of transparency and explainability in recent methodologies for building user models or developing personalized and adaptive systems.

    Humans Disagree With the IoU for Measuring Object Detector Localization Error

    The localization quality of automatic object detectors is typically evaluated by the Intersection over Union (IoU) score. In this work, we show that humans have a different view on localization quality. To evaluate this, we conduct a survey with more than 70 participants. The results show that, for localization errors with the exact same IoU score, humans may not consider these errors equal and express a preference. Our work is the first to evaluate IoU with humans, and makes it clear that relying on IoU scores alone to evaluate localization errors might not be sufficient.
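
    For reference, the IoU of two axis-aligned bounding boxes is the area of their intersection divided by the area of their union. The sketch below computes it and shows how two visibly different errors (a horizontal and a vertical shift) can receive the exact same score, which is the kind of ambiguity the survey probes; the example boxes are made up for illustration.

    ```python
    # Standard IoU for axis-aligned boxes in (x1, y1, x2, y2) format.
    def iou(box_a, box_b):
        """Return the Intersection over Union of two bounding boxes."""
        # Coordinates of the intersection rectangle.
        ix1 = max(box_a[0], box_b[0])
        iy1 = max(box_a[1], box_b[1])
        ix2 = min(box_a[2], box_b[2])
        iy2 = min(box_a[3], box_b[3])
        # Clamp to zero when the boxes do not overlap.
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # Two different localization errors with identical IoU (~0.33):
    print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # prediction shifted right
    print(iou((0, 0, 10, 10), (0, 5, 10, 15)))  # prediction shifted down
    ```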

    Provenance-driven Representation of Crowdsourcing Data for Efficient Data Analysis

    Crowdsourcing has proved to be a feasible way of harnessing human computation for solving complex problems. However, crowdsourcing frequently faces various challenges: data handling, task reusability, and platform selection. Domain scientists rely on eScientists to find solutions for these challenges. CrowdTruth is a framework that builds on existing crowdsourcing platforms and provides an enhanced way to manage crowdsourcing tasks across platforms, offering solutions to commonly faced challenges. Provenance modeling provides a means for documenting and examining scientific workflows. CrowdTruth keeps a provenance trace of the data flow through the framework, making it possible to trace how data was transformed, and by whom, to reach its final state. In this way, eScientists have a tool to determine the impact that crowdsourcing has on enhancing their data.
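
    As a rough sketch of what such a provenance trace could look like, the snippet below models each transformation as a record linking an output artifact to its input, the activity that produced it, and the responsible agent. The field names loosely follow the W3C PROV entity/activity/agent vocabulary and are assumptions for illustration, not CrowdTruth's actual data model.

    ```python
    # Illustrative provenance record for a crowdsourcing workflow.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceRecord:
        entity: str        # artifact produced, e.g. "judgment-42"
        derived_from: str  # input artifact, e.g. "task-unit-7"
        activity: str      # transformation applied, e.g. "annotate"
        agent: str         # who performed it, e.g. "worker-123"
        timestamp: datetime

    # Walking derived_from links backwards reconstructs how a final
    # result was reached, and by whom.
    trace = [
        ProvenanceRecord("task-unit-7", "dataset-raw", "generate_units",
                         "crowdtruth", datetime.now(timezone.utc)),
        ProvenanceRecord("judgment-42", "task-unit-7", "annotate",
                         "worker-123", datetime.now(timezone.utc)),
    ]
    ```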

    5th Workshop on Explainable User Models and Personalised Systems (ExUM)

    Adaptive and personalized systems have become pervasive technologies, gradually playing an increasingly important role in our daily lives. Indeed, we are now used to interacting with algorithms that help us in several scenarios, ranging from services that suggest music or movies to personal assistants that proactively support us in complex decision-making tasks. As the importance of such technologies in our everyday lives grows, it is fundamental that the internal mechanisms that guide these algorithms are as clear as possible. It is not by chance that the EU General Data Protection Regulation (GDPR) emphasized the users' right to explanation when facing intelligent systems. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the personalization strategy (e.g., recommendation accuracy) at the expense of model explainability. The workshop aims to provide a forum for discussing problems, challenges, and innovative research approaches in this area by investigating the role of transparency and explainability in recent methodologies for building user models or developing personalized and adaptive systems.