9 research outputs found

    Someone really wanted that song but it was not me!: Evaluating Which Information to Disclose in Explanations for Group Recommendations

    No full text
    Explanations can be used to provide transparency in recommender systems (RSs). However, when presenting a shared explanation to a group, we need to balance users' need for privacy with their need for transparency. This is particularly challenging when group members have highly diverging tastes and individuals are confronted with items they do not like, for the benefit of the group. This paper investigates which information people would like to disclose in explanations for group recommendations in the music domain.

    Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces

    No full text
    Diversity in personalized news recommender systems is often defined as dissimilarity and operationalized in terms of topic diversity (e.g., corona versus farmers' strike). Diversity in news media, however, is understood as multiperspectivity (e.g., different opinions on corona measures), and is arguably a key responsibility of the press in a democratic society. While viewpoint diversity is often considered synonymous with source diversity in communication science, in this paper we take a computational view. We operationalize the notion of framing, adopted from communication science, and apply it to re-rank topic-relevant recommendation lists, forming the basis of a novel viewpoint diversification method. Our offline evaluation indicates that the proposed method can enhance the viewpoint diversity of recommendation lists according to a diversity metric from the literature. In an online study on the Blendle platform, a Dutch news aggregator, with more than 2,000 users, we found that users are willing to consume viewpoint-diverse news recommendations. We also found that presentation characteristics significantly influence the reading behaviour of diverse recommendations. These results suggest that future research on the presentation aspects of recommendations can be just as important as novel viewpoint diversification methods for truly achieving multiperspectivity in online news environments.
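
    As a rough illustration of the kind of re-ranking step described above (the paper's exact method and frame taxonomy are not given here), a minimal greedy sketch might look as follows; all names, frame labels, and scores are hypothetical:

    # Hypothetical sketch of frame-based viewpoint diversification by greedy
    # re-ranking; not the paper's actual algorithm or frame taxonomy.
    from dataclasses import dataclass

    @dataclass
    class Article:
        title: str
        relevance: float      # score from the base recommender
        frame: str            # framing label, e.g. "economic", "morality"

    def rerank_for_viewpoint_diversity(candidates, k=10, trade_off=0.5):
        """Greedily pick k articles, balancing relevance against frame coverage."""
        selected, covered_frames = [], set()
        pool = sorted(candidates, key=lambda a: a.relevance, reverse=True)
        while pool and len(selected) < k:
            def gain(article):
                novelty = 1.0 if article.frame not in covered_frames else 0.0
                return (1 - trade_off) * article.relevance + trade_off * novelty
            best = max(pool, key=gain)
            selected.append(best)
            covered_frames.add(best.frame)
            pool.remove(best)
        return selected

    articles = [
        Article("Farm subsidies explained", 0.90, "economic"),
        Article("Costs of the new policy", 0.85, "economic"),
        Article("Why the strike is morally justified", 0.60, "morality"),
    ]
    # The lower-ranked morality-framed piece displaces the second economic one.
    print([a.title for a in rerank_for_viewpoint_diversity(articles, k=2)])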

    Humans Disagree With the IoU for Measuring Object Detector Localization Error

    No full text
    The localization quality of automatic object detectors is typically evaluated with the Intersection over Union (IoU) score. In this work, we show that humans have a different view on localization quality. To evaluate this, we conduct a survey with more than 70 participants. The results show that, for localization errors with the exact same IoU score, humans do not necessarily consider these errors equal and may express a preference for one over the other. Our work is the first to evaluate IoU with humans and makes it clear that relying on IoU scores alone to evaluate localization errors might not be sufficient.
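
    For reference, IoU is simply the area of overlap between two boxes divided by the area of their union. The sketch below (with made-up boxes) shows how two quite different localization errors can receive the same score, which is exactly the situation where human judgments may diverge:

    def iou(box_a, box_b):
        """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    ground_truth = (0, 0, 100, 100)
    shifted = (20, 0, 120, 100)       # horizontally shifted box, IoU ~ 0.667
    cropped = (0, 0, 100, 66.6667)    # vertically cropped box, also IoU ~ 0.667
    print(iou(ground_truth, shifted), iou(ground_truth, cropped))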

    Comprehensive viewpoint representations for a deeper understanding of user interactions with debated topics

    Get PDF
    Research in the area of human information interaction (HII) typically represents viewpoints on debated topics in a binary fashion, as either against or in favor of a given topic (e.g., the feminist movement). This simple taxonomy, however, greatly reduces the latent richness of viewpoints and thereby limits the potential of research and practical applications in this field. Work in the communication sciences has already demonstrated that viewpoints can be represented in much more comprehensive ways, which could enable a deeper understanding of users' interactions with debated topics online. For instance, a viewpoint's stance usually has a degree of strength (e.g., mild or strong), and, even if two viewpoints support or oppose something to the same degree, they may use different logics of evaluation (i.e., underlying reasons). In this paper, we draw from communication science practice to propose a novel, two-dimensional way of representing viewpoints that incorporates a viewpoint's stance degree as well as its logic of evaluation. We show in a case study of tweets on debated topics how our proposed viewpoint label can be obtained via crowdsourcing with acceptable reliability. By analyzing the resulting data set and conducting a user study, we further show that the two-dimensional viewpoint representation we propose allows for more meaningful analyses and diversification interventions compared to current approaches. Finally, we discuss what this novel viewpoint label implies for HII research and how obtaining it may be made cheaper in the future.
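
    A minimal sketch of how such a two-dimensional viewpoint label could be encoded is shown below; the seven-point stance scale and the example logics of evaluation are assumptions for illustration, not the paper's exact taxonomy:

    # Hypothetical encoding of a two-dimensional viewpoint label:
    # stance degree plus logic of evaluation.
    from dataclasses import dataclass
    from enum import Enum

    class LogicOfEvaluation(Enum):
        MORAL = "moral"          # appeals to right and wrong
        ECONOMIC = "economic"    # appeals to costs and benefits
        PERSONAL = "personal"    # appeals to personal experience

    @dataclass
    class ViewpointLabel:
        stance_degree: int       # e.g. -3 (strongly opposing) .. +3 (strongly supporting)
        logic: LogicOfEvaluation

        def __post_init__(self):
            if not -3 <= self.stance_degree <= 3:
                raise ValueError("stance_degree must lie in [-3, 3]")

    # Two tweets may share the same stance degree yet differ in their logic:
    a = ViewpointLabel(stance_degree=2, logic=LogicOfEvaluation.MORAL)
    b = ViewpointLabel(stance_degree=2, logic=LogicOfEvaluation.ECONOMIC)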

    You Do Not Decide for Me!: Evaluating Explainable Group Aggregation Strategies for Tourism

    No full text
    Most recommender systems propose items to individual users. However, in domains such as tourism, people often consume items in groups rather than individually. The differing individual preferences within such a group can be difficult to resolve, and compromises often need to be made. Social choice strategies can be used to aggregate the preferences of individuals. We evaluated two explainable modified preference aggregation strategies in a between-subjects study (n=200) and compared them with two explainable baseline group strategies, in two scenarios: high divergence (group members with different travel preferences) and low divergence (group members with similar travel preferences). Generally, all investigated aggregation strategies performed well in terms of perceived individual and group satisfaction and perceived fairness. The results also indicate that participants were sensitive to a dictator-based strategy, which negatively affected both their individual and group satisfaction compared to the other strategies.
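
    The abstract does not spell out the modified strategies, so the sketch below only shows standard social choice baselines often used in group recommendation (average, least misery, and a dictatorship strategy of the kind participants reacted to); the ratings are invented:

    # Standard social choice aggregation strategies for group recommendations;
    # a textbook sketch, not the paper's modified strategies.
    def average_strategy(ratings):
        """ratings: {item: [rating per group member]} -> aggregated score per item."""
        return {item: sum(r) / len(r) for item, r in ratings.items()}

    def least_misery_strategy(ratings):
        """Score each item by the least satisfied group member."""
        return {item: min(r) for item, r in ratings.items()}

    def dictatorship_strategy(ratings, dictator_index=0):
        """Adopt a single member's preferences for the whole group."""
        return {item: r[dictator_index] for item, r in ratings.items()}

    group_ratings = {"beach trip": [5, 2, 4], "city tour": [4, 4, 4]}
    print(average_strategy(group_ratings))       # {'beach trip': 3.67, 'city tour': 4.0}
    print(least_misery_strategy(group_ratings))  # {'beach trip': 2, 'city tour': 4}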

    Design Implications for Explanations: A Case Study on Supporting Reflective Assessment of Potentially Misleading Videos

    No full text
    Online videos have become a prevalent means for people to acquire information. Videos, however, are often polarized, misleading, or cover topics on which people have different, contradictory views. In this work, we introduce natural language explanations to stimulate more deliberate reasoning about videos and to raise users' awareness of potentially deceptive or biased information. With these explanations, we aim to support users in actively deciding on, and reflecting on, the usefulness of the videos. We generate the explanations through an end-to-end pipeline that extracts reflection triggers, so that users receive additional information about the video based on its source, covered topics, communicated emotions, and sentiment. In a between-subjects user study, we examine the effect of showing these explanations for videos on three controversial topics. In addition, we assess the users' alignment with the video's message and how strong their beliefs about the topic are. Our results indicate that respondents' alignment with the video's message is critical for evaluating the video's usefulness. Overall, the explanations were found to be useful and of high quality. While the explanations do not influence the perceived usefulness of the videos compared to only seeing the video, people with an extremely negative alignment with a video's message perceived it as less useful (with or without explanations) and felt more confident in their assessment. We relate our findings to cognitive dissonance, since users seem to be less receptive to explanations when the video's message strongly challenges their beliefs. Given these findings, we provide a set of design implications for explanations, grounded in theories on reducing cognitive dissonance, in light of raising awareness about online deception.
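
    As a very rough sketch of how such reflection triggers might be assembled into a natural-language explanation (the field names and template are illustrative assumptions, not the paper's actual pipeline):

    # Hypothetical assembly of reflection triggers into an explanation.
    from dataclasses import dataclass

    @dataclass
    class ReflectionTriggers:
        source: str          # channel or publisher of the video
        topics: list         # covered topics extracted from the transcript
        emotions: list       # emotions communicated in the video
        sentiment: str       # overall sentiment, e.g. "negative"

    def explain(triggers: ReflectionTriggers) -> str:
        return (
            f"This video was published by {triggers.source}. "
            f"It covers {', '.join(triggers.topics)}, "
            f"expresses {', '.join(triggers.emotions)} emotions, "
            f"and its overall sentiment is {triggers.sentiment}. "
            "Consider whether the source and tone affect how much you trust it."
        )

    print(explain(ReflectionTriggers(
        source="an unverified channel",
        topics=["vaccination"],
        emotions=["fear"],
        sentiment="negative",
    )))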

    A Checklist to Combat Cognitive Biases in Crowdsourcing

    Get PDF
    Recent research has demonstrated that cognitive biases such as the confirmation bias or the anchoring effect can negatively affect the quality of crowdsourced data. In practice, however, such biases go unnoticed unless they are specifically assessed or controlled for. Task requesters need to ensure that task workflow and design choices do not trigger workers' cognitive biases. Moreover, to facilitate the reuse of crowdsourced data collections, practitioners can benefit from understanding whether, and which, cognitive biases may be associated with the data. To this end, we propose a 12-item checklist, adapted from business psychology, to combat cognitive biases in crowdsourcing. We demonstrate the practical application of this checklist in a case study on viewpoint annotations for search results. Through a retrospective analysis of relevant crowdsourcing research published at HCOMP in 2018, 2019, and 2020, we show that cognitive biases may often affect crowd workers but are typically not considered potential sources of poor data quality. The checklist we propose is a practical tool that requesters can use to improve their task designs and to appropriately describe the potential limitations of collected data. It contributes to a body of efforts towards making human-labeled data more reliable and reusable.

    Workshop on Explainable User Models and Personalized Systems (ExUM 2021)

    No full text
    Adaptive and personalized systems have become pervasive technologies that play an increasingly important role in our daily lives. Indeed, we are now used to interacting every day with algorithms that help us in several scenarios, ranging from services that suggest music to listen to or movies to watch, to personal assistants able to proactively support us in complex decision-making tasks. As the importance of such technologies in our everyday lives grows, it is fundamental that the internal mechanisms guiding these algorithms be as clear as possible. Unfortunately, current research tends to go in the opposite direction: most approaches try to maximize the effectiveness of the personalization strategy (e.g., recommendation accuracy) at the expense of the explainability and transparency of the model. The main research question that arises from this scenario is simple and straightforward: how can we deal with this dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? The workshop aims to provide a forum for discussing these problems, challenges, and innovative research approaches in the area by investigating the role of transparency and explainability in recent methodologies for building user models and developing personalized and adaptive systems.

    A Survey of Crowdsourcing in Medical Image Analysis

    Get PDF
    Rapid advances in image processing capabilities have been seen across many domains, fostered by the application of machine learning algorithms to "big data". However, within the realm of medical image analysis, advances have been curtailed, in part, by the limited availability of large-scale, well-annotated datasets. One of the main reasons for this is the high cost often associated with producing large amounts of high-quality meta-data. Recently, there has been growing interest in the application of crowdsourcing for this purpose, a technique that has proven effective for creating large-scale datasets across a range of disciplines, from computer vision to astrophysics. Despite the growing popularity of this approach, there has not yet been a comprehensive literature review to provide guidance to researchers considering crowdsourcing methodologies in their own medical imaging analysis. In this survey, we review studies applying crowdsourcing to the analysis of medical images, published prior to July 2018. We identify common approaches, challenges, and considerations, providing guidance useful to researchers adopting this approach. Finally, we discuss future opportunities for development within this emerging domain.