6 research outputs found

    A Study on User-Controllable Social Exploratory Search

    Information-seeking tasks with learning or investigative purposes are usually referred to as exploratory search. Exploratory search unfolds as a dynamic process in which the user, amidst navigation, trial and error, and on-the-fly selections, gathers and organizes information (resources). A range of innovative interfaces with increased user control have been developed to support the exploratory search process. In this work we present our attempt to increase the power of exploratory search interfaces by applying ideas of social search, i.e., leveraging information left by past users of information systems. Social search technologies are highly popular nowadays, especially for improving ranking. However, current approaches to social ranking do not allow users to decide to what extent social information should be taken into account when ranking results. This paper presents an interface that integrates social search functionality into an exploratory search system in a user-controlled way that is consistent with the nature of exploratory search. The interface incorporates control features that allow the user to (i) express information needs by selecting keywords and (ii) express preferences for incorporating social wisdom based on tag matching and user similarity. The interface promotes search transparency through color-coded stacked bars and rich tooltips. In an online study investigating system accuracy and subjective aspects with a structural model, we found that, when users actively interacted with all of its control features, the hybrid system outperformed a baseline content-based-only tool and left users more satisfied.
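    The user-controlled blending of content-based and social relevance described above can be sketched as a weighted fusion of score components. The component names (`content`, `tags`, `similar`) and the normalization scheme below are illustrative assumptions, not the paper's actual implementation:

    ```python
    def rank_items(items, weights):
        """Rank items by a user-weighted blend of relevance components.

        Each item carries pre-computed relevance components in [0, 1]:
        'content' (keyword match), 'tags' (social tag match), and
        'similar' (relevance to similar users); `weights` holds the
        user's control settings for the same keys.
        """
        total = sum(weights.values()) or 1.0

        def score(item):
            return sum(weights[k] * item[k] for k in weights) / total

        return sorted(items, key=score, reverse=True)

    items = [
        {"id": "a", "content": 0.9, "tags": 0.1, "similar": 0.2},
        {"id": "b", "content": 0.4, "tags": 0.8, "similar": 0.7},
    ]

    # With content-only weights, item "a" ranks first; raising the
    # social controls promotes item "b" instead.
    content_only = rank_items(items, {"content": 1.0, "tags": 0.0, "similar": 0.0})
    social_heavy = rank_items(items, {"content": 0.2, "tags": 1.0, "similar": 1.0})
    ```

    The per-component products in such a fusion are also what a transparency feature like the paper's color-coded stacked bars would visualize.
    
    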

    Explaining recommendations in an interactive hybrid social recommender

    Hybrid social recommender systems use social relevance from multiple sources to recommend relevant items or people to users. To make hybrid recommendations more transparent and controllable, several researchers have explored interactive hybrid recommender interfaces, which allow for a user-driven fusion of recommendation sources. In this line of work, the intelligent user interface has been investigated as an approach to increase transparency and improve the user experience. In this paper, we attempt to further promote the transparency of recommendations by augmenting an interactive hybrid recommender interface with several types of explanations. We evaluated user behavior patterns and subjective feedback in a within-subject study (N=33). Results from the evaluation show the effectiveness of the proposed explanation models. The post-treatment survey indicates a significant improvement in perceived explainability, but this improvement comes with a lower degree of perceived controllability.

    Evaluating Visual Explanations for Similarity-Based Recommendations: User Perception and Performance

    Recommender systems help users reduce information overload. In recent years, enhancing explainability in recommender systems has drawn increasing attention in the field of Human-Computer Interaction (HCI). However, it is not clear whether a user-preferred explanation interface can maintain the same level of performance while users are exploring or comparing recommendations. In this paper, we introduce a participatory process for designing explanation interfaces with multiple explanatory goals for three similarity-based recommendation models, and we investigate the relation between user perception and performance in two user studies. In the first study (N=15), we conducted card-sorting and semi-structured interviews to identify user-preferred interfaces. In the second study (N=18), we carried out a performance-focused evaluation of six explanation interfaces. The results suggest that a user-preferred interface may not guarantee the same level of performance.
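    A similarity-based recommendation model of the kind evaluated above can be sketched as nearest-neighbor ranking over sparse feature vectors; cosine similarity and the tag names here are illustrative assumptions, not the paper's specific models:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two sparse feature vectors (dicts)."""
        dot = sum(w * v.get(k, 0.0) for k, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def recommend(anchor, items, k=3):
        """Rank candidate items by similarity to an anchor the user
        already knows; the per-item score is the raw material a
        similarity-explanation interface would visualize."""
        return sorted(items, key=lambda it: cosine(anchor, it["features"]),
                      reverse=True)[:k]

    anchor = {"python": 1.0, "recsys": 1.0}
    items = [
        {"id": "p", "features": {"python": 1.0, "recsys": 0.5}},
        {"id": "q", "features": {"cooking": 1.0}},
    ]
    top = recommend(anchor, items, k=2)  # "p" ranks above "q"
    ```
    
    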

    Exploring and Promoting Diagnostic Transparency and Explainability in Online Symptom Checkers

    Online symptom checkers (OSCs) are widely used intelligent systems in health contexts such as primary care, remote healthcare, and epidemic control. OSCs use algorithms such as machine learning to facilitate self-diagnosis and triage based on symptoms entered by healthcare consumers. However, the lack of transparency and comprehensibility in intelligent systems can lead to unintended consequences such as misleading users, especially in high-stakes areas such as healthcare. In this paper, we attempt to enhance diagnostic transparency by augmenting OSCs with explanations. We first conducted an interview study (N=25) with users of existing OSCs to specify their needs for explanations. Then, we designed a COVID-19 OSC enhanced with three types of explanations. Our lab-controlled user study (N=20) found that explanations can significantly improve the user experience in multiple respects. We discuss how explanations are interwoven into the conversation flow and present implications for future OSC designs.

    The effects of controllability and explainability in a social recommender system

    In recent years, researchers in the field of recommender systems have explored a range of advanced interfaces to improve user interactions with recommender systems. Some of the major research ideas explored in this new area include the explainability and controllability of recommendations. Controllability enables end users to participate in the recommendation process by providing various kinds of input. Explainability focuses on making the recommendation process and the reasons behind specific recommendations clearer to users. While each of these approaches contributes to making traditional "black-box" recommendations more attractive and acceptable to end users, little is known about how these approaches work together. In this paper, we investigate the effects of adding user control and visual explanations in the specific context of an interactive hybrid social recommender system. We present Relevance Tuner+, a hybrid recommender system that allows users to control the fusion of multiple recommender sources while also offering explanations of both the fusion process and each of the source recommendations. We also report the results of a controlled study (N=50) that explores the impact of controllability and explainability in this context.
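    A fusion explanation of the kind the abstract describes can be sketched as a decomposition of an item's fused score into per-source contributions; the source names and slider values below are hypothetical, not taken from Relevance Tuner+:

    ```python
    def explain_fusion(source_scores, weights):
        """Decompose one item's fused score into per-source contributions.

        source_scores: relevance of the item from each recommender source.
        weights: the user's control setting for each source.
        Returns (fused, contributions); the contributions sum to the
        fused score, which is exactly the breakdown a fusion
        explanation can present to the user.
        """
        contributions = {s: weights[s] * r for s, r in source_scores.items()}
        return sum(contributions.values()), contributions

    fused, parts = explain_fusion(
        {"coauthor": 0.8, "interest": 0.3, "citation": 0.5},
        {"coauthor": 1.0, "interest": 0.5, "citation": 0.0},
    )
    # `parts` shows where the score comes from: the co-author source
    # contributes 0.8, while the disabled citation source contributes 0.0.
    ```

    Because the breakdown is additive, the same data supports both control (adjusting a weight) and explanation (showing each source's share).
    
    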

    Navigation-by-preference: A new conversational recommender with preference-based feedback

    We present Navigation-by-Preference, n-by-p, a new conversational recommender that uses what the literature calls preference-based feedback. Given a seed item, the recommender helps the user navigate through item space to find an item that aligns with her long-term preferences (revealed by her user profile) but also satisfies her ephemeral, short-term preferences (revealed by the feedback she gives during the dialog). Different from previous work on preference-based feedback, n-by-p does not assume structured item descriptions (such as sets of attribute-value pairs) but works instead on unstructured item descriptions (such as sets of keywords or tags), thus extending preference-based feedback to new domains where structured item descriptions are not available. Different too is that it can be configured to ignore long-term preferences or to take them into account, to work only on positive feedback or to also use negative feedback, and to take previous rounds of feedback into account or to use just the most recent feedback. We use an offline experiment with simulated users to compare 60 configurations of n-by-p. We find that a configuration that includes long-term preferences, uses both positive and negative feedback, and uses previous rounds of feedback is the one with the highest hit-rate. It also obtains the best survey responses and the lowest measures of effort in a trial with real users that we conducted with a web-based system. Notable too is that the user trial has a novel protocol for experimenting with short-term preferences.
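    Preference-based feedback over tag sets, as described above, can be sketched as accumulating per-tag weights from positive and negative feedback and re-scoring candidates each round. The tag names, the unit long-term bonus, and the additive scoring are illustrative assumptions, not the n-by-p algorithm itself:

    ```python
    from collections import Counter

    def update_preferences(prefs, liked, disliked):
        """Fold one round of preference-based feedback into tag weights.

        prefs: Counter mapping tag -> weight accumulated over earlier
        rounds (keeping prior rounds corresponds to a configuration
        that remembers feedback history).
        liked / disliked: lists of tag sets for items the user moved
        toward / away from in this round.
        """
        for tags in liked:
            prefs.update(tags)
        for tags in disliked:
            prefs.subtract(tags)
        return prefs

    def score(item_tags, prefs, long_term=frozenset()):
        """Score an item by its feedback-weighted tags, plus a unit
        bonus per tag also present in the long-term profile."""
        return (sum(prefs.get(t, 0) for t in item_tags)
                + sum(1 for t in item_tags if t in long_term))

    prefs = update_preferences(Counter(), liked=[{"jazz", "live"}],
                               disliked=[{"pop"}])
    # An item tagged {"jazz", "live"} now outscores one tagged
    # {"jazz", "pop"}, steering navigation toward the liked tags.
    ```

    Dropping the `long_term` argument or the `disliked` list corresponds to the configurations the abstract mentions that ignore long-term preferences or negative feedback.
    
    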