41 research outputs found
Explaining Social Recommendations to Casual Users: Design Principles and Opportunities
Recommender systems have become popular in recent years, and ordinary users are increasingly likely to rely on such services when completing various daily tasks. The need to design and build explainable recommender interfaces is growing rapidly. Most designs of such explanations are intended to reflect the underlying algorithms by which the recommendations are computed. These approaches have been shown to be useful for achieving system transparency and trust. However, little is known about how to design explanation interfaces for casual (non-expert) users to achieve different explanatory goals. As a first step toward understanding the relevant user interface design factors, we conducted an international (across 13 countries) online survey of 14 active users of a social recommender system. This study captures user feedback in the field and frames it in terms of design principles and opportunities.
The effects of transparency on perceived and actual competence of a content-based recommender.
Perceptions of a system's competence influence acceptance of that system [31]. Ideally, users' perception of competence matches the actual competence of a system. This paper investigates the relation between the actual and perceived competence of transparent Semantic Web recommender systems that explain recommendations in terms of shared item concepts. We report an experiment comparing non-transparent and transparent versions of a content-based recommender. Results indicate that in the transparent condition, perceived competence and actual competence (specifically, recall) were related, while in the non-transparent condition they were not. Providing insight into which aspects of items triggered their recommendation, by showing the concepts that formed the basis for a recommendation, gave users a better assessment of how well the system worked.
Recommenders' Influence on Buyers' Decision Process
Online stores offer an increasingly large set of products. Interactive decision aids are becoming indispensable tools assisting users as they search for an ideal product to purchase. For an e-commerce website, adopting the correct tools can affect its survival: effective product recommender tools are increasingly recognized by online stores as an effective means to sell more products; on the other hand, sites that do not employ intelligent tools will not only see poor purchase volumes but will also experience less traffic, because consumers are more likely to return to a site employing recommender systems. This paper presents ongoing research in understanding the impact of various decision aids on users' interaction behaviors and their subjective perceptions of these aids. In the current experiment, we employed an eye tracker in an in-depth user study to understand the influence of recommenders on how users select items for the basket set. We collected more than 20,300 fixation data points in 3,648 areas of interest. Our studies show that while users still rely on product filtering tools, the use of recommenders is becoming more prominent in helping them construct the basket set and increases monotonically over time.
Evaluating Visual Explanations for Similarity-Based Recommendations: User Perception and Performance
Recommender systems help users reduce information overload. In recent years, enhancing explainability in recommender systems has drawn more and more attention in the field of Human-Computer Interaction (HCI). However, it is not clear whether a user-preferred explanation interface can maintain the same level of performance while users are exploring or comparing the recommendations. In this paper, we introduce a participatory process for designing explanation interfaces with multiple explanatory goals for three similarity-based recommendation models. We investigate the relations between user perception and performance with two user studies. In the first study (N=15), we conducted a card-sorting exercise and semi-structured interviews to identify the user-preferred interfaces. In the second study (N=18), we carried out a performance-focused evaluation of six explanation interfaces. The results suggest that the user-preferred interface may not guarantee the same level of performance.
Soul of a new machine: Self-learning algorithms in public administration
Big data sets in conjunction with self-learning algorithms are becoming increasingly important in public administration. A growing body of literature demonstrates that the use of such technologies poses fundamental questions about the way in which predictions are generated, and the extent to which such predictions may be used in policy making. Complementing other recent work, the goal of this article is to open the machine's black box to understand and critically examine how self-learning algorithms gain agency by transforming raw data into policy recommendations that are then used by policy makers. I identify five major concerns and discuss the implications for policy making.
Evaluation of explanation interfaces in recommender systems
Explanation interfaces become a useful tool in systems that have a large amount of content for users to evaluate. These interfaces help undecided users, as well as those who regard such systems as intelligent black boxes. The systems present recommendations to users based on different learning models. In this paper, we present the different goals of explanation interfaces and some of the criteria by which they can be evaluated, as well as a proposed set of metrics for recording experimental results. Finally, we report the main results of a study with real users and their interaction with e-commerce systems. Among the main findings, we highlight the positive impact on interaction time with the applications and on acceptance of the received recommendations.
Evaluating the effectiveness of explanations for recommender systems: Methodological issues and empirical studies on the impact of personalization
Incorporating reliability measurements into the predictions of a recommender system
In this paper we introduce the idea of using a reliability measure associated with the predictions made by recommender systems based on collaborative filtering. This reliability measure is based on the usual notion that the more reliable a prediction, the less liable it is to be wrong. Here we will define a general reliability measure suitable for any arbitrary recommender system. We will also show a method for obtaining specific reliability measures fitting the needs of different specific recommender systems.
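The general idea can be sketched in a few lines; the following is an illustration under assumptions, not the paper's actual measure. Here a hypothetical `predict_with_reliability` ties confidence to agreement among the neighbor ratings behind a collaborative-filtering prediction: the less the neighbors disagree, the less liable the prediction is to be wrong.

```python
from statistics import mean, pvariance

def predict_with_reliability(neighbor_ratings):
    """Predict an item rating from neighbor ratings and attach a
    simple reliability score (hypothetical, for illustration only):
    low variance among neighbors -> high reliability."""
    prediction = mean(neighbor_ratings)
    # Map neighbor variance into (0, 1]: identical ratings give 1.0.
    reliability = 1.0 / (1.0 + pvariance(neighbor_ratings))
    return prediction, reliability

# Neighbors agree closely -> high reliability.
p1, r1 = predict_with_reliability([4, 4, 5])
# Neighbors disagree widely -> same kind of prediction, lower reliability.
p2, r2 = predict_with_reliability([1, 3, 5])
```

Any monotone mapping from an error-related signal to a score would serve the same purpose; the variance-based formula above is just one simple choice.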
A collaborative filtering approach to mitigate the new user cold start problem.
The new user cold start issue represents a serious problem in recommender systems, as it can lead to the loss of new users who decide to stop using the system due to the lack of accuracy in the recommendations received in that first stage, in which they have not yet cast a significant number of votes with which to feed the recommender system's collaborative filtering core. For this reason it is particularly important to design new similarity metrics which provide greater precision in the results offered to users who have cast few votes. This paper presents a new similarity measure refined using optimization based on neural learning, which exceeds the best results obtained with current metrics. The metric has been tested on the Netflix and Movielens databases, obtaining important improvements in the measures of accuracy, precision and recall when applied to new user cold start situations. The paper includes the mathematical formalization describing how to obtain the main quality measures of a recommender system using leave-one-out cross validation.
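The leave-one-out procedure mentioned above can be sketched as follows; this is a minimal illustration, not the paper's formalization, and it reports mean absolute error rather than the paper's precision and recall. The function name `leave_one_out_mae`, the ratings dictionary, and the mean-baseline predictor are all assumed names for the sketch.

```python
def leave_one_out_mae(ratings, predict):
    """Leave-one-out evaluation sketch: hide each known rating in
    turn, predict it from the remaining ones, and average the
    absolute error.  `ratings` maps item -> rating; `predict` takes
    the held-out item and the remaining ratings dict."""
    errors = []
    for item, true_rating in ratings.items():
        rest = {i: r for i, r in ratings.items() if i != item}
        errors.append(abs(predict(item, rest) - true_rating))
    return sum(errors) / len(errors)

# Trivial baseline predictor: the mean of the remaining ratings.
mean_predictor = lambda item, rest: sum(rest.values()) / len(rest)
mae = leave_one_out_mae({"a": 4, "b": 4, "c": 4}, mean_predictor)
# All ratings are equal, so every held-out prediction is exact: MAE == 0.
```

The same loop works for precision and recall by thresholding the held-out predictions into relevant/non-relevant instead of averaging errors.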