
    Interaction design guidelines on critiquing-based recommender systems

    A critiquing-based recommender system acts like an artificial salesperson. It engages users in a conversational dialog in which they give feedback, in the form of critiques, on the sample items shown to them. The feedback, in turn, enables the system to refine its understanding of the user's preferences and its prediction of what the user truly wants. The system can then recommend products that may better match the user's interest in the next interaction cycle. In this paper, we report an extensive investigation comparing various approaches to devising critiquing opportunities in these recommender systems. More specifically, we investigated two major design elements that any critiquing-based recommender system requires: critiquing coverage (one vs. multiple items returned for critiquing during each recommendation cycle) and critiquing aid (system-suggested critiques, i.e., a set of critique suggestions for users to select, vs. a user-initiated critiquing facility, i.e., support for users to create critiques on their own). Through a series of three user trials, we measured how real users reacted to systems with varied setups of the two elements. In particular, we found that giving users the choice of critiquing one of multiple items (as opposed to just one) significantly increases users' decision accuracy (particularly in the first recommendation cycle) and saves their objective effort (in the later critiquing cycles). As for critiquing aids, the hybrid design combining system-suggested critiques with user-initiated critiquing support performs best at inspiring users' decision confidence and increasing their intention to return, in comparison with either exclusive approach on its own. The results of our studies therefore shed light on design guidelines for finding the sweet spot between user initiative and system support in the development of an effective and user-centric critiquing-based recommender system.
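
    To make the interaction loop concrete, here is a minimal sketch of one critiquing cycle, assuming items are plain dictionaries of numeric features; all function and field names are illustrative, not taken from the systems studied in the paper.

        # One critiquing cycle: show k items, apply the user's critique,
        # and narrow the candidate set for the next cycle.

        def apply_critique(candidates, feature, direction, reference):
            """Keep candidates that improve on the reference item's value
            for one feature; direction is "lower" or "higher"."""
            ref_value = reference[feature]
            if direction == "lower":
                return [c for c in candidates if c[feature] < ref_value]
            return [c for c in candidates if c[feature] > ref_value]

        def recommendation_cycle(candidates, shown_k=3):
            shown = candidates[:shown_k]   # coverage: k items, not just one
            # A real system would wait for the user's choice; one round is
            # hard-coded here for illustration.
            picked = shown[0]
            return apply_critique(candidates, "price", "lower", picked)

        laptops = [{"price": 900, "weight": 1.4},
                   {"price": 700, "weight": 2.1},
                   {"price": 1200, "weight": 1.1}]
        print(recommendation_cycle(laptops))   # -> the item cheaper than 900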

    Towards Question-based Recommender Systems

    Conversational and question-based recommender systems have gained increasing attention in recent years, as they enable users to converse with the system and better control recommendations. Nevertheless, research in the field is still limited compared to traditional recommender systems. In this work, we propose a novel question-based recommendation method, Qrec, to assist users in finding items interactively by answering automatically constructed and algorithmically chosen questions. Previous conversational recommender systems ask users to express their preferences over items or item facets. Our model, instead, asks users to express their preferences over descriptive item features. The model is first trained offline by a novel matrix factorization algorithm, and then iteratively updates the user and item latent factors online by a closed-form solution based on the user's answers. Meanwhile, our model infers the underlying user belief and preferences over items to learn an optimal question-asking strategy using Generalized Binary Search, so as to ask a sequence of questions to the user. Our experimental results demonstrate that our proposed matrix factorization model outperforms the traditional Probabilistic Matrix Factorization model. Further, our proposed Qrec model can greatly improve on state-of-the-art baselines, and it is also effective for cold-start user and item recommendations. Comment: accepted by SIGIR 2020.
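
    The question-selection idea can be sketched as follows: under Generalized Binary Search, the next question is the one whose yes/no answer is expected to split the current belief over items most evenly. This is a rough illustration on toy data, not the authors' implementation.

        # Pick the binary feature question with the most balanced expected
        # split under the current belief over items (smaller |2p - 1| is
        # better), then prune items inconsistent with the answer.
        import numpy as np

        def next_question(feature_matrix, belief):
            """feature_matrix[i, f] = 1 if item i has feature f;
            belief is a probability vector over items."""
            p_yes = belief @ feature_matrix   # P(answer "yes") per question
            return int(np.argmin(np.abs(2.0 * p_yes - 1.0)))

        def update_belief(feature_matrix, belief, feature, answer_yes):
            mask = feature_matrix[:, feature] == (1 if answer_yes else 0)
            belief = belief * mask            # drop inconsistent items
            return belief / belief.sum()

        features = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]])
        belief = np.full(4, 0.25)             # uniform initial belief
        q = next_question(features, belief)
        belief = update_belief(features, belief, q, answer_yes=True)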

    Evaluating recommender systems from the user's perspective: survey of the state of the art

    A recommender system is a Web technology that proactively suggests items of interest to users based on their objective behavior or explicitly stated preferences. Evaluations of recommender systems (RS) have traditionally focused on the performance of algorithms. However, many researchers have recently started investigating system effectiveness and evaluation criteria from the user's perspective. In this paper, we survey the state of the art of user experience research in RS by examining how researchers have evaluated design methods that augment an RS's ability to help users find the information or product they truly prefer, interact with the system with ease, and form trust in the RS through transparency, control, and privacy-preserving mechanisms. Finally, we examine how these system design features influence users' adoption of the technology. We summarize existing work concerning three crucial interaction activities between the user and the system: the initial preference elicitation process, the preference refinement process, and the presentation of the system's recommendation results. Additionally, we cover recent evaluation frameworks that measure a recommender system's overall perceived qualities and how these qualities influence users' behavioral intentions. The key results are summarized in a set of design guidelines that can provide useful suggestions to scholars and practitioners concerning the design and development of effective recommender systems. The survey also lays the groundwork for researchers to pursue future topics that have not been covered by existing methods.

    Data-driven decision making in Critique-based recommenders: from a critique to social media data

    In the last decade there have been a large number of proposals in the field of critique-based recommenders. Critique-based recommenders are data-driven in nature, since they use a conversational, cyclical recommendation process to elicit user feedback. In the literature, the proposals differ mainly in two aspects: the source of the data and how this data is analyzed to extract knowledge for providing users with recommendations. In this paper, we propose new algorithms that address these two aspects. Firstly, we propose a new algorithm, called HOR, which integrates several data sources, such as current user preferences (i.e., a critique), product descriptions, previous critiquing sessions by other users, and users' opinions expressed as ratings on social media web sites. Secondly, we propose adding compatibility and weighting scores, which turn user behavior into knowledge, to HOR and to a previous state-of-the-art approach named HGR, to help both algorithms make smarter recommendations. We have evaluated our proposals in two ways: with a simulator and with real users. A comparison of our proposals with state-of-the-art approaches shows that the new recommendation algorithms significantly outperform previous ones.
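
    As a hedged illustration of the general idea (not the published HOR algorithm): several normalized evidence sources can be combined into a single ranking score with tunable weights. All names, weights, and score functions below are assumptions for illustration only.

        # Combine per-source scores in [0, 1] for a candidate item into one
        # weighted score; each source contributes according to its weight.

        def combined_score(item, weights, scores):
            """scores maps source name -> function(item) -> value in [0, 1]."""
            return sum(weights[src] * fn(item) for src, fn in scores.items())

        scores = {
            "critique_compatibility": lambda item: item["compat"],   # fits current critiques
            "session_history":        lambda item: item["history"],  # earlier sessions
            "social_rating":          lambda item: item["rating"] / 5.0,
        }
        weights = {"critique_compatibility": 0.5,
                   "session_history": 0.3,
                   "social_rating": 0.2}

        item = {"compat": 0.9, "history": 0.4, "rating": 4.0}
        print(combined_score(item, weights, scores))  # 0.45 + 0.12 + 0.16 = 0.73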

    A Cognitively Inspired Clustering Approach for Critique-Based Recommenders

    The purpose of recommender systems is to support humans in the purchasing decision-making process. Decision-making is a human activity based on cognitive information. In the field of recommender systems, critiquing has been widely applied as an effective approach for obtaining users' feedback on recommended products. In the last decade, there have been a large number of proposals in the field of critique-based recommenders. These proposals differ mainly in two aspects: the source of the data and how it is mined to provide the user with recommendations. To date, no approach has mined data using an adaptive clustering algorithm to increase the recommender's performance. In this paper, we describe how we added a clustering process to a critique-based recommender, thereby adapting the recommendation process, and how we defined a cognitive user preference model based on the preferences (i.e., defined by critiques) received from the user. We have developed several proposals based on clustering, whose acronyms are MCP, CUM, CUM-I, and HGR-CUM-I. We compare our proposals with two well-known state-of-the-art approaches: incremental critiquing (IC) and history-guided recommendation (HGR). The results of our experiments show that using clustering in a critique-based recommender improves recommendation efficiency, since all the proposals outperform the baseline IC algorithm. Moreover, the performance of the best proposal, HGR-CUM-I, is significantly superior to both the IC and HGR algorithms. Our results indicate that introducing clustering into a critique-based recommender is an appealing option, since it enhances overall efficiency, especially with large data sets.
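
    A minimal sketch of the clustering idea, assuming products are numeric feature vectors: cluster the catalog, then recommend from the cluster whose centroid best matches a critique-derived preference vector. This is not the paper's MCP/CUM code; the data and preference weights are toy assumptions.

        # Cluster the catalog, score each centroid against the preference
        # vector, and restrict recommendation to the best cluster.
        import numpy as np
        from sklearn.cluster import KMeans

        products = np.array([[700, 2.1], [750, 2.0],     # cheap, heavy
                             [1200, 1.1], [1300, 1.0]])  # pricey, light
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(products)

        # Toy preference model: accumulated critiques say "cheaper" matters
        # most, so price gets a large negative weight, weight a small one.
        preference = np.array([-1.0, -0.1])
        cluster_scores = km.cluster_centers_ @ preference
        best = int(np.argmax(cluster_scores))
        recommendations = products[km.labels_ == best]  # items in best cluster
        print(recommendations)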

    The Evaluation of a Hybrid Critiquing System with Preference-based Recommendations Organization

    The critiquing-based recommender system mainly aims to guide users to an accurate and confident decision while requiring a low level of effort from them. We have previously found that a hybrid critiquing system combining the strengths of both system-proposed critiques and a user self-motivated critiquing facility can greatly improve users' subjective perceptions, such as their decision confidence and trusting intentions. In this paper, we continue to investigate how to further reduce users' objective decision effort (e.g., time consumption) in such a system by increasing the critique prediction accuracy of the system-proposed critiques. By means of a real user evaluation, we show that a new hybrid critiquing system design that integrates the preference-based recommendations organization technique for critique suggestion can effectively increase the proposed critiques' application frequency and contribute significantly to saving users' task time and interaction effort.
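
    The paper's preference-based organization technique is not reproduced here; as a stand-in, this sketch ranks candidate unit critiques by support, i.e., the share of remaining products satisfying each critique, which is a common heuristic in the critiquing literature. Features and data are illustrative.

        # Propose "lower price" / "higher rating" style critiques against a
        # reference item and rank them by how many products satisfy each.

        def critique_support(products, reference):
            candidates = []
            for feature, direction in [("price", "lower"), ("rating", "higher")]:
                if direction == "lower":
                    matching = [p for p in products
                                if p[feature] < reference[feature]]
                else:
                    matching = [p for p in products
                                if p[feature] > reference[feature]]
                candidates.append((feature, direction,
                                   len(matching) / len(products)))
            return sorted(candidates, key=lambda c: -c[2])  # highest support first

        products = [{"price": 700, "rating": 3.9}, {"price": 800, "rating": 4.2},
                    {"price": 900, "rating": 4.5}, {"price": 1100, "rating": 4.8}]
        print(critique_support(products, products[2]))
        # -> [('price', 'lower', 0.5), ('rating', 'higher', 0.25)]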

    Evaluating product search and recommender systems for E-commerce environments

    Online systems that help users select the most preferred item from a large electronic catalog are known as product search and recommender systems. Evaluation of the various proposed technologies is essential for further development in this area. This paper describes the design and implementation of two user studies in which a particular product search tool, known as example critiquing, was evaluated against a chosen baseline model. The results confirm that example critiquing significantly reduces users' task time and error rate while increasing decision accuracy. Additionally, the results of the second user study show that a particular implementation of example critiquing also made users more confident about their choices. The main contribution is that, through these two user studies, an evaluation framework of three criteria was identified that can be used for evaluating general product search and recommender systems in e-commerce environments. The two experiments and their procedures also shed light on some of the most important issues to consider when evaluating such tools, such as the preparation of evaluation materials, user task design, the context of evaluation, the criteria, the measures, and the methodology of result analysis.
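
    A sketch of how the three criteria could be computed from session logs, under assumed field names (all hypothetical): decision accuracy is often operationalized as whether the chosen item matches the item the user settles on after reviewing the full catalog, with error rate as its complement.

        # Aggregate task time, decision accuracy / error rate, and a
        # self-reported confidence rating over logged study sessions.
        sessions = [
            {"seconds": 185, "chosen": "A", "true_best": "A", "confidence": 5},
            {"seconds": 240, "chosen": "B", "true_best": "C", "confidence": 3},
            {"seconds": 150, "chosen": "D", "true_best": "D", "confidence": 4},
        ]

        n = len(sessions)
        mean_time = sum(s["seconds"] for s in sessions) / n
        accuracy = sum(s["chosen"] == s["true_best"] for s in sessions) / n
        error_rate = 1.0 - accuracy
        mean_confidence = sum(s["confidence"] for s in sessions) / n
        print(mean_time, accuracy, error_rate, mean_confidence)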

    Evaluating Conversational Recommender Systems: A Landscape of Research

    Conversational recommender systems aim to interactively support online users in their information search and decision-making processes in an intuitive way. With the latest advances in voice-controlled devices, natural language processing, and AI in general, such systems have received increased attention in recent years. Technically, conversational recommenders are usually complex multi-component applications, often consisting of multiple machine learning models and a natural language user interface. Evaluating such a complex system holistically can therefore be challenging, as it requires (i) assessing the quality of the different learning components and (ii) assessing users' perception of the quality of the system as a whole. Thus, a mixed-methods approach is often required, combining objective (computational) and subjective (perception-oriented) evaluation techniques. In this paper, we review common evaluation approaches for conversational recommender systems, identify possible limitations, and outline future directions towards more holistic evaluation practices.
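
    To illustrate the mixed-methods idea, a holistic report might pair component-level (computational) metrics with user-perception (questionnaire) scores in one summary. All metric names and numbers below are placeholders, not results from the paper.

        # Combine objective component metrics with subjective Likert-scale
        # means from a user study into a single evaluation summary.
        objective = {
            "intent_accuracy": 0.91,        # NLU component, held-out test set
            "recommendation_hit@10": 0.34,  # ranking component, offline replay
        }
        subjective = {                      # 1-5 Likert means, user study
            "perceived_quality": 4.1,
            "ease_of_use": 3.8,
        }

        def report(objective, subjective):
            for name, value in {**objective, **subjective}.items():
                print(f"{name:24s} {value}")

        report(objective, subjective)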