227 research outputs found

    Interaction design guidelines on critiquing-based recommender systems

    Get PDF
    A critiquing-based recommender system acts like an artificial salesperson. It engages users in a conversational dialog in which they can give feedback, in the form of critiques, on the sample items shown to them. This feedback enables the system to refine its understanding of the user's preferences and its prediction of what the user truly wants, so that it can recommend products that better match the user's interest in the next interaction cycle. In this paper, we report an extensive investigation comparing various approaches to devising critiquing opportunities in these recommender systems. More specifically, we investigated two design elements essential to a critiquing-based recommender system: critiquing coverage (one vs. multiple items returned for critiquing in each recommendation cycle) and critiquing aid (system-suggested critiques, i.e., a set of critique suggestions for users to select, vs. a user-initiated critiquing facility that lets users create critiques on their own). Through a series of three user trials, we measured how real users reacted to systems with varied setups of these two elements. In particular, we found that giving users the choice of critiquing one of multiple items (as opposed to just one) significantly increases users' decision accuracy (particularly in the first recommendation cycle) and saves objective effort (in later critiquing cycles). As for critiquing aids, the hybrid design combining system-suggested critiques with user-initiated critiquing support performs best at inspiring users' decision confidence and increasing their intention to return, compared with either exclusive approach.
Therefore, the results of our studies shed light on design guidelines for finding the sweet spot between user initiative and system support in the development of an effective, user-centric critiquing-based recommender system.
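The interaction cycle these abstracts describe (recommend, critique, refine) can be sketched in a few lines. The toy catalog, the attribute names, the critique format, and the rule for picking the next recommendation are all illustrative assumptions, not any paper's actual algorithm.

```python
# Minimal sketch of one critiquing cycle over a toy laptop catalog.
# All data and the re-ranking rule are illustrative assumptions.

CATALOG = [
    {"name": "A", "price": 1200, "screen": 15, "weight": 2.0},
    {"name": "B", "price": 900,  "screen": 13, "weight": 1.4},
    {"name": "C", "price": 700,  "screen": 14, "weight": 1.8},
    {"name": "D", "price": 1500, "screen": 17, "weight": 2.5},
]

def apply_critique(reference, critique, items):
    """Keep items satisfying a directional critique on one attribute,
    e.g. ('price', 'lower') relative to the current reference item."""
    attr, direction = critique
    if direction == "lower":
        return [i for i in items if i[attr] < reference[attr]]
    return [i for i in items if i[attr] > reference[attr]]

# One cycle: recommend an item, accept a critique, refine the candidates.
current = CATALOG[0]                                   # initial recommendation
candidates = apply_critique(current, ("price", "lower"), CATALOG)
current = min(candidates, key=lambda i: i["price"])    # next recommendation
print(current["name"])                                 # -> C
```

A real system would rank the surviving candidates by similarity to the reference item and the elicited preference model rather than by a single attribute; the `min` here just stands in for that ranking step.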

    An investigation on the impact of natural language on conversational recommendations

    Get PDF
    In this paper, we investigate the combination of Virtual Assistants and Conversational Recommender Systems (CoRSs) by designing and implementing a framework named ConveRSE for building chatbots that can recommend items from different domains and interact with the user through natural language. A user experiment was carried out to understand how natural language influences both the interaction cost and the recommendation accuracy of a CoRS. Experimental results show that natural language can indeed improve the user experience, but some critical aspects of the interaction must be mitigated appropriately.

    Data-driven decision making in Critique-based recommenders: from a critique to social media data

    Full text link
    In the last decade there have been a large number of proposals in the field of critique-based recommenders. Critique-based recommenders are data-driven in nature, since they use a conversational, cyclical recommendation process to elicit user feedback. In the literature, the proposals differ mainly in two aspects: the source of data, and how this data is analyzed to extract knowledge for providing users with recommendations. In this paper, we propose new algorithms that address both aspects. First, we propose a new algorithm, called HOR, which integrates several data sources, such as current user preferences (i.e., a critique), product descriptions, previous critiquing sessions by other users, and users' opinions expressed as ratings on social media web sites. Second, we propose adding compatibility and weighting scores, which turn user behavior into knowledge, to HOR and to a previous state-of-the-art approach named HGR, to help both algorithms make smarter recommendations. We have evaluated our proposals in two ways: with a simulator and with real users. A comparison with state-of-the-art approaches shows that the new recommendation algorithms significantly outperform previous ones.
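Integrating several data sources, as the abstract describes, usually reduces to blending per-source evidence into one score. The sketch below is a generic weighted linear combination in the spirit of that idea; the source names, the weights, and the linear form are assumptions for illustration, not the actual HOR formula.

```python
# Hedged sketch: blending evidence from a critique, past critiquing
# sessions, and social-media ratings into one recommendation score.
# Sources, weights, and the linear combination are illustrative.

def combined_score(item, weights):
    """Linear blend of normalized evidence from different sources."""
    return (weights["critique"] * item["critique_fit"]       # fit to current critique
            + weights["history"] * item["session_support"]   # support from past sessions
            + weights["social"]  * item["avg_rating"] / 5)   # normalized social rating

items = [
    {"name": "X", "critique_fit": 1.0, "session_support": 0.2, "avg_rating": 3.5},
    {"name": "Y", "critique_fit": 0.8, "session_support": 0.9, "avg_rating": 4.5},
]
weights = {"critique": 0.5, "history": 0.3, "social": 0.2}

best = max(items, key=lambda i: combined_score(i, weights))
print(best["name"])                                          # -> Y
```

Item Y wins here because its strong history and social evidence outweigh X's slightly better critique fit, which is exactly the behavior a multi-source algorithm buys over a critique-only one.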

    A Cognitively Inspired Clustering Approach for Critique-Based Recommenders

    Full text link
    The purpose of recommender systems is to support humans in the purchasing decision-making process. Decision-making is a human activity based on cognitive information. In the field of recommender systems, critiquing has been widely applied as an effective approach for obtaining users' feedback on recommended products. In the last decade, there have been a large number of proposals in the field of critique-based recommenders. These proposals differ mainly in two aspects: the source of data, and how it is mined to provide the user with recommendations. To date, no approach has mined data using an adaptive clustering algorithm to increase the recommender's performance. In this paper, we describe how we added a clustering process to a critique-based recommender, thereby adapting the recommendation process, and how we defined a cognitive user preference model based on the preferences (i.e., defined by critiques) received from the user. We have developed several proposals based on clustering, whose acronyms are MCP, CUM, CUM-I, and HGR-CUM-I. We compare our proposals with two well-known state-of-the-art approaches: incremental critiquing (IC) and history-guided recommendation (HGR). The results of our experiments show that using clustering in a critique-based recommender improves recommendation efficiency, since all the proposals outperform the baseline IC algorithm. Moreover, the performance of the best proposal, HGR-CUM-I, is significantly superior to both the IC and HGR algorithms. Our results indicate that introducing clustering into a critique-based recommender is an appealing option, since it enhances overall efficiency, especially with a large data set.
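One way to picture "clustering in a critique-based recommender" is to partition the product space and steer recommendations toward the cluster implied by the items the user's critiques have favored. The two hand-picked centroids, the attribute vectors, and the nearest-centroid rule below are toy assumptions, not the MCP/CUM algorithms themselves.

```python
# Illustrative sketch: assign products to clusters, infer the user's
# preferred cluster from critiqued items, recommend within it.
# Centroids and data are toy assumptions, not the papers' algorithms.

def assign(point, centroids):
    """Index of the nearest centroid (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda c: sum((p - q) ** 2
                                 for p, q in zip(point, centroids[c])))

# Items as (price, screen-size) vectors and two hand-picked centroids.
items = {"A": (1200, 15), "B": (900, 13), "C": (700, 14), "D": (1500, 17)}
centroids = [(800, 13.5), (1400, 16)]      # "budget" vs. "premium" cluster

# Critiques so far favored cheaper items -> infer the preference cluster.
liked = [(900, 13), (700, 14)]
preferred = assign(liked[0], centroids)

recommend = [n for n, v in items.items() if assign(v, centroids) == preferred]
print(recommend)                           # -> ['B', 'C']
```

An adaptive variant would re-fit the centroids as critiques accumulate (e.g., with k-means) instead of fixing them up front; the fixed centroids here keep the sketch self-contained.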

    Evaluating Conversational Recommender Systems: A Landscape of Research

    Full text link
    Conversational recommender systems aim to interactively support online users in their information search and decision-making processes in an intuitive way. With the latest advances in voice-controlled devices, natural language processing, and AI in general, such systems have received increased attention in recent years. Technically, conversational recommenders are usually complex multi-component applications, often consisting of multiple machine learning models and a natural language user interface. Evaluating such a complex system in a holistic way can therefore be challenging, as it requires (i) assessing the quality of the different learning components and (ii) capturing users' quality perception of the system as a whole. Thus, a mixed-methods approach is often required, combining objective (computational) and subjective (perception-oriented) evaluation techniques. In this paper, we review common evaluation approaches for conversational recommender systems, identify possible limitations, and outline future directions towards more holistic evaluation practices.

    A Visual Interface for Critiquing-based Recommender Systems

    Get PDF
    Critiquing-based recommender systems provide an efficient way for users to navigate complex product spaces in e-commerce environments, even if they are not familiar with the domain details. While recent research has mainly concentrated on methods for generating high-quality compound critiques, to date there has been no comprehensive investigation of interface design issues. Traditionally the interface is textual: it shows compound critiques in plain text, which may not be easily understood. In this paper we propose a new visual interface that represents critiques with a set of meaningful icons. Results from our real-user evaluation show that the visual interface can improve the performance of critiquing-based recommenders by encouraging users to apply compound critiques more frequently and by substantially reducing users' interaction effort when the product domain is complex. Users' subjective feedback also shows that the visual interface is highly promising for enhancing users' shopping experience.

    Evaluating product search and recommender systems for E-commerce environments

    Get PDF
    Online systems that help users select the most preferred item from a large electronic catalog are known as product search and recommender systems. Evaluating the various proposed technologies is essential for further development in this area. This paper describes the design and implementation of two user studies in which a particular product search tool, known as example critiquing, was evaluated against a chosen baseline. The results confirm that example critiquing significantly reduces users' task time and error rate while increasing decision accuracy. Additionally, the second user study shows that a particular implementation of example critiquing also made users more confident about their choices. The main contribution is that, through these two user studies, an evaluation framework of three criteria was identified that can be used to evaluate general product search and recommender systems in e-commerce environments. The two experiments and their procedures also shed light on some of the most important issues to consider when evaluating such tools, such as the preparation of evaluation materials, user task design, the context of evaluation, the criteria, the measures, and the methodology of result analysis.

    Critiquing Recommenders for Public Taste Products

    Get PDF
    Critiquing-based recommenders do not require users to state all of their preferences upfront or to rate a set of previously experienced products. Compared to other types of recommenders, they demand relatively little user effort, especially initially, despite potential accuracy problems. On the other hand, they rely on a set of critiques to elicit users' feedback and improve accuracy. Thus, the better the critiques, the more accurate and efficient the system becomes at generating recommendations. This method has been successfully applied to high-involvement products. However, it had never been tested on public-taste products such as music, films, perfumes, fashion goods, or wine. Indeed, our initial trial adapting traditional critiquing methods to this new domain led to unsatisfactory results. This motivated us to develop a novel approach named "editorial picked critiques" (EPC), which accounts for users' needs for popularity information and editorial suggestions, as well as their needs for personalization and diversity. Through an empirical study, we demonstrate that EPC is a viable recommender approach and is superior on several dimensions to critiques generated by data mining methods.

    User decision improvement and trust building in product recommender systems

    Get PDF
    As online stores offer almost unlimited shelf space, users must increasingly rely on product search and recommender systems to find their most preferred products and decide which item is truly the best one to buy. However, much research has emphasized developing and improving the underlying algorithms, whereas many user issues, such as preference elicitation and trust formation, have received little attention. In this thesis, we aim at designing and evaluating various decision technologies, with emphasis on how to improve users' decision accuracy with intelligent preference elicitation and revision tools, and how to build their competence-inspired subjective constructs via trustworthy recommender interfaces. Specifically, two primary technologies are proposed: one, called example critiquing agents, aims to stimulate users to conduct tradeoff navigation and freely specify feedback criteria for example products; the other, termed preference-based organization interfaces, is designed to play two roles: explaining to users why and how the recommendations are computed and displayed, and offering critique suggestions that guide users to understand existing tradeoff potentials and to navigate from the top candidate toward better choices. To evaluate the two technologies' true performance and benefits for real users, an evaluation framework was first established that includes important assessment standards such as objective/subjective accuracy-effort measures and trust-related subjective aspects (e.g., competence perceptions and behavioral intentions). Based on this evaluation framework, a series of nine experiments was conducted, most of them with real users.
Three user studies focused on the example critiquing (EC) agent. They first identified the significant impact of the EC-supported tradeoff process on improving users' decision accuracy, and then explored in depth the advantage of a multi-item strategy (for critiquing coverage) over single-item display, as well as the higher user-control level offered by EC in supporting users to freely compose critiquing criteria for both simple and complex tradeoffs. Another three experiments studied the preference-based organization technique. Regarding its explanation role, a carefully conducted user survey and a significant-scale quantitative evaluation both demonstrated that it is likely to increase users' competence perception and return intention, and to reduce their cognitive effort in information searching, relative to the traditional "why" explanation method in ranked list views. In addition, a retrospective simulation revealed its superior algorithmic accuracy in predicting the critiques and product choices that real users intended to make, in comparison with other typical critique generation approaches. Motivated by the empirical findings on the two technologies' respective strengths, a hybrid system was developed to combine them into a single application. The final three experiments evaluated its two design versions and, in particular, validated the hybrid system's effectiveness among people from different cultural backgrounds: oriental culture and western culture. In the end, a set of design guidelines is derived from all of the experimental results. These guidelines should be helpful for developing a preference-based recommender system that practically benefits its users by improving decision accuracy, requiring no more effort than they are willing to invest, and even promoting trust in the system, with resulting behavioral intentions to purchase chosen products and return to the system for repeated use.