
    Interaction design guidelines on critiquing-based recommender systems

    A critiquing-based recommender system acts like an artificial salesperson. It engages users in a conversational dialog in which users can provide feedback, in the form of critiques, on the sample items shown to them. The feedback, in turn, enables the system to refine its understanding of the user's preferences and its prediction of what the user truly wants. The system is then able to recommend products that may better stimulate the user's interest in the next interaction cycle. In this paper, we report our extensive investigation comparing various approaches to devising the critiquing opportunities designed into these recommender systems. More specifically, we have investigated two major design elements that are necessary for a critiquing-based recommender system: critiquing coverage (one vs. multiple items returned for critiquing during each recommendation cycle) and critiquing aid (system-suggested critiques, i.e., a set of critique suggestions for users to select, vs. a user-initiated critiquing facility that lets users create critiques on their own). Through a series of three user trials, we measured how real users reacted to systems with varied setups of the two elements. In particular, we found that giving users the choice of critiquing one of multiple items (as opposed to just one) has significantly positive impacts on increasing users' decision accuracy (particularly in the first recommendation cycle) and saving their objective effort (in the later critiquing cycles). As for critiquing aids, the hybrid design combining system-suggested critiques with user-initiated critiquing support exhibits the best performance in inspiring users' decision confidence and increasing their intention to return, in comparison with the uncombined exclusive approaches. Therefore, the results from our studies shed light on design guidelines for finding the sweet spot that balances user initiative and system support in the development of an effective and user-centric critiquing-based recommender system.
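
    The cycle described above (show items, take a critique, refine the preference model, recommend again) can be pictured with a minimal sketch. The toy catalog, the weighted-distance preference model, and names such as apply_critique and recommend are assumptions for illustration only, not the systems studied in the paper.

        # Minimal sketch of one critiquing cycle (illustrative names and model,
        # not the authors' implementation). A critique on an attribute updates a
        # simple weighted preference model, which re-ranks the catalog.

        CATALOG = [
            {"id": "cam1", "price": 399, "zoom": 5, "weight_g": 450},
            {"id": "cam2", "price": 549, "zoom": 10, "weight_g": 600},
            {"id": "cam3", "price": 299, "zoom": 3, "weight_g": 380},
        ]

        def score(item, prefs):
            # prefs maps attribute -> (target value, importance); smaller distance is better
            return -sum(w * abs(item[a] - t) for a, (t, w) in prefs.items())

        def apply_critique(prefs, attribute, direction, current_item, step=0.2):
            # e.g. direction=-1 means "cheaper than the item just shown"
            _, weight = prefs.get(attribute, (current_item[attribute], 1.0))
            new_target = current_item[attribute] * (1 + step * direction)
            prefs[attribute] = (new_target, weight + 0.5)  # critiqued attributes gain importance
            return prefs

        def recommend(prefs, k=1):
            return sorted(CATALOG, key=lambda it: score(it, prefs), reverse=True)[:k]

        prefs = {"price": (400, 1.0)}
        shown = recommend(prefs)[0]                        # cycle 1
        prefs = apply_critique(prefs, "price", -1, shown)  # user: "cheaper"
        print(recommend(prefs))                            # cycle 2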

    Evaluating product search and recommender systems for E-commerce environments

    Online systems that help users select the most preferred item from a large electronic catalog are known as product search and recommender systems. Evaluation of the various proposed technologies is essential for further development in this area. This paper describes the design and implementation of two user studies in which a particular product search tool, known as example critiquing, was evaluated against a chosen baseline model. The results confirm that example critiquing significantly reduces users' task time and error rate while increasing decision accuracy. Additionally, the results of the second user study show that a particular implementation of example critiquing also made users more confident about their choices. The main contribution is that through these two user studies, an evaluation framework of three criteria was identified, which can be used for evaluating general product search and recommender systems in E-commerce environments. The two experiments and their procedures also shed light on some of the most important issues that need to be considered when evaluating such tools, such as the preparation of evaluation materials, user task design, the context of evaluation, the criteria, the measures, and the methodology of result analysis.
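
    As a rough illustration of how the user-centric measures mentioned here (task time, decision accuracy, confidence) might be aggregated from study logs, consider the sketch below; the session format and field names are assumptions for the example, not the paper's evaluation framework.

        # Hedged sketch: summarizing user-study measures from logged sessions.
        # Field names and the session structure are illustrative assumptions.

        from statistics import mean

        sessions = [
            {"task_time_s": 310, "chose_best_item": True,  "confidence_1to5": 4},
            {"task_time_s": 475, "chose_best_item": False, "confidence_1to5": 3},
            {"task_time_s": 265, "chose_best_item": True,  "confidence_1to5": 5},
        ]

        def summarize(sessions):
            return {
                "mean_task_time_s": mean(s["task_time_s"] for s in sessions),
                "decision_accuracy": mean(1.0 if s["chose_best_item"] else 0.0 for s in sessions),
                "mean_confidence": mean(s["confidence_1to5"] for s in sessions),
            }

        print(summarize(sessions))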

    A Cognitively Inspired Clustering Approach for Critique-Based Recommenders

    The purpose of recommender systems is to support humans in the purchasing decision-making process. Decision-making is a human activity based on cognitive information. In the field of recommender systems, critiquing has been widely applied as an effective approach for obtaining users' feedback on recommended products. In the last decade, there have been a large number of proposals in the field of critique-based recommenders. These proposals mainly differ in two aspects: the source of the data and how it is mined to provide the user with recommendations. To date, no approach has mined data using an adaptive clustering algorithm to increase the recommender's performance. In this paper, we describe how we added a clustering process to a critique-based recommender, thereby adapting the recommendation process, and how we defined a cognitive user preference model based on the preferences (i.e., critiques) received from the user. We have developed several proposals based on clustering, whose acronyms are MCP, CUM, CUM-I, and HGR-CUM-I. We compare our proposals with two well-known state-of-the-art approaches: incremental critiquing (IC) and history-guided recommendation (HGR). The results of our experiments show that using clustering in a critique-based recommender improves recommendation efficiency, since all of the proposals outperform the baseline IC algorithm. Moreover, the performance of the best proposal, HGR-CUM-I, is significantly superior to both the IC and HGR algorithms. Our results indicate that introducing clustering into a critique-based recommender is an appealing option, since it enhances overall efficiency, especially with a large data set.
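
    The general idea of clustering past critiquing behavior to guide new recommendations can be sketched as follows. This is an illustration under assumed data (per-attribute critique-direction vectors and a tiny hand-rolled k-means), not the MCP, CUM, CUM-I, or HGR-CUM-I algorithms themselves.

        # Hedged sketch: cluster past critiquing sessions and bias the next
        # recommendation toward the cluster closest to the current user's critiques.

        import random

        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        def mean_point(points):
            return [sum(xs) / len(points) for xs in zip(*points)]

        def kmeans(points, k=2, iters=10):
            random.seed(0)
            centroids = random.sample(points, k)
            for _ in range(iters):
                clusters = [[] for _ in range(k)]
                for p in points:
                    i = min(range(k), key=lambda c: dist(p, centroids[c]))
                    clusters[i].append(p)
                centroids = [mean_point(c) if c else centroids[i] for i, c in enumerate(clusters)]
            return centroids

        # Each vector: (price direction, zoom direction) aggregated over one past session.
        past_sessions = [(-1.0, 0.0), (-0.8, 0.2), (1.0, 0.9), (0.7, 1.0)]
        centroids = kmeans(past_sessions, k=2)

        current_user = (-0.9, 0.1)  # the current user's critiques so far
        closest = min(centroids, key=lambda c: dist(current_user, c))
        print("bias recommendations toward cluster centroid:", closest)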

    Data-driven decision making in Critique-based recommenders: from a critique to social media data

    In the last decade there have been a large number of proposals in the field of critique-based recommenders. Critique-based recommenders are data-driven in nature, since they use a conversational, cyclical recommendation process to elicit user feedback. In the literature, the proposals made differ mainly in two aspects: the source of the data and how this data is analyzed to extract knowledge for providing users with recommendations. In this paper, we propose new algorithms that address these two aspects. Firstly, we propose a new algorithm, called HOR, which integrates several data sources, such as current user preferences (i.e., a critique), product descriptions, previous critiquing sessions by other users, and users' opinions expressed as ratings on social media web sites. Secondly, we propose adding compatibility and weighting scores, which turn user behavior into knowledge, to HOR and to a previous state-of-the-art approach named HGR, to help both algorithms make smarter recommendations. We have evaluated our proposals in two ways: with a simulator and with real users. A comparison of our proposals with state-of-the-art approaches shows that the new recommendation algorithms significantly outperform previous ones.
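
    One way to picture the integration of several data sources with weighting scores is a simple weighted combination over normalized evidence. The weights, source names, and values below are assumptions for illustration and do not reproduce the HOR scoring defined in the paper.

        # Hedged sketch: each candidate item receives a weighted combination of
        # several normalized evidence sources (illustrative numbers only).

        WEIGHTS = {
            "critique_compatibility": 0.4,  # satisfies the user's current critique?
            "description_similarity": 0.3,  # similar to the currently liked item?
            "past_session_support":   0.2,  # chosen in similar past critiquing sessions?
            "social_rating":          0.1,  # normalized rating from a social media source
        }

        candidates = {
            "itemA": {"critique_compatibility": 1.0, "description_similarity": 0.7,
                      "past_session_support": 0.4, "social_rating": 0.9},
            "itemB": {"critique_compatibility": 0.0, "description_similarity": 0.9,
                      "past_session_support": 0.8, "social_rating": 0.6},
        }

        def combined_score(evidence):
            return sum(WEIGHTS[source] * value for source, value in evidence.items())

        ranked = sorted(candidates, key=lambda item: combined_score(candidates[item]), reverse=True)
        print(ranked)  # itemA first: 0.78 vs. itemB at 0.49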

    Preference-based Search using Example-Critiquing with Suggestions

    We consider interactive tools that help users search for their most preferred item in a large collection of options. In particular, we examine example-critiquing, a technique for enabling users to incrementally construct preference models by critiquing example options that are presented to them. We present novel techniques for improving example-critiquing by adding suggestions to its displayed options. Such suggestions are calculated based on an analysis of users' current preference models and their potential hidden preferences. We evaluate the performance of our model-based suggestion techniques with both synthetic and real users. Results show that such suggestions are highly attractive to users and can stimulate them to express more preferences, improving the chance of identifying their most preferred item by up to 78%.
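
    A toy sketch of the display strategy follows: the best matches under the stated preferences are shown together with a suggestion that differs strongly on attributes the user has not yet constrained, so that a hidden preference could make it attractive. It illustrates the intuition only and is not the paper's model-based suggestion method; the catalog and scoring are assumed.

        # Hedged sketch: best matches under a partial preference model, plus one
        # suggestion chosen for its spread on unconstrained attributes.

        CATALOG = [
            {"id": "a", "price": 300, "battery_h": 4,  "weight_g": 900},
            {"id": "b", "price": 320, "battery_h": 10, "weight_g": 1400},
            {"id": "c", "price": 700, "battery_h": 12, "weight_g": 1100},
            {"id": "d", "price": 310, "battery_h": 5,  "weight_g": 950},
        ]

        stated_prefs = {"price": 300}           # the user has only constrained price so far
        hidden_attrs = ["battery_h", "weight_g"]

        def match_score(item):
            return -sum(abs(item[a] - v) for a, v in stated_prefs.items())

        best = sorted(CATALOG, key=match_score, reverse=True)[:2]

        def suggestion_value(item):
            # favor items that stay reasonable on stated preferences but differ
            # strongly on attributes the user has not constrained yet
            spread = sum(abs(item[a] - best[0][a]) for a in hidden_attrs)
            return match_score(item) * 0.01 + spread

        suggestions = sorted((i for i in CATALOG if i not in best),
                             key=suggestion_value, reverse=True)[:1]
        print([i["id"] for i in best], [i["id"] for i in suggestions])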

    Evaluating recommender systems from the user's perspective: survey of the state of the art

    A recommender system is a Web technology that proactively suggests items of interest to users based on their objective behavior or explicitly stated preferences. Evaluations of recommender systems (RS) have traditionally focused on the performance of algorithms. However, many researchers have recently started investigating system effectiveness and evaluation criteria from the user's perspective. In this paper, we survey the state of the art of user experience research in RS by examining how researchers have evaluated design methods that augment an RS's ability to help users find the information or product that they truly prefer, interact with ease with the system, and form trust with the RS through system transparency, control, and privacy-preserving mechanisms. Finally, we examine how these system design features influence users' adoption of the technology. We summarize existing work concerning three crucial interaction activities between the user and the system: the initial preference elicitation process, the preference refinement process, and the presentation of the system's recommendation results. Additionally, we cover recent evaluation frameworks that measure a recommender system's overall perceived qualities and how these qualities influence users' behavioral intentions. The key results are summarized in a set of design guidelines that can provide useful suggestions to scholars and practitioners concerning the design and development of effective recommender systems. The survey also lays the groundwork for researchers to pursue future topics that have not been covered by existing methods.

    Critiquing: Effective Decision Support in Time-Critical Domains (Dissertation Proposal)

    The effective communication of information is an important concern in the design of an expert consultation system. Several researchers have chosen to adopt a critiquing mode, in which the system evaluates and reacts to a solution proposed by the user rather than presenting its own solution. In this proposal, I present an architecture for a critiquing system that functions in real time, during the process of developing and executing a management plan in time-critical situations. The architecture is able to take into account and reason about multiple, interacting goals and to identify critical errors in the proposed management plan. This architecture is being implemented as part of the TraumAID system for the management of patients with severe injuries.
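
    The critiquing mode described here can be pictured with a toy plan checker that flags missing or mis-ordered actions relative to the active goals. The goals, actions, and ordering rules below are hypothetical and do not represent the TraumAID architecture.

        # Hedged sketch: critique a user-proposed plan against active goals,
        # flagging critical omissions and ordering problems (toy example).

        ACTIVE_GOALS = {
            "secure_airway":   {"requires": ["intubate"],       "critical": True},
            "control_bleed":   {"requires": ["apply_pressure"], "critical": True},
            "assess_fracture": {"requires": ["order_xray"],     "critical": False},
        }
        MUST_PRECEDE = [("intubate", "order_xray")]  # airway management before imaging

        def critique_plan(proposed_plan):
            critiques = []
            for goal, info in ACTIVE_GOALS.items():
                missing = [a for a in info["requires"] if a not in proposed_plan]
                if missing:
                    level = "CRITICAL" if info["critical"] else "suggestion"
                    critiques.append(f"{level}: goal '{goal}' needs {missing}")
            for earlier, later in MUST_PRECEDE:
                if (earlier in proposed_plan and later in proposed_plan
                        and proposed_plan.index(earlier) > proposed_plan.index(later)):
                    critiques.append(f"CRITICAL: '{earlier}' should precede '{later}'")
            return critiques

        print(critique_plan(["order_xray", "intubate"]))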